This is to certify that the thesis entitled

Real-Time Image Processing of Magneto-Optic Images for the Magneto-Optic/Eddy Current Imager (MOI)

presented by Jason Stashonsky Slade has been accepted towards fulfillment of the requirements for the M.S. degree in Electrical Engineering.

Major Professor's Signature
Date: 5/12/03

REAL-TIME IMAGE PROCESSING OF MAGNETO-OPTIC IMAGES FOR THE MAGNETO-OPTIC/EDDY CURRENT IMAGER (MOI)

By
Jason Stashonsky Slade

A THESIS
Submitted to Michigan State University in partial fulfillment of the requirements for the degree of
MASTER OF SCIENCE
Department of Electrical and Computer Engineering
2003

ABSTRACT

REAL-TIME IMAGE PROCESSING OF MAGNETO-OPTIC IMAGES FOR THE MAGNETO-OPTIC/EDDY CURRENT IMAGER (MOI)

By Jason Stashonsky Slade

The Magneto-Optic/Eddy Current Imager (MOI) is a nondestructive evaluation technique developed as an alternative to eddy current probes for the detection of cracks and corrosion on aircraft surfaces. The technique combines eddy current excitation with magneto-optic imaging to yield real-time images of the local magnetic fields associated with the induced currents. The magneto-optic images produced by the MOI, while sufficient, contain noise that degrades the overall image quality. The noise appears as snake-like lines and is caused by the magnetic domain boundaries in the MOI's garnet sensor.
The primary goal of this thesis is to develop real-time filtering algorithms to remove the unwanted serpentine noise present in the magneto-optic images. The benefits of removing this noise include the detection of smaller defects and an increased probability of detection (POD) of a flaw. The image enhancement algorithms must satisfy several requirements, including noise removal, edge preservation, low cost, and high processing speed. As the inspections are performed on-line, near real-time processing is a necessary condition to ensure that the inspector sees the filtered image instantaneously.

Dedicated to my parents, David Slade and Sherry Stashonsky. Thank you for your love, support, and for instilling me with a desire to learn.

ACKNOWLEDGEMENTS

I would like to thank Dr. Lalita Udpa for the research opportunity she gave me in the Nondestructive Evaluation Laboratory and for her continued support during the development of this thesis. She has been an excellent advisor and mentor, and the knowledge I have gained while working with her has been invaluable. I would also like to thank Dr. Satish Udpa and Dr. Pradeep Ramuhalli for the guidance they have given me during my graduate career at Michigan State University. I would like to thank fellow NDE laboratory and MOI project team members, Unsang Park and Yuan Fan, for their assistance on the project. I would also like to express my gratitude to Dr. William C. L. Shih and Gerald Fitzpatrick of PRI, Inc. for supporting my research. This thesis was supported by the Federal Aviation Administration under award number DTFAOBOIFIAOSO.

TABLE OF CONTENTS

LIST OF TABLES ........................................ vii
LIST OF FIGURES .......................................
viii

CHAPTER 1: INTRODUCTION
1.1 Nondestructive Evaluation Techniques ........ 1
1.2 Visual Inspection ........ 1
1.3 Ultrasonic ........ 2
1.4 Eddy Current Testing ........ 3
1.5 Magneto-Optic/Eddy Current Imager (MOI) ........ 6
1.6 Problem Statement ........ 7
1.7 Thesis Organization ........ 7

CHAPTER 2: THEORY AND FUNCTIONALITY OF THE MAGNETO-OPTIC/EDDY CURRENT IMAGER (MOI)
2.1 Introduction ........ 9
2.2 Excitation of Linear Eddy Currents ........ 9
2.3 Excitation of Rotating Eddy Currents ........ 11
2.4 The Faraday Magneto-Optic Effect ........ 12
2.5 Magneto-Optic Sensors ........ 13
2.6 Domain Walls ........ 14
2.7 Image Formation ........ 15
2.8 Principles and Operation of the MOI System ........ 17
2.9 Conclusion ........ 19

CHAPTER 3: IMAGE PROCESSING TECHNIQUES FOR MAGNETO-OPTIC IMAGES
3.1 Introduction ........
21
3.2.1 Moving-Average Filtering ........ 23
3.2.2 Geometric Mean Filter ........ 25
3.2.3 Harmonic Mean Filter ........ 25
3.2.4 Median Filter ........ 26
3.2.6 Results of Spatial Filtering ........ 26
3.3 Multistage Filtering Algorithm ........ 32
3.4.1 Contrast Enhancement using a Nonlinear Gray Level Mapping Algorithm (NLM) ........ 35
3.4.2 Comparison with Histogram Equalization ........ 35
3.5.1 Threshold-Averaging Filter ........ 39
3.5.2 Results of Implementing Threshold-Averaging Filter ........ 41
3.6.1 Post-Processing Stages ........ 45
3.6.2 Results From Threshold-Averaging Filter and Post-Processing Stages ........ 47
3.7.1 Border Removal ........ 51
3.7.2 Results of the Border Removal Stage ........ 54
3.8 Conclusion ........ 56

CHAPTER 4: IMAGE QUALITY METRICS
4.1 Introduction ........ 58
4.2 Subjective Fidelity Criteria ........
59
4.3 Noise Removal Image Metric (SNR) ........ 60
4.4 Edge Sharpness Metric ........ 65
4.5 Conclusion ........ 68

CHAPTER 5: HARDWARE AND SOFTWARE IMPLEMENTATION
5.1 Introduction ........ 69
5.2 Algorithm Development and Testing System ........ 69
5.3.1 Proof-of-Concept System ........ 71
5.3.2 Proof-of-Concept Application Development ........ 75
5.4 Prototype System Requirements ........ 77
5.5.1 Field Programmable Gate Array (FPGA)-Based Prototype System ........ 79
5.5.2 FPGA-Based Prototype Development Boards ........ 82
5.6.1 Digital Signal Processor (DSP)-Based Prototype System ........ 85
5.6.2 DSP-Based Prototype Development Boards ........ 86
5.7 DSP-Based Development Board Prototype System ........ 87
5.8.1 DSP and FPGA Comparisons ........ 88
5.8.2 Speed ........ 88
5.8.3 Development Time and Programmability ........ 89
5.8.4 Price ........ 90
5.9 Conclusion ........
91

CHAPTER 6: CONCLUSION AND POSSIBLE FUTURE WORK
6.1 Conclusion ........ 92
6.2 Possible Future Work ........ 95

BIBLIOGRAPHY ........ 96

LIST OF TABLES

Table 4.1: Subjective Fidelity Criteria Scale ........ 59
Table 4.2: SNR metric values for Threshold-Averaging and Moving-Average filters ........ 63
Table 4.3: Edge width values for the various filtering stages ........ 67

LIST OF FIGURES

Figure 1.1: (a) Sample material undergoing ultrasonic testing (b) Resulting oscilloscope display of the reflected waves [3] ........ 2
Figure 1.2: Eddy current induction in a test material [3] ........ 4
Figure 1.3: Impedance-plane plot of a coil over a conducting nonferromagnetic test sample [36] ........ 5
Figure 2.1: (a) Eddy current induction on a defect-free region (b) Eddy current induction and corresponding B field components for sample with a rivet ........ 11
Figure 2.2: Transmission mode geometry of the Faraday magneto-optic effect [12] ........ 13
Figure 2.3: Expansion of the domains in the presence of a magnetic field [4] ........ 15
Figure 2.4: Imaging of a single rivet using eddy current induction [12] ........
15
Figure 2.5: Relatively clean MOI image showing four rivets and domain structures ........ 16
Figure 2.6: Photograph of the PRI MOI instrument including handheld sensor (left), excitation unit (center), and wearable monitor (right) ........ 17
Figure 2.7: Basic schematic of the MOI instrument [6] ........ 18
Figure 3.1: Typical magneto-optic image. The rivet on the left is defective with a crack located on its left side. The rivet on the right contains no abnormalities ........ 22
Figure 3.2: Filtering mask for a 3 x 3 moving-average filter with equal weight coefficients ........ 24
Figure 3.3: (a) Raw magneto-optic image before filtering (b) Image after moving-average filtering using a 7 x 7 window ........ 27
Figure 3.4: (a) Raw magneto-optic image before filtering (note that the domain structures in this sample are minimal) (b) Image after moving-average filtering using a 7 x 7 window ........ 28
Figure 3.5: (a) Raw magneto-optic image before filtering (b) Image after moving-average filtering using a 7 x 7 window ........ 29
Figure 3.6: (a) Raw magneto-optic image (b) Resulting image after geometric mean filtering ........ 29
Figure 3.7: Resulting images after (a) harmonic mean filtering (b) median filtering ........ 30
Figure 3.8: (a) Raw magneto-optic image (b) Resulting image after geometric mean filtering ........
30
Figure 3.9: Resulting images after (a) harmonic mean filtering (b) median filtering ........ 31
Figure 3.10: (a) Raw magneto-optic image (b) Resulting image after geometric mean filtering ........ 31
Figure 3.11: Resulting images after (a) harmonic mean filtering (b) median filtering ........ 32
Figure 3.12: Overview of the stages comprising the Multistage Filtering Algorithm ........ 34
Figure 3.13: Plot of Tanh(.) transformation and histogram equalization ........ 37
Figure 3.14: Typical magneto-optic image. The black square identifies a strictly background region ........ 38
Figure 3.15: (a) Image after histogram equalization and thresholding (b) Image after Tanh(.) function and thresholding ........ 38
Figure 3.16: (a) Illustration of the Threshold-Averaging Filter with a 3 x 3 window on a synthesized image (b) Close-up of the 3 x 3 window ........ 40
Figure 3.17: (a) Raw magneto-optic image before filtering (b) Image after NLM and Threshold-Averaging Filtering ........ 42
Figure 3.18: (a) Raw magneto-optic image before filtering (b) Image after NLM and Threshold-Averaging Filter (c) Raw magneto-optic image before filtering (d) Corresponding image after NLM and Threshold-Averaging Filter ........ 43
Figure 3.19: Threshold-averaging mask illustrating the cause of isolated high frequency pixels. White blocks indicate pixels satisfying equation 3-6 while gray blocks do not ........
44
Figure 3.20: Resulting images with the threshold values of: (a) 0.15 (b) 0.20 (c) 0.25 (d) 0.30 (e) 0.35 and (f) 0.40 ........ 45
Figure 3.21: (a) Raw magneto-optic image (b) Output after Threshold-Averaging Stage ........ 48
Figure 3.22: (a) Morphological top-hat filtering mask (b) Output after Morphological Top-Hat Filtering Operation ........ 48
Figure 3.23: Final result after Threshold-Averaging Filter with Edge Preservation and Post-Processing stages ........ 49
Figure 3.24: (a) and (c) are the raw magneto-optic images (b) and (d) are the respective filtered images after Threshold-Averaging Filter with Edge Preservation and Post-Processing Stages ........ 50
Figure 3.25: (a) Raw magneto-optic image (b) Filtered results after Threshold-Averaging Filter with Edge Preservation and Post-Processing ........ 51
Figure 3.26: The Edge Removal Algorithm beginning with conditional dilation and ending with an image without the border region [15] ........ 53
Figure 3.27: (a) Raw magneto-optic image (b) Image after NLM and Threshold-Averaging ........ 54
Figure 3.28: (a) Image after segmentation (b) the conditional dilated image used for border removal ........ 55
Figure 3.29: Final filtered image after Border Removal ........ 55
Figure 3.30: (a) Raw magneto-optic image (b) Image after border removal stage ........ 56
Figure 4.1: (a) Raw magneto-optic image (b) Resulting image after Threshold-Averaging and post-processing stages ........
62
Figure 4.2: (a) and (c) Raw magneto-optic image (b) and (d) Resulting image after NLM ........ 64
Figure 4.3: (a) Synthetic image consisting of three objects (b) Plot of finite difference approximation of the first order derivative ........ 65
Figure 4.4: (a) Ramp edges with increasing slope (b) corresponding plot of finite difference approximation of the first order derivative ........ 66
Figure 4.5: Synthetic magneto-optic image with a single rivet with the edges used for determining the average ........ 67
Figure 5.1: Program flow using the PC and Matlab work environment for filtering algorithm development ........ 70
Figure 5.2: Flow chart of the Proof-of-Concept System ........ 73
Figure 5.3: Photograph of the Proof-of-Concept System ........ 74
Figure 5.4: Proof-of-Concept interface with raw image in the top window and the corresponding moving-average filtered image in the bottom window ........ 75
Figure 5.5: Overview of prototype insertion into existing MOI system ........ 78
Figure 5.6: General FPGA architecture [38] ........ 80
Figure 5.7: Overview of the prototype system flow using an FPGA ........ 82
Figure 5.8: Architectural layout of a single quadrant of the Spartan IIE FPGA [27] ........ 83
Figure 5.9: Block layout of an application-specific specialized board for the prototype system ........
84
Figure 5.10: Overview of the prototype system flow using a DSP ........ 86

Chapter 1: Introduction

1.1 Nondestructive Evaluation Techniques

Nondestructive evaluation (NDE) provides a noninvasive technique for analyzing the integrity of materials. NDE technologies can determine the location, size, shape, and type of a detected abnormality or defect in a material [1]. This is vital because such anomalies, if left unmonitored, can cause catastrophic levels of damage and destruction, and nowhere is this truer than in the aviation industry. With older fleets becoming the norm, the aviation industry must rely heavily on the results from NDE to ensure the safety of the crew and passengers. A number of NDE techniques are used in the field for abnormality and defect detection, including:

• Visual Inspection
• Ultrasonic
• Eddy Current Testing

These are three of the most widely used NDE techniques, and a brief description of each modality is given in the following sections.

1.2 Visual Inspection

Visual inspection is the oldest and most basic form of nondestructive evaluation. It is used in all industries as a first step in evaluating the integrity of a material. The inspector studies the material looking for signs of degradation. Visual inspections may be aided by the use of microscopes, CCD cameras, and endoscopes for closer internal inspections [1]. Visual inspections, while sometimes crude, allow for a quick inspection of crucial materials and components. Visual inspection is limited to surface layers and cannot be used to detect subsurface defects or abnormalities. This constraint led to the development of more sophisticated inspection techniques employing various sources of energy, such as acoustic, electromagnetic, and x-rays.

1.3 Ultrasonic

The ultrasonic technique consists of the transmission and reflection of a high-frequency ultrasonic pulse [2].
The pulse travels through the test material until it encounters a discontinuity or defect and is reflected back to the receiver. The propagation velocity and the time interval between the initial pulse and the echo, referred to as the time of flight (TOF), can be used to calculate the thickness of the test material or map the location of defects in the material [3]. Figure 1.1 illustrates the basic concepts of the ultrasonic technique.

Figure 1.1: (a) Sample material undergoing ultrasonic testing (b) Resulting oscilloscope display of the reflected waves [3]

The middle reflection seen in Figure 1.1(b) is from the defect. Using the propagation speed and the TOF displayed on the oscilloscope, the location of the defect can be calculated, allowing for the repair or replacement of faulty materials. The second, larger reflection is from the bottom surface of the test material, which allows the thickness of the material to be determined. Ultrasonic testing is used to detect internal defects and is widely used in many industries for inspecting materials including metal, rubber, plastics, and welds or joints connecting two materials. However, a major drawback of this technique is the need for a coupling medium to couple the ultrasonic energy into the material. The test sample is either immersed in water or a coupling gel is used between the sample and transducer. In contrast, eddy current nondestructive testing (NDT) is a non-contact inspection technique and is described below.

1.4 Eddy Current Testing

Eddy current testing is one of the most widely used forms of nondestructive evaluation. It is used on electrically conducting materials and can detect cracks and corrosion in multi-layer airframes, subsurface abnormalities, and other variations in the test material.
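The time-of-flight relation described in Section 1.3 can be sketched numerically: a reflector's depth is d = v·t/2, since the pulse travels down to the reflector and back. The function name, velocity, and echo times below are illustrative assumptions, not measurements from this thesis.

```python
# Hypothetical illustration of the ultrasonic time-of-flight (TOF) relation:
# depth = velocity * TOF / 2, because the pulse makes a round trip.

def depth_from_tof(velocity_m_s: float, tof_s: float) -> float:
    """Depth of a reflector given propagation velocity and round-trip TOF."""
    return velocity_m_s * tof_s / 2.0

# Longitudinal wave speed in aluminum is roughly 6320 m/s (textbook value).
v_aluminum = 6320.0
defect_echo_tof = 2.0e-6      # assumed 2 µs echo from an internal defect
backwall_echo_tof = 4.0e-6    # assumed 4 µs echo from the bottom surface

defect_depth = depth_from_tof(v_aluminum, defect_echo_tof)       # ~6.32 mm
plate_thickness = depth_from_tof(v_aluminum, backwall_echo_tof)  # ~12.64 mm
```

Because the back-wall echo arrives last, the same relation yields both the defect location and the overall plate thickness from one oscilloscope trace.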
In eddy current testing, an alternating current passes through a coil, producing a time-varying primary magnetic field in accordance with the Maxwell-Ampere law. When the coil is brought close to a conducting material, the time-varying magnetic field induces eddy currents in the test material, resulting in the generation of a secondary magnetic field opposite in direction to the primary field [3]. This reduces the net flux linkage and hence the inductance of the coil. The generation of eddy currents also increases the resistance of the coil. This is shown in Figure 1.3 by points A and C [36]. Figure 1.2 shows the induction of eddy currents in a test material.

Figure 1.2: Eddy current induction in a test material, showing the primary magnetic field, the secondary magnetic field, and the induced eddy currents [3]

The magnitude and direction of the induced eddy currents change in the presence of material abnormalities, resulting in a net change in the terminal impedance of the probe, as indicated by B in Figure 1.3. This impedance change as the probe scans a test sample constitutes an eddy current signal. By analyzing this change in impedance, defects in the test sample can be detected.

Figure 1.3: Impedance-plane plot of a coil over a conducting nonferromagnetic test sample (A: coil in air; B: coil over a defect; C: coil over a defect-free region) [36]

In the case of an infinitely large time-varying excitation current sheet, the eddy currents in a planar sample decay exponentially from the surface of the conductor according to the equation [4]:

J = J0 exp(-y/δ)   (1-1)

where y is the distance below the surface of the conductor, δ represents the skin depth or depth of penetration of the conductor, and J0 is the current density at the conductor's surface. The skin depth is the distance at which the eddy current density falls to 1/e of its surface value.
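A short numerical sketch of the decay law (1-1), combined with the standard skin-depth expression δ = 1/√(πfμσ) that the text defines next. The material constants below are textbook values for aluminum, assumed here purely for illustration.

```python
import math

# Sketch of equation (1-1), J(y) = J0 * exp(-y / delta), together with the
# standard skin-depth formula delta = 1 / sqrt(pi * f * mu * sigma).
# Material values are assumed (aluminum), not taken from the thesis.

def skin_depth(freq_hz: float, mu: float, sigma: float) -> float:
    """Skin depth (m) of a plane conductor at a given excitation frequency."""
    return 1.0 / math.sqrt(math.pi * freq_hz * mu * sigma)

def current_density(j0: float, depth_m: float, delta_m: float) -> float:
    """Eddy current density at a given depth below the surface."""
    return j0 * math.exp(-depth_m / delta_m)

mu0 = 4e-7 * math.pi   # free-space permeability (aluminum: mu ~ mu0)
sigma_al = 3.5e7       # aluminum conductivity, S/m (assumed textbook value)

delta = skin_depth(10e3, mu0, sigma_al)          # ~0.85 mm at 10 kHz
j_at_delta = current_density(1.0, delta, delta)  # falls to 1/e of surface value
```

Evaluating the decay at y = δ confirms the 1/e definition given above: the density at one skin depth is exp(-1) ≈ 0.368 of the surface density.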
The skin depth is defined as [2]:

δ = 1/√(πfμσ)   (1-2)

where f is the frequency, μ is the material permeability, and σ is the conductivity. A lower frequency results in a larger depth of penetration of the eddy current in the sample. By varying the frequency of the current, defects in the subsurface layers can be detected. In the aviation industry, a sliding probe is tediously applied to the entire skin of the aircraft, requiring manual analysis of the changes in the impedance-plane trajectory on a scope. An alternate inspection method offering fast and easy signal interpretation is provided by the Magneto-Optic/Eddy Current Imager (MOI).

1.5 Magneto-Optic/Eddy Current Imager (MOI)

The Magneto-Optic/Eddy Current Imager is a relatively new nondestructive evaluation tool designed for the detection of cracks and corrosion on aircraft surfaces. The technique uses a combination of eddy current excitation and magneto-optic imaging that yields real-time images of local magnetic fields associated with the induced currents [4]. The MOI was developed as an alternative to eddy current probes for nondestructive evaluation in the aviation industry [33]. The benefits of the MOI as compared to standard eddy current testing are many. According to the Eddy Current Inspection Reliability Experiment (ECIRE), the MOI was comparable to standard eddy current inspection in terms of probability of detection (POD) and detection reliability [5].
The benefits of the MOI as compared to standard eddy current inspection, according to the ECIRE study and Physical Research Inc., include [5][6][33]:

• Decreased average number of false calls, from 2.7 to 1.0
• 5-10 times faster inspections, resulting in decreased aircraft downtime
• Elimination of the need to strip paint and decals from the test surfaces
• Economic value in terms of decreased downtime and inspection man-hours
• Reduction in operator fatigue
• Easy result documentation in the form of VHS videotape

The MOI is advantageous over standard eddy current testing in almost every category and offers a quick, reliable system for the detection of defects on the surface of aircraft.

1.6 Problem Statement

The MOI produces magneto-optic images that are used by the inspector to detect surface and subsurface abnormalities in the test sample. These images, while easy to interpret, are corrupted by noise that degrades the overall image quality. The noise appears as snake-like lines and is caused by the magnetic domain boundaries in the MOI's garnet sensor. The primary goal of this thesis is to develop filtering algorithms to remove this unwanted serpentine noise present in the magneto-optic images. The benefits of removing this noise include the detection of smaller defects and an increased probability of detection (POD) of a flaw. There are a number of requirements pertaining to the image enhancement algorithms, including simplicity, noise removal, edge preservation, and speed of processing. As the inspections are performed on-line, near real-time processing is a necessary condition to ensure that the inspector sees the filtered image instantaneously.

1.7 Thesis Organization

The remainder of the thesis is organized into five chapters. Chapter 2 gives an overview of the operating principles of the Magneto-Optic/Eddy Current Imager (MOI) along with a description of the unwanted noise.
Chapter 3 discusses the desired specifications for the filtering algorithm for restoring the magneto-optic images. The various candidate filtering algorithms are investigated and their respective results are presented in the chapter. Chapter 4 defines image quality metrics used in the evaluation and comparison of the various filtering results. The different stages of hardware and software implementation are discussed in Chapter 5, along with a design for a near real-time prototype system. The final chapter, Chapter 6, gives a brief summary of the results and possible areas of future work.

Chapter 2: Theory and Functionality of the Magneto-Optic/Eddy Current Imager

2.1 Introduction

The Magneto-Optic/Eddy Current Imager (MOI) is a system used for the nondestructive inspection of aircraft surfaces. The MOI provides a quick, non-invasive approach for the detection of cracks and corrosion that can be disastrous to an aircraft. The MOI system is used to find and locate cracks, defects, and corrosion that can occur at varying depths in the multi-layer structures underlying an airframe. The technique combines eddy current excitation with magneto-optic imaging to yield real-time images of local magnetic fields associated with the induced currents [4]. The theory of magneto-optic imaging, along with the principles behind the MOI system, is presented in the following sections.

2.2 Excitation of Linear Eddy Currents

The excitation source in the MOI is provided by an induction foil carrying alternating current. A planar copper foil is used in conjunction with a step-down transformer to induce eddy currents in the test material [7][6]. Faraday's Law of Induction provides the basis for the eddy current phenomenon and is given by the equation:

∇ × E = -∂B/∂t   (2-1)

where B is the time-varying magnetic field and E is the time-varying electric field.
This equation illustrates that a time-varying magnetic field B, in close proximity to an electrical conductor with conductivity σ, will induce a time-varying electric field E and a current density J = σE. By Lenz's Law, the eddy currents that are formed are always opposite in direction to the currents that were used to produce the magnetic field B. In the case of a planar test sample, the eddy currents decay exponentially from the surface of the conductor according to the equation [4]:

J = J0 exp(-y/δ)   (2-2)

where y is the distance below the surface of the conductor, δ represents the skin depth of the conductor, and J0 is the current density at the conductor's surface. The skin depth is the distance at which the eddy current density falls to 1/e of its surface value. The skin depth is defined as [2]:

δ = 1/√(πfμσ)   (2-3)

where f is the frequency, μ is the material permeability, and σ is the conductivity. It is important to note that the value of δ cannot exceed the thickness of the test piece or uniform sheet currents will not be achieved [6]. This is due to Lorentz forces in the foil that cause "current bunching," which leads to an irregular current distribution when δ exceeds the thickness of the test piece [6]. To ensure that the sensor does not detect the magnetic fields associated with the source current, the magnetic fields associated with the sheet currents must lie parallel to the sensor's hard axis of magnetization [4]. This means that in a region free of defects, the magnetic fields associated with the eddy currents will have no effect on the magnetization of the sensor. If this were not the case, the large magnetic fields from the eddy current induction would overwhelm the weaker fields associated with perturbations and rivets. However, in the presence of defects, rivets, and corrosion, the bending of the eddy current flow results in the generation of normal B-field components that are detected by the sensor, as shown in Figure 2.1.
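The uniform-sheet-current condition noted above (the skin depth must not exceed the thickness of the test piece) can be turned into a minimum-frequency estimate by inverting equation (2-3): f_min = 1/(πμσd²). The material values below are illustrative assumptions, not MOI operating parameters.

```python
import math

# Inverting delta = 1/sqrt(pi*f*mu*sigma) for the frequency at which the
# skin depth equals a given thickness d: f_min = 1 / (pi * mu * sigma * d^2).
# Any frequency above f_min keeps delta within the thickness.
# Material values are assumed (aluminum), for illustration only.

def min_frequency(thickness_m: float, mu: float, sigma: float) -> float:
    """Lowest frequency for which the skin depth stays within the thickness."""
    return 1.0 / (math.pi * mu * sigma * thickness_m ** 2)

mu0 = 4e-7 * math.pi
sigma_al = 3.5e7  # aluminum conductivity, S/m (assumed textbook value)

f_min = min_frequency(1.0e-3, mu0, sigma_al)  # ~7.2 kHz for a 1 mm plate
```

As a consistency check, substituting f_min back into equation (2-3) returns a skin depth exactly equal to the assumed 1 mm thickness.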
Figure 2.1: (a) Eddy current induction on a defect-free region (b) Eddy current induction and corresponding B field components for a sample with a rivet

2.3 Excitation of Rotating Eddy Currents

Rotating or multidirectional eddy currents can be excited on a test material using an alternate configuration with two separate primary coils connected to a single foil by two single-turn secondary windings. To produce a rotating eddy current, the currents induced by the two sources must be perpendicular in direction and the two primary coils must be excited by sinusoidal waves ninety degrees out of phase [6]. The current density in the test material can be represented by the equation:

J = J0 sin(ωt + θ)     (2-4)

The two sources produce currents ninety degrees out of phase. The relationship between the two will only depend on the angle between them and the corresponding current density can be written as:

J = J0 sin(ωt) î + J0 cos(ωt) ĵ     (2-5)

where î and ĵ are unit vectors along the x and y coordinate axes. From the above equation, J can be viewed as a linear current density rotating with an angular frequency ω [6][4].

Rotating eddy current excitation has a number of benefits compared to the traditional linear eddy current excitation. When linear eddy current excitation is used, the image of the rivets appears as two dark semi-circles with a rectangular region in the middle, which is the null zone. Cracks in this zone will be missed by the MOI. Rotating eddy current excitation eliminates this "slotted screw" phenomenon, as it is called in industry. Rotating eddy current excitation also allows cracks of arbitrary orientation to be detected without requiring the current to be perpendicular to the long axis of the crack, as is the case with linear current excitation.
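The rotating behavior claimed for equation 2-5 can be checked numerically: the magnitude of J stays constant at J0 while its direction sweeps through all angles at rate ω. A small sketch (function name and sample values are my own, chosen only for illustration):

```python
import math


def rotating_current_density(j0, omega, t):
    """Current density of equation 2-5: J = J0 sin(wt) x_hat + J0 cos(wt) y_hat."""
    return (j0 * math.sin(omega * t), j0 * math.cos(omega * t))


# Illustrative values: unit amplitude, 100 Hz excitation (not MOI settings).
j0, omega = 1.0, 2 * math.pi * 100.0
for t in (0.0, 0.0025, 0.005):
    jx, jy = rotating_current_density(j0, omega, t)
    angle = math.degrees(math.atan2(jy, jx))
    # |J| is always j0; only the angle changes as t advances.
    print(f"t={t:.4f}s  |J|={math.hypot(jx, jy):.3f}  angle={angle:.1f} deg")
```

The constant magnitude is what distinguishes this excitation from the linear case, where the current density simply oscillates along one axis.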
Eliminating these effects will allow inspections to be automated and pattern recognition algorithms and techniques to be incorporated into the system. One of the most significant benefits of rotating eddy current excitation over linear current excitation is its ability to improve images that have second and third layer defects [6].

2.4 The Faraday Magneto-Optic Effect

The basis of magneto-optic/eddy current imaging relies on the Faraday magneto-optic effect, discovered by Michael Faraday in 1845. The Faraday effect states that when plane-polarized light is transmitted through certain materials, including glass, parallel to an applied magnetic field, the plane of polarization of the light will be rotated by an angle proportional to:

θ ∝ θf l (k̂ · M̂)     (2-6)

where l is the sensor thickness, k̂ is the wave vector of the incident light, and M̂ is the local magnetization of the sensor along the easy axis of magnetization [8]. This rotation is known as the Faraday rotation. The transmission mode geometry of the Faraday magneto-optic effect is illustrated in Figure 2.2.

Figure 2.2: Transmission mode geometry of the Faraday magneto-optic effect [12]

Note from the previous equation that the Faraday rotation does not depend on the direction the light travels through the sensor. Transmitting the light and then reflecting it back through the sensor effectively doubles the Faraday rotation. This effect is known as reflection mode geometry and is exploited by the MOI instrument. Viewing the reflected light in an analyzer, a region's local state of magnetization is represented by dark or light areas depending on the direction of magnetization [6].

2.5 Magneto-Optic Sensors

The MOI's magneto-optic sensors are composed of a bismuth-doped iron garnet grown on a 3-inch diameter, 0.0200-inch thick substrate of gadolinium gallium garnet.
Iron garnet sensors are relatively small in size and offer high sensitivity, speed, and simplicity [9]. Films of this nature are used in the sensor because they possess a number of properties that are extremely vital for magneto-optic/eddy current imaging. The first vital property is magnetic anisotropy, in that they have an easy and a hard axis of magnetization. The easy axis of magnetization is normal to the surface of the sensor and the hard axis of magnetization is parallel to the surface. Sensors possessing this property only detect the normal component of magnetic field disturbances. This property is important for the formation of the MOI image. The second critical property is memory. A sensor with memory will preserve most of its established magnetization even when the fields along the easy axis are removed. This quality is important for retaining the image. The third property is a large specific Faraday rotation, θf. The bismuth doping causes the films to have a Faraday rotation between +/− 20,000 and +/− 40,000 degrees/cm of thickness, allowing for higher contrast magneto-optic/eddy current images [6].

2.6 Domain Walls

Serpentine domain walls are a necessary component for image formation. The domains in the magneto-optic sensor expand or contract when a magnetic field is applied parallel to the easy axis of the sensor. The direction of the magnetic field determines whether the domain grows or contracts. Figure 2.3 illustrates the expansion of domain walls in the presence of a magnetic field. When the magnetic field is removed, the domain walls stay constant or "freeze" because of small nucleation centers found in the sensor. This property of not changing shape even after the removal of the magnetic field provides memory and allows imaging of time-varying magnetic fields over a sweep of frequencies [4].
Figure 2.3: Expansion of the domains in the presence of a magnetic field [4]

2.7 Image Formation

Using the principles of eddy current excitation and Faraday's effect, the formation of the magneto-optic image can be explained with reference to Figure 2.4.

Figure 2.4: Imaging of a single rivet using eddy current induction [12]

The magnetic fields can be computed using the induced eddy current direction and the right-hand rule. Only the magnetic fields normal to and out of the surface are detected by the sensor. Notice in Figure 2.4 that only half the rivet is imaged in each half cycle of the linear current excitation. The two sub-images are then combined in each cycle of the alternating current to form the final image containing the entire rivet. Figure 2.5 shows a typical magneto-optic image with rotational excitation of a sample containing four rivets and is consistent with what the MOI operator would see on the headset or video monitor. The rivets on the left half of the figure appear as washers or doughnuts with a constant radius. These characteristics classify these two rivets as non-defective. The rivets on the right half of the image have a washer body but have two dark spots leading from the top and bottom edges of the ring, giving the rivet a wing-nut type appearance. The dark spots coming from the top and the bottom of each rivet indicate a radially outward crack at the rivet.

Figure 2.5: Relatively clean MOI image showing four rivets and domain structures

Also visible in the image are the domain structures. These serpentine shaped domain structures are caused by the magnetic domain boundaries in the sensor itself [10]. These domain structures depend on the sample properties and can potentially mask defect indications, especially in the third layer, and interfere with the picture quality.

2.8 Principles and Operation of the MOI System

The Magneto-Optic/Eddy Current Imager is currently being used in both the government and civilian sectors.
The MOI consists of a handheld sensor that the operator moves over the aircraft surface. The sensor is connected to a power unit that allows for the selection of frequency, excitation modes and the corresponding levels [11]. Magneto-optic principles generate real-time images that are broadcast to a head-mounted unit and simultaneously to a video monitor where they are reviewed, analyzed and stored. A typical MOI system is shown below:

Figure 2.6: Photograph of the PRI MOI instrument including handheld sensor (left), excitation unit (center), and wearable monitor (right)

The schematic of the MOI instrument is shown in Figure 2.7.

Figure 2.7: Basic schematic of the MOI instrument [6]

An alternating current carrying induction foil induces eddy currents in the sample, which produce a normal magnetic field in the vicinity of defects. Light is transmitted from the source through a polarizer, into the sensor placed just above the foil and then back to the analyzer. This configuration takes advantage of the reflection mode geometry of Faraday's effect. The image of local magnetization is then formed in the analyzer and passed to the operator for viewing.

In practice, the formation of the magneto-optic/eddy current images is a multi-step process. The initial step of the MOI operation consists of an erase pulse that clears the previous image from the sensor. During the erase pulse, the analyzer cycles through a series of adjustments to make the background appear uniformly bright. After the erase pulse, a steady current in the bias coil produces the magnetic bias field, and concurrently the sheet current in the foil is activated. While these are both active, the magneto-optic/eddy current images are formed. Rivets, corrosion, cracks, and similar structures and abnormalities produce magnetic fields normal to the surface.
These fields are parallel to the easy axis of magnetization and are therefore visible to the sensor and, in turn, visible in the image. In the final step, the sheet current is eliminated. Even with the sheet current off, the image is still visible. As discussed earlier, this is due to the memory property of the magneto-optic sensor. The image is retained until the start of the next erase pulse. This process occurs twenty-six times per second, yielding real-time images of the test surface [4]. The overall quality of the images is not affected by surfaces covered with paint or decals, which makes the MOI advantageous over traditional eddy current or ultrasonic techniques [4][6].

2.9 Conclusion

The MOI provides a quick and reliable system to detect defects located at varying depths in the skin of an aircraft. The system produces analog images by the use of eddy current excitation, a garnet sensor, and the analyzer. While the system is effective, it is not without its problems. One of the drawbacks is that the images produced by the MOI contain high frequency serpentine structures associated with the magnetic domains in the sensor. This noise is detrimental to the quality of the image and interferes with the detection of defects. The noise also limits the use of pattern recognition algorithms and hampers the possible automation of MOI data analysis. This research investigates a number of image processing algorithms for reducing or even eliminating the serpentine noise. The development of these image processing algorithms is discussed in the following chapter.

Chapter 3: Image Processing Techniques for Magneto-Optic Images

3.1 Introduction

The restoration of magneto-optic images presents a very significant challenge. The serpentine shaped structures caused by the magnetic domain boundaries in the sensor significantly degrade the magneto-optic images.
The development of filtering algorithms to remove this unwanted noise, and the implementation of these algorithms in near real-time, would aid operators in the detection of small defects as they scan the sample. This chapter describes a number of image processing algorithms suitable for this application. In the MOI project, the goal is to process the images on-line in near real-time, which is 30 frames/second for most applications. The filtering algorithms for the project must fulfill the following requirements:

• Remove the unwanted serpentine domain structures
• Preserve the edges of objects in the images
• Complete the process in near real-time

The removal of the unwanted snake-like domain structures is the primary goal. These structures overlay the entire image and make small cracks and corrosion very difficult to see for the untrained eye. The removal of the unwanted noise would speed up inspections and, at the same time, allow smaller defects to be detected. The overall sensitivity and probability of detection (POD) of a flaw would be increased.

The second requirement, the preservation of object edges, is very important for overall picture quality and image integrity. An image with blurred edges appears to be out-of-focus, which is undesirable. Typically, aircraft skin has thousands of rivets, making the circular impression of the rivet a prevalent feature in the magneto-optic images. Many filtering algorithms blur the high frequency edges, resulting in rivets without crisp, well-defined boundaries. A defective rivet containing a crack will have a small deviation from the circular boundary on the side where the crack is located, as shown in Figure 3.1. If the edges are not sharply defined, there is a chance that this abnormality could go undetected. The preservation of the edge allows for this critical feature to be retained and improves the overall sharpness and hence the probability of detection (POD) of a crack.
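The edge blurring that motivates the second requirement can be seen in a tiny numerical sketch. The 1-D intensity profile and function below are my own illustration, not MOI data: a plain 3-point moving average turns a sharp step (such as a rivet boundary) into a gradual ramp.

```python
def moving_average_1d(profile, size=3):
    """Plain moving average over a 1-D intensity profile; the window is
    clipped at the borders. Illustrates the edge blurring discussed above."""
    r = size // 2
    out = []
    for i in range(len(profile)):
        window = profile[max(0, i - r):i + r + 1]
        out.append(sum(window) / len(window))
    return out


# A sharp step edge between a dark object and a bright background...
step = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
# ...is smeared across several pixels by the averaging.
print(moving_average_1d(step))
```

The intermediate values that appear on either side of the step are exactly the loss of edge sharpness that the later Threshold-Averaging Filter is designed to avoid.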
Figure 3.1: Typical magneto-optic image. The rivet on the left is defective with a crack located on its left side. The rivet on the right contains no abnormalities

The third requirement involves the speed of applying the filtering algorithm to a given image. An operator scanning the surface of an aircraft has to be able to visualize the enhanced image in real-time in order to identify the rivets with cracks. The image processing algorithm and its associated hardware must be integrated into the MOI system so that it is portable, allowing the processed image to be displayed on the head mounted monitor. In order to accomplish real-time or near real-time processing, the filtering algorithms must be extremely simple and fast. Specialized image processing hardware exists that can implement almost any function at a high processing speed. While these specialized boards can help meet the speed requirement, they are extremely expensive. The use of such boards, in conjunction with the MOI, would cause the cost of the MOI to rise substantially. The goal of the project is to implement the solution with the least amount of change to the existing MOI in terms of both physical packaging and financial costs. The use of a fast filtering algorithm will allow cheaper, less specialized boards to be used, making the product more cost efficient. The speed of the overall implementation will rely more on the chosen filtering algorithm than on the specialized hardware. The hardware implementation is presented in depth in Chapter 5.

A number of spatial filtering algorithms were developed, investigated and tested with varying degrees of success. In this chapter, the techniques that showed the most potential at fulfilling all three criteria are presented.

3.2.1 Moving-Average Filtering

The moving-average filter was the first algorithm to be implemented and tested.
The moving-average filter replaces a given pixel, f(x,y), by the average of the pixels in a certain window centered at (x,y) and can be expressed as:

g(x,y) = (1/P) Σ_{(n,m)∈S} f(n,m)     (3-1)

where S is the set of points in the neighborhood of (x,y), including (x,y), and P is the total number of points in the neighborhood. The averaging filter can also be viewed in terms of convolution with a filtering mask, as shown below:

1/9  1/9  1/9
1/9  1/9  1/9
1/9  1/9  1/9

Figure 3.2: Filtering mask for a 3 x 3 moving-average filter with equal weight coefficients

In this example, using a 3 x 3 window or mask, the center pixel is replaced by the average of the intensity values of the nine pixels in the window. The idea underlying the moving-average algorithm is that the unwanted serpentine domains would be eliminated by replacing the grayscale pixel value of the domain walls by the average value of the pixels in the window. The size of the window can be varied. In general, a larger window size will result in a smoother image. The moving-average filter was selected for implementation because of its inherent simplicity, ability to reduce noise in the image, and speed of implementation. The results of the moving-average filter are shown in section 3.2.5. The idea of spatial averaging can be extended to other concepts of calculating mean values, as described below.

3.2.2 Geometric Mean Filter

The geometric mean filter accomplishes noise reduction by use of the expression:

g(x,y) = [ Π_{(s,t)∈S_xy} f(s,t) ]^(1/mn)     (3-2)

where the filtering window is of size m x n and f(x,y) is the original image in the neighborhood defined by S_xy. Each resulting pixel, g(x,y), is the geometric mean of the pixel intensities in the neighborhood S_xy. The results are comparable to those of the arithmetic mean filter, with the added benefit of increased edge preservation.

3.2.3 Harmonic Mean Filter

The harmonic mean filter, in general, is used to eliminate salt-type noise from an image.
The equation for the harmonic mean filter is given by:

g(x,y) = mn / Σ_{(s,t)∈S_xy} [1 / f(s,t)]     (3-3)

where the filtering window is of size m x n and f(x,y) is the original image in the neighborhood defined by S_xy. The restored pixel, g(x,y), is replaced by the harmonic mean of the pixels in the neighborhood S_xy. The serpentine noise has an impulse-type quality to it, which the harmonic mean filter should eliminate.

3.2.4 Median Filter

A median filter was also implemented and tested on the unwanted serpentine structures. The median filter is a statistical filter given by the equation:

g(x,y) = median_{(s,t)∈S_xy} {f(s,t)}     (3-4)

where f(x,y) are the pixel values of the original image contained in the neighborhood defined by S_xy. The resulting pixel value, g(x,y), is equal to the median of the pixel values in the given neighborhood, thereby eliminating isolated pixel noise. The goal of all the spatial filters described here is to reduce the domain-specific noise in the magneto-optic images. Results for each filter are shown in the following section.

3.2.5 Results of Spatial Filtering

Below are the results of using the spatial filters on three typical magneto-optic images. The images were captured from digital video files (AVI) and a VHS videotape, both provided by PRI Research and Development Corporation. The images are typical of what operators encounter on a daily basis. The first image, Figure 3.3, contains a seam with two rivets below it. In the left half of the seam is a type of defect known as a waffle crack. In a single frame, the defect has to be viewed through the domain structures. However, when the sensor is in motion, the domain walls are stationary while the structures associated with the sample appear to be in motion. In a sequence of frames, the defect and sample structures will move in the opposite direction of the sensor while the domains are static relative to the sensor.
The defect stands out from the static background and is easily detected by the human eye, making the detection of defects in a sequence of frames easier.

Figure 3.3: (a) Raw magneto-optic image before filtering (b) Image after moving-average filtering using a 7 x 7 window

In this thesis, the processing is done on individual frames that are assumed to be static. In all the moving-average filtering results, while the high frequency domain structures have been eliminated, the overall picture quality has been degraded, particularly in the region of edges. The averaging has also degraded the contrast between the waffle crack and the seam. The overall image appears fuzzy, with little contrast between background pixels and object pixels.

The next image contains two rivets and a seam. The rivet on the left contains a crack, which can be seen as the growth coming from the left side of the leftmost rivet.

Figure 3.4: (a) Raw magneto-optic image before filtering (note that the domain structures in this sample are minimal) (b) Image after moving-average filtering using a 7 x 7 window

The removal of the serpentine structures is very successful in the resulting filtered image. While the defect on the left rivet is still noticeable, excessive blurring has occurred. The third result contains a single rivet, free of any defects. The quality of the filtered image is good in terms of noise elimination, but the moving-average filter again has caused excessive blurring.

Figure 3.5: (a) Raw magneto-optic image before filtering (b) Image after moving-average filtering using a 7 x 7 window

The filtering was repeated using the geometric mean, harmonic mean and median filters, and the results are shown in the following figures.
Figure 3.6: (a) Raw magneto-optic image (b) Resulting image after geometric mean filtering

Figure 3.7: Resulting images after (a) harmonic mean filtering (b) median filtering

Figure 3.8: (a) Raw magneto-optic image (b) Resulting image after geometric mean filtering

Figure 3.9: Resulting images after (a) harmonic mean filtering (b) median filtering

Figure 3.10: (a) Raw magneto-optic image (b) Resulting image after geometric mean filtering

Figure 3.11: Resulting images after (a) harmonic mean filtering (b) median filtering

The results from the arithmetic, geometric and harmonic mean filters were comparable. The median filter was not as successful in removing the serpentine structures in the three images. While the elimination of the unwanted noise was realized in all the filtered images, a repercussion of this technique was excessive edge blurring. This consequence violated the second image restoration requirement, which dealt with edge preservation. To fulfill the edge preservation requirement, a more sophisticated, multistage algorithm was investigated next.

3.3 Multistage Filtering Algorithm

The Multistage Filtering Algorithm consists of four stages for enhancing magneto-optic images:

1. Contrast Enhancement
2. Threshold-Averaging
3. Post-Processing
4. Border Removal

The contrast enhancement stage uses a nonlinear gray level mapping function (NLM) to enhance the contrast in the image and normalize the data from the different sources. The second stage, Threshold-Averaging, removes the unwanted serpentine noise from the image while preserving the edge sharpness. The post-processing stage is then used to remove any residual noise left in the image after the previous stage. The fourth stage, border removal, is an optional step that removes the characteristic dark borders from the magneto-optic images.
The use of these four stages in a filtering algorithm produces results that satisfy the image enhancement requirements pertaining to noise removal, edge preservation, and near real-time processing. Each stage is presented in more detail in the following sections. Figure 3.12 gives an overview of the Multistage Filtering Algorithm.

Figure 3.12: Overview of the stages comprising the Multistage Filtering Algorithm: the raw input image is passed through nonlinear gray level mapping (NLM), multiple iterations of the edge-preserving Threshold-Averaging Filter, post-processing (morphological top-hat filtering, moving-average filtering using the top-hat mask, and median filtering), and optional border removal to yield the filtered output image

3.4.1 Contrast Enhancement using a Nonlinear Gray Level Mapping Algorithm (NLM)

The first stage of the multistage algorithm deals with contrast enhancement by stretching the intensity levels in the magneto-optic images. For the magneto-optic images, the tanh(.) function is used to map the grayscale pixel values of the image to the interval [0, 1]. The use of this nonlinear gray level mapping function enhances the contrast in the image and allows for the development of segmentation algorithms independent of the data source. In a typical grayscale image, the pixel intensity values can range from 0 to 255, assuming 256 gray levels. In the case of magneto-optic images, the pixel intensity values range from 0 to a maximum value, Imax (Imax < 255). This maximum value varies depending on the instrument settings as well as the material and objects in the sensor plane. The tanh(.) function can map this varying range of pixel values to an interval between 0 and 1, according to the equation:

g(x,y) = tanh(α · f(x,y))     (3-5)

where α is an appropriate scaling factor. In this application, the optimal value of α was seen to be equal to the reciprocal of the maximum grayscale value in the original image.
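The NLM stage of equation 3-5 can be sketched in a few lines. This is a minimal illustration, not the thesis implementation; per the text, α defaults to the reciprocal of the frame's maximum gray value, recomputed for each frame:

```python
import math


def nlm_tanh(image, alpha=None):
    """Nonlinear gray level mapping of equation 3-5: g = tanh(alpha * f).
    `image` is a list of rows of gray values. When alpha is not given, it is
    taken as 1/Imax for this frame, as described in the text."""
    if alpha is None:
        alpha = 1.0 / max(max(row) for row in image)
    return [[math.tanh(alpha * p) for p in row] for row in image]


# Illustrative 8-bit values; the true range of MOI pixel values depends on
# the instrument settings.
frame = [[0, 85, 170], [170, 195, 150]]
mapped = nlm_tanh(frame)
# All outputs lie in [0, 1); the frame maximum maps to tanh(1) ~ 0.762.
print(mapped[1])
```

Because α is tied to the frame maximum, two frames captured with different instrument settings are normalized onto the same output range, which is what makes the later thresholds source-independent.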
The maximum pixel intensity can vary from frame to frame, so keeping the scaling factor constant does not yield the most optimal solution.

3.4.2 Comparison with Histogram Equalization

Histogram equalization is commonly used to transform intensity values in contrast enhancement. The tanh(.) function was used because of its simple, one-step mathematical implementation and the quality of its corresponding results compared to histogram equalization. The tanh(.) function yielded similar results to histogram equalization and, in a few instances, was superior to histogram equalization. The reason lies in the difference between the tanh(.) function and the transformation function used for histogram equalization. Histogram equalization is based on the cumulative distribution function. For continuous values, histogram equalization is given by the equation [14]:

s = H(x) = ∫₀ˣ p_x(w) dw     (3-6)

where w is a dummy variable for integration purposes and x is the random variable. Applying this idea to discrete values, which comprise a typical image, the relationship for histogram equalization becomes [14]:

s_k = H(x_k) = Σ_{j=0}^{k} p_x(x_j) = Σ_{j=0}^{k} n_j / n,     k = 0, 1, 2, ..., M−1     (3-7)

where n is the total number of pixels, n_j is the number of pixels that have intensity value x_j, and M is the total number of intensity values. For the image in Figure 3.14, the histogram equalization and tanh(.) transformations are plotted in Figure 3.13.

Figure 3.13: Plot of the tanh(.) transformation and histogram equalization (mapped output intensity value versus input intensity value, with the threshold value of 0.50 marked)

The differences between the histogram equalization transformation and mapping using the tanh(.) function are important. Using the tanh(.) transformation before thresholding removes more of the background noise than using histogram equalization before thresholding.
This is due to the transformed pixel values of the noisy background regions after application of the tanh(.) transformation and histogram equalization. This is illustrated with the help of a typical magneto-optic image, as shown in Figure 3.14. The black square in the lower left hand corner represents a region of strictly background noise. This region has pixel values varying from 150 to 195, with a mean of 170.75 and a standard deviation of 2.3765.

Figure 3.14: Typical magneto-optic image. The black square identifies a strictly background region

The Nonlinear Gray Level Mapping algorithm and histogram equalization were each used to transform the image's original pixel values to a range on the interval [0, 1], and the corresponding results were thresholded, using a value of 0.5 in this case. The results of these two steps are shown in Figure 3.15.

Figure 3.15: (a) Image after histogram equalization and thresholding (b) Image after tanh(.) function and thresholding

Notice that the resulting image from the tanh(.) function has less noise than the resulting image using histogram equalization. This is due to the actual transformation of each pixel value, as shown in Figure 3.13. Histogram equalization mapped the pixel values in the background region to the interval [0.32, 0.60] with a mean value of 0.4591 (and standard deviation of 0.0153), which is below the threshold value. The tanh(.) function mapped this background region to the interval [0.54, 0.64] with a mean of 0.5838 (and standard deviation of 0.0064), well above the threshold value. With thresholding, more noise was removed from the image where the pixel values had been transformed using the tanh(.) function, which resulted in a cleaner filtered image.

3.5.1 Threshold-Averaging Filter

The next stage of the multistage filtering is the Threshold-Averaging Filter, which is used for the removal of the unwanted serpentine noise without blurring high frequency edges.
The Threshold-Averaging Filter takes into account knowledge of background pixels and object pixels in the averaging neighborhood. In the case of the magneto-optic images, the objects are the rivets, seams and other airframe structures. By discriminating between the background and object pixels used in the averaging, the edges of the objects are retained, giving the edges in the filtered image a crisp, well-defined appearance similar to those in the original image. The filter discriminates between the object and background pixels by the use of a threshold value. The filter only averages the pixels that are in the window and are close in intensity to the center pixel. The Threshold-Averaging Filter is represented by [13]:

g(x,y) = (1/P) Σ_{i=0}^{n} Σ_{j=0}^{n} f(x_i, y_j)     (3-8)

where the f(x_i, y_j) satisfy the following conditions:

|f(x_i, y_j) − f(x,y)| ≤ T     and     i, j = 0, 1, ..., n     (3-9)

and P is the number of pixels that satisfy the above conditions, f(x,y) is the center pixel value and T is a selected threshold value. The first condition identifies pixels in the same object having pixel values similar to that of the center pixel. These pixels will be included in the calculation of the intensity value of pixel (x,y). The pixels that are not in the object will not be included in calculating the resulting intensity value of pixel (x,y) and hence will not contribute to blurring. This idea is illustrated simplistically in the figure shown below:

Figure 3.16: (a) Illustration of the Threshold-Averaging Filter with a 3 x 3 window on a synthesized image (b) Close-up of the 3 x 3 window

Assume that Figure 3.16 represents a sample image with a 3 x 3 window being used for Threshold-Averaging filtering. Notice that the gray area on the left of the sample image represents an object while the white area on the right represents the background. The window is located on an edge pixel separating the object from the background region.
The pixels in the first two columns of Figure 3.16(b) have the same intensity as the object, while the pixels in the third column have the intensity value of the background. Assuming that the difference between the background pixel values and the object pixel values is greater than the threshold value, the background pixels will not be included in computing the average for the center pixel (x,y) on the edge. This allows pixels along the edge of an object to retain their sharpness. In the case of the MOI, this is evident around the edges of rivets, seams, and other airframe structures.

3.5.2 Results of Implementing the Threshold-Averaging Filter

Multiple iterations of the Threshold-Averaging Filter can be performed for further refining of the results. This is advantageous for eliminating variations in the objects and background regions respectively. The optimal number of iterations is determined as a trade-off between maximum noise removal and minimal processing time. The Threshold-Averaging Filter is time intensive because of its pixel-by-pixel filtering process. The number of iterations that allowed for optimum results in a justifiable processing time was three. Fewer iterations made the post-processing stages, discussed in the next section, slower, while more iterations did not improve the quality of the resulting image. The window size used in Threshold-Averaging was varied from 3 x 3 pixels to 7 x 7 pixels. The larger window size allowed for the possibility of more pixels meeting the threshold requirement, making the object and background smoother when the threshold requirement was satisfied. Using the SNR metric, a 7 x 7 window was found to be the optimal choice. Image metrics and image quality pertaining to the magneto-optic images will be explored in more detail in Chapter 4. Below are the results of Threshold-Averaging filtering on the three images used in Figures 3.3(a) through 3.5(a).
Notice in the first set of results that a majority of the noise was eliminated from the background above the seam and between the rivets. The edges of the rivets and the seam were retained. The contrast between the seam and the waffle crack is better than that in the original.

Figure 3.17: (a) Raw magneto-optic image before filtering (b) Image after NLM and Threshold-Averaging Filtering

Figure 3.18: (a) Raw magneto-optic image before filtering (b) Image after NLM and Threshold-Averaging Filter (c) Raw magneto-optic image before filtering (d) Corresponding image after NLM and Threshold-Averaging Filter

In the previous results, the serpentine noise was eliminated from the image while the object edges were retained. However, one visible problem remained in that isolated high-frequency pixels were found around the object edges. This can be understood from the example shown in Figure 3.19.

Figure 3.19: Threshold-averaging mask illustrating the cause of isolated high-frequency pixels. White blocks indicate pixels satisfying equation (3-9) while gray blocks do not

The center white pixel is replaced only by the average of the three other pixels in the mask that satisfy the threshold condition. This situation causes the residual isolated high-frequency noise seen in the above results, particularly in Figure 3.17(b). Adjusting the threshold value can eliminate this isolated noise. However, a larger threshold value will also decrease the algorithm's edge-preserving properties. Figure 3.20 shows the resulting images with the threshold value varied from 0.15 to 0.40.

Figure 3.20: Resulting images with threshold values of: (a) 0.15 (b) 0.20 (c) 0.25 (d) 0.30 (e) 0.35 and (f) 0.40

To further reduce these unwanted residuals, post-processing filtering stages were introduced and are discussed in the next section.
3.6.1 Post-Processing Stages

While the results from using the Threshold-Averaging Filter were satisfactory and noticeably better than the moving-average and other spatial filters in terms of edge preservation, there was still a need for a post-processing stage to eliminate high-intensity pixels around the object edges. The post-processing stages consisted of the following two steps:

1. Morphological top-hat filtering mask operation
2. Median filtering

The first stage, morphological top-hat filtering, consists of subtracting a morphologically opened image from the original image. Opening consists of eroding an image and then dilating the eroded image. Erosion of a gray-scale image X by a structuring element B is defined as [14]:

    (X ⊖ B)(s,t) = min{ X(s+x, t+y) − B(x,y) | (s+x), (t+y) ∈ D_X ; (x,y) ∈ D_B }    (3-10)

where D_X and D_B are the domains of X and B, and the structuring element B must be within the set being eroded. The dilation of a gray-scale image X by B is defined as [14]:

    (X ⊕ B)(s,t) = max{ X(s−x, t−y) + B(x,y) | (s−x), (t−y) ∈ D_X ; (x,y) ∈ D_B }    (3-11)

where X is the gray-scale image, B is the structuring element, and D_X and D_B are the domains of X and B. Opening is then formed as the dilation of an eroded image, which is represented mathematically as [14]:

    X ∘ B = (X ⊖ B) ⊕ B    (3-12)

Morphological top-hat filtering subtracts this morphologically opened image from the original image, and can be expressed as [14]:

    Y = X − (X ∘ B) = X − ((X ⊖ B) ⊕ B)    (3-13)

The resulting image, Y, is thresholded to form a binary mask that provides the exact location of the noise pixels. The white pixel values (with an intensity value of 1) of the mask correspond to the areas where the high-frequency elements remained after the Threshold-Averaging Filter. High-intensity pixels were found both scattered around the image and in close proximity to the object edges.
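The erosion, dilation, opening and top-hat operations of equations (3-10) through (3-13) can be sketched as below for the common special case of a flat (zero-height) 3 x 3 structuring element, where erosion reduces to a local minimum and dilation to a local maximum. The function names and the skipping of out-of-image samples (which enforces the domain condition for a flat element) are assumptions for illustration.

```c
#include <assert.h>

/* Local min (erosion) or max (dilation) over a flat 3x3 element; window
 * samples outside the image are skipped, matching the domain condition
 * (s+x, t+y) in D_X of eqs. (3-10)/(3-11) with B = 0. */
static double extremum(const double *im, int w, int h, int x, int y, int want_max)
{
    double best = im[y * w + x];
    for (int j = -1; j <= 1; j++)
        for (int i = -1; i <= 1; i++) {
            int xx = x + i, yy = y + j;
            if (xx < 0 || yy < 0 || xx >= w || yy >= h) continue;
            double v = im[yy * w + xx];
            if (want_max ? v > best : v < best) best = v;
        }
    return best;
}

void erode(const double *in, double *out, int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            out[y * w + x] = extremum(in, w, h, x, y, 0);
}

void dilate(const double *in, double *out, int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            out[y * w + x] = extremum(in, w, h, x, y, 1);
}

/* Top-hat, eq. (3-13): Y = X - open(X) = X - dilate(erode(X)).
 * tmp is scratch space holding w*h doubles. */
void tophat(const double *in, double *out, double *tmp, int w, int h)
{
    erode(in, tmp, w, h);                       /* X eroded by B   */
    dilate(tmp, out, w, h);                     /* ... then dilated */
    for (int k = 0; k < w * h; k++) out[k] = in[k] - out[k];
}
```

On a flat image containing one bright isolated pixel, opening flattens the spike, so the top-hat output is nonzero exactly at the noise pixel, which is why thresholding Y yields a mask of the residual noise locations.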
The post-processing algorithm uses this binary mask to apply a moving-average filter with a very small window size to smooth out the noise, leaving an image with less noise and relatively sharp edges. This relationship is illustrated in the equation below:

    If Y(x,y) = 1 then  G(x,y) = (1/P) Σ_{(n,m)∈S} X(n,m),  else  G(x,y) = X(x,y)    (3-14)

where Y is the binary mask resulting from the morphological top-hat filtering, X is the image after Threshold-Averaging Filtering, S is the set of points in the neighborhood of (x,y) including (x,y), and P is the total number of points in the neighborhood. The second step of the post-processing stage consisted of applying a median filter to the image obtained from step one to remove any remaining noise pixels. The results from the Threshold-Averaging Filter with Edge Preservation and Post-Processing Stages are presented in the following section.

3.6.2 Results From Threshold-Averaging Filter and Post-Processing Stages

The results from the Threshold-Averaging Filter and the post-processing stages are shown below.

Figure 3.21: (a) Raw magneto-optic image (b) Output after Threshold-Averaging Stage

Figure 3.22: (a) Morphological top-hat filtering mask (b) Output after Morphological Top-Hat Filtering Operation

Shown in Figure 3.22 are the binary mask and the resulting image after a moving-average filter with a small window size was performed. Notice the salt-and-pepper noise remaining in the image. This was removed by a median filter and the results are shown in Figure 3.23.

Figure 3.23: Final result after Threshold-Averaging Filter with Edge Preservation and Post-Processing stages

Results on additional data are shown in the figures below. The raw images are the same as the ones used in the previous filtering sections.
Figure 3.24: (a) and (c) Raw magneto-optic images (b) and (d) Respective filtered images after Threshold-Averaging Filter with Edge Preservation and Post-Processing Stages

Figure 3.25: (a) Raw magneto-optic image (b) Filtered results after Threshold-Averaging Filter with Edge Preservation and Post-Processing

The Threshold-Averaging Filter with Post-Processing Stages eliminated the noise and retained the edges better than the previous filtering algorithms. The post-processing stages are optional and, depending on the level of noise, may be necessary.

3.7.1 Border Removal

The fourth stage of the multistage algorithm is the border removal stage. This stage is optional and is required for images having a dark border around the outside of the image caused by the physical properties of the MOI sensor. The first stage of the border removal consists of thresholding the transformed image into a binary image. The threshold value, T, is chosen to separate the rivets, seams, cracks, and other objects from the background region. The thresholding operation is defined as follows:

    h(x,y) = { 1  if g(x,y) > T
             { 0  if g(x,y) ≤ T    (3-15)

The threshold value, T, was determined using a simple histogram of the transformed image. The second stage of the algorithm involves the use of conditional dilation to remove these unwanted dark borders. The Border Removal dilates a marker image with a structuring element [15]. The initial marker image was defined as:

    M_0(x,y) = { 0,  10 < x < max(x)−9 and 10 < y < max(y)−9
               { 1,  otherwise    (3-16)

where max(x) and max(y) denote the maximum coordinate values of x and y, respectively. The initial marker image contains a ten-pixel-wide border of ones, corresponding to white, around its edge. The remaining pixels are given the initial value of zero, corresponding to black.
The conditional dilation operation is formed by retaining the intersection of the dilated marker image and the thresholded image, and is represented by the equation:

    M_t(x,y) = h(x,y) ∩ (M_{t−1} ⊕ B)(x,y),  1 ≤ t ≤ τ    (3-17)

where M_t is the dilated marker image at iteration t, h(x,y) is the thresholded image, B is the structuring element and ⊕ denotes the dilation of M with B. In this application, a flat 3 x 3 structuring element was chosen as:

    B = [ 0 1 0
          1 1 1
          0 1 0 ]    (3-18)

This process is repeated until the dilated image consists of the border elements we wish to remove. The process is terminated when M_t − M_{t−1} = 0 or after 50 conditional dilation iterations. The additional termination constraint of 50 iterations is needed so that seams, and similar objects that stretch across the entire length of the image, are not removed. These objects begin in the border region and therefore would be entirely removed before the dilated marker stopped growing; hence the need for a termination condition based on the number of iterations. Subtracting this dilated marker from the thresholded image removes the unwanted boundary elements:

    A(x,y) = h(x,y) − M_τ(x,y)    (3-19)

The final step in this stage is to subtract this dilated image from the original to remove the unwanted border elements. The Border Removal is illustrated in Figure 3.26 with a synthesized image.

Figure 3.26: The Edge Removal Algorithm, beginning with conditional dilation of the marker image (equation 3-17), followed by subtraction of the dilated border from the original (inverted) image to give the output image without the border region [15]

The final step of the Border Removal is to convert the image back to the color scheme the operators are familiar with: black rivets on a white background. It should be noted that Border Removal does not have to be used on all the magneto-optic images.
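A compact sketch of the conditional dilation stage on a binary image might look as follows. It is illustrative only: the marker is seeded with a one-pixel border restricted to the thresholded image (rather than the ten-pixel band of equation (3-16)) so that it behaves sensibly on tiny images, and the dilation uses a cross-shaped 3 x 3 element, one reading of equation (3-18).

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Conditional dilation border removal, eqs. (3-16)-(3-19), on a binary
 * image: grow a border-seeded marker inside the thresholded image bin
 * until it stops changing or max_iter passes, then subtract it, removing
 * every object connected to the border. */
void remove_border(const unsigned char *bin, unsigned char *out,
                   int w, int h, int max_iter)
{
    unsigned char *m = malloc((size_t)(w * h));
    unsigned char *next = malloc((size_t)(w * h));
    for (int y = 0; y < h; y++)                   /* seed: border pixels of bin */
        for (int x = 0; x < w; x++)
            m[y*w+x] = (x == 0 || y == 0 || x == w-1 || y == h-1) && bin[y*w+x];
    for (int t = 0; t < max_iter; t++) {
        int changed = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                unsigned char d = 0;              /* dilation by the cross element */
                for (int j = -1; j <= 1 && !d; j++)
                    for (int i = -1; i <= 1 && !d; i++) {
                        if (i != 0 && j != 0) continue;   /* skip corners */
                        int xx = x + i, yy = y + j;
                        if (xx >= 0 && yy >= 0 && xx < w && yy < h && m[yy*w+xx])
                            d = 1;
                    }
                next[y*w+x] = d && bin[y*w+x];    /* M_t = h AND (M ⊕ B) */
                if (next[y*w+x] != m[y*w+x]) changed = 1;
            }
        memcpy(m, next, (size_t)(w * h));
        if (!changed) break;                      /* M_t - M_{t-1} = 0 */
    }
    for (int k = 0; k < w * h; k++)
        out[k] = bin[k] && !m[k];                 /* A = h - M */
    free(m);
    free(next);
}
```

This is the classic morphological-reconstruction form of border clearing: the intersection with bin at every step confines the growing marker to objects, so interior objects that never touch the seed survive the subtraction.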
Border Removal is undesirable for images that have features in this border region.

3.7.2 Results of the Border Removal Stage

Below are typical results using the Border Removal algorithm consisting of the stages previously described in detail. The following figures show the results at each stage of the algorithm, including the conditionally dilated image used for border removal.

Figure 3.27: (a) Raw magneto-optic image (b) Image after NLM and Threshold-Averaging

Figure 3.28: (a) Image after segmentation (b) The conditionally dilated image used for border removal

Figure 3.29: Final filtered image after Border Removal

Figure 3.30: (a) Raw magneto-optic image (b) Image after border removal stage

3.8 Conclusion

Magneto-optic images offer a unique challenge in image processing. The algorithms developed were required to eliminate the unwanted serpentine noise, preserve the edges of objects in the image, and be simple and fast enough for implementation in near real-time. The Multistage Filtering Algorithm consisting of contrast enhancement, Threshold-Averaging, and post-processing produced a gray-scale image with minimal noise while retaining the sharpness of object edges. The optional fourth stage of the multistage algorithm was a border-removal algorithm. This additional step is needed for images that contain a dark border region caused by the magnetic flux near the edge of the foil. This multistage algorithm fulfilled the image processing criteria and was the basis of a proof-of-concept hardware system and prototype system. Next, a quantitative measure was developed to assess the performance of the algorithms. The following chapter studies the use of metrics in quantifying the "goodness" of the images produced by the Threshold-Averaging stage and the border removal stages of the multistage filtering algorithm.
Chapter 4: Image Quality Metrics

4.1 Introduction

When developing filtering algorithms for image restoration, there is a need to evaluate the results of the different techniques in terms of image quality. There are no universal metrics for quantifying a filtered result. Two different criteria, objective and subjective, are commonly used in the engineering discipline for judging the "quality" of an image. The subjective criterion is based on the personal bias of human evaluators. The evaluator gives a filtered result a grade based on personal preference and opinion. The objective criterion is based on a mathematical criterion that tries to quantify certain characteristics of the resulting image. Both classes of criteria are used routinely for image evaluation. In the case of magneto-optic images, the image metric has to quantify how well the filtering algorithm removed the unwanted serpentine noise and preserved the object edges. These were the two filtering requirements described in Chapter 3. While there is no standard way to qualify these results, the goal is to develop a criterion function to quantify these features for magneto-optic images. Furthermore, the metric should be easily calculated so that it can be incorporated as feedback into the filtering software. The meshing of the metric and the algorithm would allow the software to pick the best filtering parameters for a particular data set based on the metric. This would allow for the eventual implementation of an adaptive filtering algorithm that can optimize the filter parameters.

4.2 Subjective Fidelity Criteria

The simplest and most commonly used image evaluation method is the Subjective Fidelity Criteria (SFC). All filtering algorithm designers use SFC in its simplest form. The human eye is an excellent judge of image quality. It can rapidly process, compare, contrast and evaluate numerous features and attributes of an image.
The SFC incorporates this natural ability and consists of giving human subjects a series of restored images and allowing them to assign a rating to each image independently [14]. The observer can also compare two images and rate one as better than the other. A typical SFC scale is illustrated in Table 4.1.

Table 4.1: Subjective Fidelity Criteria Scale

Numeric Value | Rating       | Description
4             | Excellent    | Restored image is of the highest quality with all desired features retained
3             | Good         | Restored image is of above-average quality with many of the desired features retained
2             | Average      | Restored image is of acceptable quality with some of the desired features retained
1             | Unacceptable | Restored image is of below-average quality with none of the desired features retained

This scale allows the observer to assign a rating and a corresponding numeric value to an image. After a number of observations, the results can be averaged and a particular filtering algorithm can be deemed the "best". For magneto-optic images, the SFC was used as a crude metric to compare filtering algorithms. The results of various algorithms were presented to members of the Non-Destructive Evaluation Laboratory for evaluation. The members graded the magneto-optic images on how well the various algorithms removed the noise and preserved the edges. There are a number of drawbacks to such a subjective scale. Each observer has a different opinion regarding how well a particular algorithm accomplished its goal. One person might view the retention of a certain feature, such as edge sharpness, as acceptable, while another might feel this feature has been destroyed. For this reason, the Subjective Fidelity Criteria makes an excellent first pass at comparing filtering algorithms. The SFC can be used as a way to select algorithms for additional development and to decide which images should have an objective metric applied to them.
An objective metric will generate an unbiased numeric value for a certain image characteristic.

4.3 Noise Removal Image Metric (SNR)

An objective metric to quantify the amount of noise in the magneto-optic image is based on the signal-to-noise ratio (SNR). The SNR is a widely used image evaluation metric and is defined for magneto-optic images as:

    SNR = μ_o / σ_b    (4-1)

where μ_o is the mean of an object region, such as a rivet, and σ_b is the standard deviation of the noisy background region. The mean of the object is defined as:

    μ_o = (1/N) Σ_{(s,t)∈S_obj} f(s,t)    (4-2)

where S_obj is a neighborhood of size N defined in an object and f(s,t) are the pixel intensity values in this neighborhood. The denominator of the SNR is the standard deviation of the background region and is defined as:

    σ_b = ( (1/N) Σ_{(i,j)∈S_bgnd} f²(i,j) − μ_b² )^(1/2)    (4-3)

where S_bgnd is a noisy background region of size N and μ_b is the mean of the background region. The standard deviation is an easily calculated parameter and is a good indicator of the smoothness of a filtered result [16]. Smaller standard deviation values correspond to less noise in the filtered result. The quick calculation of the SNR allows for a comparison of the signal strength of pertinent objects in the image to the degrading background noise. Higher SNR values correspond to images with less noise. The SNR value was used to compare the noise levels in multiple images and thereby to determine filter parameters.

Figure 4.1: (a) Raw magneto-optic image (b) Resulting image after Threshold-Averaging and post-processing stages

The black squares, A and B, in Figure 4.1 enclose the background and object areas, respectively, and were used for calculating the background noise standard deviation and the object mean.
Table 4.2 shows the SNR metric values for images processed using Threshold-Averaging and post-processing, with the size of the Threshold-Averaging mask varied from 3 x 3 to 7 x 7, and for a moving-average filter of size 7 x 7. The SNR allowed a quantitative value to be associated with each result.

Table 4.2: SNR metric values for Threshold-Averaging and Moving-Average filters

Image Type                               | Mask Size | Background Standard Deviation σ_b | Object Mean μ_o | SNR
Original                                 | -         | 33.67                             | 59.76           | 1.77
Threshold-Averaging and Post-Processing  | 3 x 3     | 6.57                              | 59.72           | 9.08
Threshold-Averaging and Post-Processing  | 5 x 5     | 5.00                              | 59.70           | 11.93
Threshold-Averaging and Post-Processing  | 7 x 7     | 4.54                              | 59.61           | 13.14
Moving-Average                           | 7 x 7     | 4.55                              | 59.74           | 13.13

The results show that larger window sizes result in smoother filtered images. The SNR of Threshold-Averaging with a 7 x 7 window size was comparable to that achieved with a moving-average filter of the same window size. The use of the SNR metric gives a numeric value to the amount of noise reduction in the filtered result and allows different filtering techniques to be compared.

Figure 4.2: (a) and (c) Raw magneto-optic images (b) and (d) Resulting images after Border Removal

4.4 Edge Sharpness Metric

A second metric was developed to quantify the sharpness of the edge of an object. This value describes the edge-preserving quality of each algorithm. As explained previously, edge preservation is important for data integrity and the detection of rivets with cracks. The metric developed for describing edge sharpness uses the finite difference approximation of the first-order derivative across a row of an object. Across an edge, pixel values change rapidly, resulting in a large difference between consecutive pixels. In a uniform intensity region, the difference between consecutive pixel values is close to zero. This property is illustrated in Figure 4.3 for an ideal step edge.
The image is divided into three sections and the corresponding finite difference approximation of the first-order derivative is plotted.

Figure 4.3: (a) Synthetic image consisting of three objects (b) Plot of the finite difference approximation of the first-order derivative

Figure 4.3(b) is the plot of the difference between consecutive points. The edges between objects are visible as spikes around points 100 and 200 on the graph. Since the edge is ideal, the width of the derivative spike is unity. In the case of a ramp edge, the first-order derivative produces a pulsed output. The less sharp an edge, the wider the pulse in the derivative plot. Figure 4.4(a) shows three different ramp edges with increasing slopes and Figure 4.4(b) shows their respective first-order derivatives. The sharpest ramp edge resulted in the narrowest pulse, as shown in Figure 4.4(b).

Figure 4.4: (a) Ramp edges with increasing slope (b) Corresponding plot of the finite difference approximation of the first-order derivative

By measuring the width of the pulse in the derivative plot, a quantitative metric to categorize the edge preservation quality of a given algorithm can be formed. A smaller width corresponds to a sharper edge, and a better level of edge preservation for a given algorithm. Figure 4.5 shows a synthetic magneto-optic image containing a single rivet. The sharpness of the rivet edge is calculated as the average width of the edges around the rivet. This can be obtained by averaging the derivative widths around the object, as illustrated in Figure 4.5.

Figure 4.5: Synthetic magneto-optic image with a single rivet, showing the edges used for determining the average

The value of the edge metric was found for the previous examples and is displayed in Table 4.3.
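The width measurement itself is straightforward to express in code. The sketch below (hypothetical name; a fixed magnitude threshold eps separates "flat" from "edge" differences, a detail the thesis leaves implicit) differences one row and returns the widest above-threshold run:

```c
#include <assert.h>

/* Edge-sharpness metric: first-order finite difference along a row, then
 * the width of the widest derivative pulse -- the longest run of
 * consecutive differences whose magnitude exceeds eps.  An ideal step
 * edge gives width 1; a ramp edge gives the length of the ramp. */
int edge_width(const double *row, int n, double eps)
{
    int run = 0, best = 0;
    for (int i = 1; i < n; i++) {
        double d = row[i] - row[i - 1];      /* finite difference */
        if (d > eps || d < -eps) {
            run++;
            if (run > best) best = run;
        } else {
            run = 0;                          /* flat region resets the run */
        }
    }
    return best;
}
```

Averaging this value over several rows crossing the rivet boundary gives the average derivative width used in the comparisons.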
Table 4.3: Edge width values for the various filtering stages

Image Type                      | Threshold-Averaging Window Size | Average Derivative Width
Threshold-Averaging             | 3 x 3                           | 21.00
Threshold-Averaging             | 5 x 5                           | 24.67
Threshold-Averaging             | 7 x 7                           | 27.00
Moving-Average Filter           | 7 x 7                           | 30.00
Border Removal, Figure 4.2(b)   | -                               | 2.67
Border Removal, Figure 4.2(d)   | -                               | 10.33

The edge metric gives a good estimate of the edge preservation property of each stage. The Threshold-Averaging Filter with the 3 x 3 window size had the smallest average derivative width in comparison to the larger window sizes. The increased derivative width for the larger window sizes reflects degradation in the edge preservation property. The resulting images after the Border Removal stage had the smallest derivative widths, which can be attributed to the binary nature of the image. The binary image is similar to Figure 4.3, with ideal step edges. The finite difference approximation of the first-order derivative resulted in a quick and simple way to measure edge preservation for comparison purposes.

4.5 Conclusion

The combination of the noise removal image metric (SNR) and the edge sharpness metric gives a quantitative value for the comparison of magneto-optic images. The SNR allows an estimate of the level of noise removal, and a simple finite difference approximation of the first-order derivative across an object quantifies the level of edge preservation in the image. These two companion metrics allow the filtering algorithms performed on the magneto-optic images to be compared and improved. By incorporating these metrics in a feedback loop, the optimal result could be selected for a particular data set. This leads to the hardware development work, presented in Chapter 5.

Chapter 5: Hardware and Software Implementation

5.1 Introduction

A series of hardware and software configurations were investigated for the real-time implementation of the image processing algorithms. The development was done in three stages.
The first stage consisted of the testing and development of the filtering algorithms described in Chapter 3. The second stage consisted of a proof-of-concept system to verify that it was feasible to implement the ideas from the first stage in a pseudo-system similar to the MOI. The third and final stage was the actual planning of a prototype system to be incorporated with the MOI system for the purpose of real-time image restoration. The ultimate goal of the hardware development portion of the project was to design a stand-alone system that could be added on to the existing MOI system and provide filtered images to the inspector. The hardware prototype would use the developed filtering algorithms to output the filtered results onto the video display. Each stage and configuration of the hardware and software development is presented in the following sections.

5.2 Algorithm Development and Testing System

The initial hardware and software system consisted of a standard PC and the Matlab scientific application software. The primary focus here was the development of filtering algorithms that met the desired noise removal specifications without consideration of the speed requirement. The data was provided in the form of digital video files for Windows (AVI) and on a VHS tape provided by PRI Research and Development Corporation. The VHS tape data was converted to an AVI file using Adobe Premiere software and stored on the PC. The filtering algorithms were developed off-line. Individual frames were imported and processed using algorithms developed in the Matlab environment. This stage of processing was strictly for validation of the image processing algorithms.
The flow of the system is illustrated in Figure 5.1.

Figure 5.1: Program flow using the PC and Matlab work environment for filtering algorithm development: MOI data stored on the PC is imported and processed by the image processing algorithms developed in Matlab, and the output is written back to the PC

This system was used strictly for the purpose of algorithm development. Though Matlab was designed for matrix computation, processing time on the system was slow because of the pixel-by-pixel filtering of the images. To increase the speed of the filtering algorithms and of the corresponding image processing, the Matlab code was converted to C source code. The built-in Matlab compiler was used and the C functions were called from the Matlab environment. This allowed the time-intensive portions of the algorithm to be processed in C, with Matlab handling the input and output of the data. Using Matlab alone, the implementation of the Threshold-Averaging Filter with post-processing took 102 seconds/frame on a 550 MHz Pentium III PC. Using the same PC, with the stages of the Threshold-Averaging Filter with Post-Processing implemented in C, the processing time was reduced to 2.63 seconds/frame, a 97% reduction in processing time. This set-up provided an environment to quickly and easily implement the filtering algorithms on various stored data sets. A number of candidate filtering algorithms were evaluated and an optimal algorithm was selected for on-line implementation on a proof-of-concept system.

5.3.1 Proof-of-Concept System

The proof-of-concept system was developed to verify that the selected algorithms could be implemented in an on-line environment. This was a necessary intermediate step before a prototype system could be developed. The goal of the proof-of-concept was simply to take a streaming input of data (images) in the format exported by the actual MOI system, filter the data, and output the filtered results.
It was decided to use a standard PC to provide the processing power behind the proof-of-concept system. The PC offered an inexpensive approach for development. An existing Dell Dimension PC with a 550 MHz Pentium III processor was selected to perform the necessary computations. 512 MB of RAM was installed on the machine to allow for faster memory allocation and better performance. The color CCD video camera mounted on the imager outputs an analog video signal using the National Television Standards Committee (NTSC) standard. To emulate the presence of the CCD video camera and its associated NTSC output, a VHS tape containing sample data was used in conjunction with a standard VCR. The VHS tape contained over seven minutes of actual data (2.0 GB when converted to a digital format) taken directly from the MOI system. By playing the VHS tape, an NTSC standard video signal is passed from the composite video-out jack on the back of the VCR. The next step of the proof-of-concept system involved applying the NTSC standard signal from the VCR as input to the computer. A Hauppauge video capture board was used to convert the data from an NTSC signal to a digital format usable by the computer. The Impact Video Capture Board (ImpactVCB) provides image capturing capabilities from an analog input, in this case the VCR. The ImpactVCB is a PCI board based on a Brooktree 878A video digitizer chip [17]. The ImpactVCB has the ability to capture individual frames or entire AVI files. The ImpactVCB digitizes the incoming analog NTSC signal. These digitized frames are stored in RAM and are accessible with the use of functions written in C code provided in Hauppauge's Developer's Toolkit. The Hauppauge ImpactVCB was an inexpensive solution for digitizing the NTSC signals and importing them into the system before filtering. An overview of the system is shown in Figure 5.2.
Figure 5.2: Flow chart of the Proof-of-Concept System: MOI data is transferred to a VHS tape, played through a VCR, and digitized by the Hauppauge Impact Video Capture Board (PCI card) in the 550 MHz Pentium III PC

The proof-of-concept system begins with sample data being recorded from the actual MOI system onto a VHS tape. The VHS tape is then played in a VCR and the analog output of the VCR is connected to the Hauppauge ImpactVCB to convert the NTSC signal into a series of frames for filtering. The actual set-up of the proof-of-concept system is shown in Figure 5.3.

Figure 5.3: Photograph of the Proof-of-Concept System (550 MHz Pentium III Dell Dimension PC with ImpactVCB; VCR outputting data from the VHS tape)

The filtering is performed on the PC using a Windows application developed in Visual C++, discussed in the following section.

5.3.2 Proof-of-Concept Application Development

A program was developed using Visual C++ and the Video for Windows API. The program has two routines: one that grabs and buffers the incoming data from the ImpactVCB and a second that filters the frames. The static filtering algorithms previously developed were implemented in C++ and incorporated into the application. A sample screen shot of the program is shown in Figure 5.4.

Figure 5.4: Proof-of-Concept interface with the raw image in the top window and the corresponding moving-average filtered image in the bottom window

The interface allows the user to select the type of filtering (the static filtering techniques discussed in this thesis). The program captures and buffers a predetermined number of frames and concurrently begins the filtering process. The buffering of frames guarantees that no frames are missed while the filtering process is occurring, ensuring the integrity of the data. The output of the filtering algorithm is then displayed on the screen along with the original raw image. The results from the proof-of-concept system were very positive.
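The grab-and-buffer / filter split described above is essentially a producer-consumer pair around a frame queue. A minimal single-threaded ring-buffer sketch (frames abbreviated to integer handles; a real implementation would store pixel buffers and guard the queue with a lock when the two routines run concurrently) is:

```c
#include <assert.h>

/* Fixed-capacity frame queue: the capture routine pushes incoming frames
 * and the filter routine pops them, so no frame is lost while filtering
 * lags behind capture.  RING_CAP and the integer frame handles are
 * illustrative choices, not taken from the thesis. */
#define RING_CAP 8

typedef struct {
    int frames[RING_CAP];
    int head, tail, count;
} ring_t;

int ring_push(ring_t *r, int frame)    /* capture routine: 0 means full */
{
    if (r->count == RING_CAP) return 0;
    r->frames[r->head] = frame;
    r->head = (r->head + 1) % RING_CAP;
    r->count++;
    return 1;
}

int ring_pop(ring_t *r, int *frame)    /* filter routine: 0 means empty */
{
    if (r->count == 0) return 0;
    *frame = r->frames[r->tail];
    r->tail = (r->tail + 1) % RING_CAP;
    r->count--;
    return 1;
}
```

Frames come back out in capture order, which preserves the temporal integrity of the video stream even when the filter momentarily falls behind.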
The proof-of-concept system showed that the data could be processed successfully in an on-line fashion. The only drawback was that the complicated static filtering algorithms, along with the use of a general-purpose personal computer, led to much larger processing times than desired. Using the 5 x 5 moving-average filter as a benchmark, the processing time was 1 second/frame. The processing time could have been greatly reduced by incorporating specialized hardware designed for image processing. These boards are PCI-based and use a combination of FPGA and pipeline processing to decrease processing time. Most of the operations used in the static filtering algorithms could be performed in the 5-10 millisecond range with the use of such specialty boards, available from many image processing hardware manufacturers including Datacube and Matrox [18][19]. There was no need to invest in dedicated image processing hardware because the proof-of-concept system was developed only to verify that the system flow, techniques, and algorithms would work. However, in a future design, the use of such specialty boards in conjunction with a PC would make for a possible real-time solution. The feasibility of the concept led to the planning of a prototype system.

5.4 Prototype System Requirements

As discussed earlier, the prototype system needs to fulfill a number of requirements. These requirements must be taken into account while constructing a plan for a prototype system. The prototype system must meet the following requirements:

- Real-time processing capabilities
- Self-sufficient stand-alone unit
- Relatively small size
- Analog signal data acquisition and output
- Programmability
- Memory capabilities
- Cost-efficiency

The first requirement, real-time processing capability, is extremely important for the processing of the images. The prototype system must be able to take the input frame, process it and output a filtered frame in real time (30 frames/second).
This is a necessary condition for ensuring that the inspector sees the filtered image instantaneously. Any delay in the filtered output would adversely affect the present inspection procedure. Real-time processing will allow the inspections to be carried out at the current speed.

The condition of using a stand-alone unit is necessary for the integration of the prototype board and the MOI system. The prototype has to be assimilated into the existing system with the least amount of changes, as illustrated in the following diagram.

Figure 5.5: Overview of prototype insertion into the existing MOI system — output from the existing MOI system passes through the prototype before reaching the existing video displays (head-mounted unit and monitor).

The goal is to minimize the modification, and therefore the cost, of the existing MOI unit. The actual MOI system consists of only a sensor, excitation unit and external displays. Anything larger than these existing components would restrict the portability of the overall system.

Analog data acquisition is very important to the prototype concept. The data sent from the MOI system is analog video using the NTSC standard. The analog signal must be decoded and converted into a usable digital format for computation purposes, regardless of the processor type. After the data is processed, it must be encoded back to the NTSC standard. These two steps will make up the initial and final stages of any prototype system, respectively.

There will always be a need to change and modify the design parameters of the filtering algorithm that is being implemented. Hence the system must be programmable, and the filtering algorithms must be implemented in a programmable language. This leads to the memory requirement. There must be memory for storing the filtering algorithm code and the images. If a filtering algorithm takes any more than 1/30 of a second, the next frame needs to be buffered.
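The frame-buffering requirement above can be sketched as a simple queue: one routine pushes each incoming frame, and the filtering routine pops frames as it finishes, so no frame is lost when filtering takes longer than 1/30 of a second. The names and the stand-in pixel-inversion "filter" below are illustrative only, not the actual prototype code.

```cpp
#include <cassert>
#include <cstdint>
#include <queue>
#include <vector>

using Frame = std::vector<uint8_t>;

static std::queue<Frame> frameBuffer;  // frames waiting to be filtered

// Routine 1: called for each frame delivered by the capture hardware.
// Buffering guarantees that no frame is dropped while filtering runs.
void bufferIncomingFrame(const Frame& f) {
    frameBuffer.push(f);
}

// Routine 2: pops the oldest frame and applies a (placeholder) filter.
// Returns false when no frame is waiting.
bool filterNextFrame(Frame& out) {
    if (frameBuffer.empty()) return false;
    out = frameBuffer.front();
    frameBuffer.pop();
    for (auto& p : out) p = static_cast<uint8_t>(255 - p);  // stand-in filter
    return true;
}
```

In a real prototype the queue would live in the dedicated memory banks discussed below, and the capture routine would be driven by the decoder's frame interrupt rather than a function call.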
There are also dynamic filtering algorithms being developed that use multiple frames for computational purposes. There must be memory available to store these additional frames.

The final requirement is cost-efficiency. The MOI is a commercial product and therefore must stay cost-competitive. The prototype system must be economical so that it can be incorporated into the commercial systems in volume with only a minor fluctuation in price.

The requirements listed above are all equally important in the design of a prototype system. There are two possible approaches that meet these requirements, namely DSP and FPGA-based systems.

5.5.1 Field Programmable Gate Array (FPGA)-Based Prototype System

The prototype requirements present a unique challenge for implementing a powerful but cost-effective solution for image restoration. A system that has been used in a number of image processing solutions is the Field Programmable Gate Array (FPGA) [32]. An FPGA contains numerous gates that can be connected to form adders, multipliers and other basic logic blocks. These blocks provide the core of any processing algorithm. The programs are developed using hardware description language (HDL) design or schematic capture software. The algorithm can be modified by making the necessary changes, compiling the program and reloading it onto the FPGA board [20]. FPGA image processing systems have been used to implement real-time image processing algorithms in a reconfigurable format [21]. A schematic of a general FPGA architecture is shown in Figure 5.6 [34][38].

Figure 5.6: General FPGA architecture — configurable logic blocks surrounded by I/O blocks [38]

In this FPGA architecture, input/output pins surround the outside of the layout and are connected to the logic blocks by a grid of interconnects.
The logic blocks are also connected to each other by interconnects, allowing the outputs of different logic blocks to be routed to the inputs of other logic blocks. In general, there are three different routing configurations used for the interconnection of configurable logic blocks and I/O blocks. The first is general-purpose interconnect, where signals are routed through switch matrices until they reach the desired logic block [38]. The second is direct interconnection, where adjacent logic blocks are directly connected. The third form of interconnection is the long line, where vertical and horizontal lines are used to connect blocks on the far edges of the board [38].

For example, to add two four-bit inputs, a chain of full adders could be imported from a standard library in a schematic capture program and connected accordingly. A netlist could then be formed from the schematic and downloaded to the FPGA. The logic blocks in the FPGA would then have the functionality of four full adders, allowing for the addition of the inputs.

An FPGA board can process images at very high rates of speed, and reconfigurable FPGA hardware systems have been used in the television industry to implement various enhancement algorithms in real-time for live broadcasts [22]. Computer vision systems have been developed using FPGAs to implement pattern recognition and edge detection algorithms at real-time speeds [23]. In one implementation of an adaptive Kalman filter for medical imaging, 1024 x 1024 images were processed at speeds of over 60 frames/second, which exceeds the real-time rate we are trying to achieve [24].

An FPGA can be incorporated into an existing computer by use of a PCI interface, or can operate as a stand-alone system, depending on the type of FPGA board selected. The FPGA board used in this prototype should be of the stand-alone variety to fulfill the requirement of self-sufficiency. Extra memory such as SRAM could also be integrated with the FPGA by use of external interfaces.
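The four-bit adder example above can be modeled in software. The sketch below simulates the chain of full adders, each built from the basic AND/OR/XOR gates that configurable logic blocks provide; this is a software illustration of the logic only, not HDL for an actual FPGA.

```cpp
#include <array>
#include <cassert>

// One full adder: sum = a XOR b XOR cin, carry = majority(a, b, cin).
struct FullAdderOut { bool sum; bool carry; };

FullAdderOut fullAdder(bool a, bool b, bool cin) {
    return { static_cast<bool>(a ^ b ^ cin),
             static_cast<bool>((a & b) | (a & cin) | (b & cin)) };
}

// Ripple-carry chain of four full adders. Inputs and output are arrays of
// bits, least significant bit first; the result has 4 sum bits plus the
// final carry, so two four-bit values always fit.
std::array<bool, 5> add4(const std::array<bool, 4>& a,
                         const std::array<bool, 4>& b) {
    std::array<bool, 5> out{};
    bool carry = false;
    for (int i = 0; i < 4; ++i) {  // the carry ripples down the chain
        FullAdderOut fa = fullAdder(a[i], b[i], carry);
        out[i] = fa.sum;
        carry = fa.carry;
    }
    out[4] = carry;
    return out;
}
```

On the FPGA itself, each `fullAdder` would occupy logic-block resources and the "loop" would be four physical adders wired in series through the interconnect.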
This memory will allow for the storage of the buffered images. In scenarios where the speed of the FPGA exceeds 30 frames/second, an output buffer will be needed. This will ensure that a bottleneck does not occur in the FPGA and that images are not lost after processing. The input/output data interfaces will operate through an NTSC decoder and NTSC encoder. A decoder converts the analog video signal from the NTSC standard to a digital format usable by the FPGA. After the completion of the filtering algorithms, the processed data will be encoded from the digital format back to the NTSC standard and passed to the video display devices. An overview of the proposed system is shown in Figure 5.7.

Figure 5.7: Overview of the prototype system flow using an FPGA — the analog video input passes through an NTSC decoder, memory buffers for input and output images surround the FPGA, and an NTSC encoder produces the analog output signal.

5.5.2 FPGA-Based Prototype Development Boards

A number of development kits can be purchased that have all the capabilities of the described prototype system. These multimedia development kits contain the necessary interfaces, encoders, decoders, RAM memory and FPGA to implement and test the system. Xilinx offers a development board based on their Virtex-II FPGA. This board has been developed for multimedia applications and contains composite video input and output, and five external memory banks to act as buffers. The composite input and output jacks would provide the conversion of the analog and digital signals. The development kit also includes the necessary PC interfacing and software tools for the development of filtering algorithms [25]. A similar development kit from the Xess Corporation incorporates the same basic interfaces as above but uses the Xilinx Spartan IIE FPGA [26]. This particular FPGA is interesting because it is one of the most cost-effective FPGAs on the market and was designed for use with video images.
The architectural layout of a single quadrant of the Spartan IIE FPGA is illustrated in Figure 5.8.

Figure 5.8: Architectural layout of a single quadrant of the Spartan IIE FPGA — DLL, I/O logic blocks, memory (RAM), and CLBs [27]

This quadrant is repeated at each of the four corners of the chip and is similar to the general FPGA architecture shown in Figure 5.6. The RAM is located at each end, separated by the configurable logic blocks (CLB). Programmable I/O logic blocks surround the RAM and CLBs, and four delay-locked loops (DLL) make up the corners of the chip. The dedicated I/O logic blocks improve data transfer rates and ensure the integrity of the incoming data. The Spartan IIE can be selected with a particular number of logic gates, pins and memory capacity based on the requirements of the application [27]. The development kit contains a pair of on-board encoders and decoders for the conversion of the analog video signals. These provide the input and output interfacing as previously discussed. The board also has a number of expansion buses for adding other components and modules.

Both boards offer interesting platforms for the development of the FPGA prototype system, and portions of each could be incorporated into the final design. The most cost-effective approach for the prototype system would be to design a board using only the components from the development board that are strictly necessary. This would make for a smaller, cheaper system. An application-specific specialty board would have to be developed containing the encoder, decoder, RAM, FPGA and interfaces. Each component is available for purchase separately. The Philips encoder and decoder are packaged in a 14 x 14 x 1.4 mm, 100-pin layout. The FPGA can be purchased separately with a specified number of I/O pins, logic gates, and memory units.
The FPGA would then be incorporated onto an application-specific specialty board, manufactured with only the desired components, and mass-produced for use in the existing MOI system. An overview of the actual prototype system is shown below with a block layout of the necessary components.

Figure 5.9: Block layout of an application-specific specialty board for the prototype system — composite video input jack, NTSC decoder, FPGA, RAM memory, NTSC encoder, composite video output jack, power input jacks, and the necessary routing and surface components.

By combining the components in Figure 5.9, a stand-alone prototype system could be developed at a very cost-effective price. This would allow real-time image processing to be incorporated into the commercial MOI system.

5.6.1 Digital Signal Processor (DSP)-Based Prototype System

A programmable DSP could be used in place of the FPGA, providing an alternative approach for the prototype system. A DSP is a microprocessor that can be programmed using C or assembly language. The simplest processors used in a DSP system usually consist of an arithmetic-logic unit (ALU), a shifter and a multiply-accumulate unit [35]. These execution units operate in parallel to allow for real-time processing of data. The ease of programming, and of incorporating existing C code into the system, makes the use of a DSP board advantageous to many developers. A DSP board is useful for computing math-intensive functions and for evaluating conditional logic statements. DSP systems have been used in medical image processing, including the computation of cross-sectional images of the aorta in real time [28]. The use of such systems in a real-time environment makes a DSP-based approach a candidate for the prototype system. The requirements mentioned for the FPGA-based system hold for the DSP, including the need for a stand-alone system for portability.
The NTSC standard signal conversions will be realized in the same manner, by use of an NTSC decoder and encoder. The DSP will require memory units for the buffering of incoming frames. The flow of the DSP system is illustrated in Figure 5.10.

Figure 5.10: Overview of the prototype system flow using a DSP — the analog video input passes through an NTSC decoder, memory buffers for input and output images surround the DSP, and an NTSC encoder produces the analog output signal.

5.6.2 DSP-Based Prototype Development Boards

There are a number of multimedia development kits available that incorporate video interfaces with the DSP for image processing. Texas Instruments (TI), the leader in DSP technology and development, offers a number of different DSP chips and development kits that fulfill the real-time image processing requirements. The boards have similar interfaces and features as mentioned in the FPGA section, but with the FPGA replaced by a DSP. The use of such boards will allow for the development of a real-time prototype system.

A cost-effective prototype system could be built without the use of a development system. As with the FPGA, only the necessary components from the development system would be incorporated onto an application-specific specialty board. The specialty board would contain the video interfaces, memory, and power connections similar to the block layout in Figure 5.9, with a DSP replacing the FPGA. The development of such a board would be a cost-effective real-time solution for the commercial MOI.

5.7 DSP-Based Development Board Prototype System

A prototype system was developed using the TMS320VC5510 DSP Development System (DSK) from Spectrum Digital Inc. The DSK features the high-performance, low-power, fixed-point TMS320VC5510 DSP from Texas Instruments, part of the TI C55x DSP family. The TMS320VC5510 DSP has an operating speed of 200 MHz and can perform 400 MMACS. The DSK has 8 MB of DRAM and 512 KB of non-volatile Flash memory on board.
The board also has a pair of composite input/output jacks and an AIC23 codec for sending and receiving analog signals. The DSK is programmed using Code Composer Studio (CCS), a development environment for programming DSKs and DSPs. CCS allows the user to write programs in C and upload them to the DSK via a USB port.

The benchmark 5 x 5 moving-average filter was developed using CCS and downloaded to the DSK. The DSK was used to process 128 x 128, 120 KB, grayscale magneto-optic images. Using the DSK, the magneto-optic image was processed in 0.012246 seconds, which is well within the real-time requirement of 30 frames/second. The success of the DSK shows that it is feasible to develop a near real-time image processing system using a DSP platform. The next step would involve incorporating the NTSC encoder/decoder for the analog NTSC signal input/output onto an application-specific board, along with the necessary memory components, as mentioned in section 5.6.

5.8.1 DSP and FPGA Comparisons

Both the DSP and FPGA-based systems would allow for real-time image processing of the magneto-optic images. Four requirements were considered for evaluating the two architectures for this particular image processing system [29]:

- Speed
- Development Time
- Programmability
- Price

These criteria were used to compare the DSP and FPGA and determine which was the best fit for our needs.

5.8.2 Speed

Computational speed was an important factor because of the real-time processing requirement. The FPGA offers the best performance in terms of processing speed and works extremely well at handling multiple data channels. The FPGA uses a parallel processing architecture, as compared to the sequential architecture of the DSP, to accelerate processing and reduce processing time [30]. For some applications, the FPGA can be orders of magnitude faster than a comparable DSP [37].
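The throughput figures quoted in this chapter can be checked with a quick computation: the DSK's measured 0.012246 seconds per frame corresponds to roughly 81.7 frames/second, well above the 30 frames/second requirement, while the PC proof-of-concept's 1 second per frame falls far short. A small sketch of that check:

```cpp
#include <cassert>

// Converts a measured per-frame processing time into a frame rate.
double framesPerSecond(double secondsPerFrame) {
    return 1.0 / secondsPerFrame;
}

// True when the measured time meets the real-time requirement
// of 30 frames/second used throughout this chapter.
bool meetsRealTime(double secondsPerFrame, double requiredFps = 30.0) {
    return framesPerSecond(secondsPerFrame) >= requiredFps;
}
```

Applying it to the two measurements: `meetsRealTime(0.012246)` holds for the DSK, while `meetsRealTime(1.0)` fails for the benchmarked PC implementation.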
While the FPGA has a higher level of performance, it is considered a hardware solution because of its use of programmable logic gates. The DSP, on the other hand, is considered a software solution, and has the best real-time performance among programmable solutions. Using a performance index in terms of millions of instructions per second (MIPS) or millions of multiply-accumulates per second (MMACS), the DSP's speed is comparable to the FPGA's [29]. Using the Fast Fourier Transform and a voice synthesis compression algorithm as benchmarks, the DSP even outperformed the FPGA in some case studies [31]. In general, the FPGA has higher computational speeds than the DSP, but the margin is small or nonexistent in many applications. The DSP and FPGA thus satisfy the speed requirement equally well.

5.8.3 Development Time and Programmability

Development time was the second requirement and was very important for this application. The development of the prototype must be rapid to meet the project deadlines and leave adequate time for testing. The development time is coupled with the programmability of the FPGA or DSP: faster and easier programming reduces the overall development time. The ease of software programming makes the DSP more advantageous for this particular application. The filtering algorithms have previously been developed in C and Matlab, allowing for a quick conversion for use with the DSP. The hardware description language used by the FPGA would take longer to implement and would increase development time because of the lack of familiarity. Programming of the DSP is also more flexible than that of the FPGA and can express a broader range of mathematical operations as well as conditional logic operations. Many of the filtering algorithms contain conditional logic, which is easier to implement in C than in HDL, making the DSP a better choice.
In analyses of various architectures and implementations, the DSP had the fastest overall development time [29][30][37]. Programming flexibility is very important for this application because there are multiple filtering algorithms that need to be implemented and tested. Changing program parameters is a multi-step process for both the DSP and the FPGA. With the FPGA, the schematic or HDL is changed and the netlist is regenerated and downloaded to the FPGA through an interface such as a serial or parallel port. With the DSP, parameter changes are made in the code and the program is recompiled. The code is then downloaded to the DSP by use of a serial, parallel, USB or similar interface. The major drawback of the DSP is that the program must switch tasks to read data, which can slow the overall processing time, though I/O handling improvements have been made to minimize this problem [30]. The previous development of the filtering algorithms, and the user's familiarity with C and similar programming languages, make the DSP a more suitable choice in terms of ease of programming and development time.

5.8.4 Price

The final requirement is cost-effectiveness. In general, a DSP is cheaper than an FPGA, both in terms of individual chips and development boards. Using benchmarks of the Fast Fourier Transform, an LPC synthesis filter, and a Viterbi decoding algorithm, the performance-to-cost ratio of the DSP is better than that of the FPGA by a factor of two [31]. The DSP offers the best combination of price, performance, and development time for real-time applications [29].

The DSP was the optimal solution for the prototype design. It offered a faster development time and an easier programming environment. While the FPGA, in general, had a higher level of performance, the performance-to-cost ratio of the DSP made it the ideal choice for implementing a prototype system for this application.
5.9 Conclusion

The use of a stand-alone FPGA or DSP-based system would allow a real-time image processing prototype system to be developed. Each system has its advantages and meets the requirements for implementation into the commercial MOI. Based on the established criteria, the DSP-based system was chosen for implementation. The system would have the capability to read in an NTSC standard analog video signal, filter the image, and output the filtered result in NTSC standard format, all at the real-time rate of 30 frames/second. Designing a single application-specific board and purchasing the individual components separately would make for a very small, cost-effective design. This prototype could then be incorporated into the existing MOI system to provide on-the-fly image restoration capabilities.

Chapter 6: Conclusion and Possible Future Work

6.1 Conclusion

The magneto-optic images produced by the Magneto-Optic/Eddy Current Imager provide a unique challenge for developing image processing techniques. The image enhancement algorithms that were selected were required to remove the unwanted serpentine noise from the images while preserving the sharpness of object edges. The noise reduction allows smaller defects to be detected, thereby enhancing the POD of the MOI system. The algorithms were simple enough to allow for an inexpensive, near real-time hardware implementation. This hardware prototype could eventually be integrated into the commercial MOI for real-time image processing of the incoming stream of magneto-optic images.

A number of spatial filtering techniques such as the median, moving-average, arithmetic, geometric, and harmonic mean filters were investigated. While the noise removal in the resulting images was acceptable, the level of edge preservation was not. The lack of edge preservation led to the development of the Multistage Filtering Algorithm. The first stage consisted of a Nonlinear Gray Level Mapping Algorithm (NLM) for contrast enhancement.
The second stage was a Threshold-Averaging Filter; the resulting images were free of the serpentine noise that plagued the original images and retained a higher level of edge preservation. Post-processing stages were incorporated into the algorithm to further remove unwanted noise. The optional fourth stage was a border-removal algorithm for eliminating the dark border regions characteristic of many magneto-optic images.

Image metrics were developed for comparison purposes. The signal-to-noise ratio (SNR) was used to quantify the performance of noise removal in the filtered results. A larger SNR value corresponded to a resulting image containing less serpentine noise. A second, companion metric was developed to quantify the level of edge preservation. An average sharpness of the edge was calculated from the slope of the first-order derivative at the object edges and used to indicate the degree of edge preservation. A small average derivative width corresponded to a sharp edge and an image with a high level of edge retention. These metrics were used to compare the different filtering methods and could be used to adaptively optimize filtering parameters.

The filtering algorithms were implemented in a proof-of-concept system. This proof-of-concept system was designed to simulate the actual MOI and was used to verify the functionality of the filtering algorithms and the overall system flow with respect to processing results and computational time. After completing the proof-of-concept system, a prototype system design was developed.
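The SNR metric summarized above can be illustrated with a common power-ratio definition in decibels. This is a generic formulation for illustration only, not necessarily the exact expression developed in Chapter 4, and the separation of an image into "signal" and "noise" samples is assumed to be done beforehand.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Generic signal-to-noise ratio in decibels:
// SNR = 10 * log10(signal power / noise power),
// where power is the sum of squared sample values. A larger value
// corresponds to an image containing less residual noise.
double snrDb(const std::vector<double>& signal,
             const std::vector<double>& noise) {
    double ps = 0.0, pn = 0.0;
    for (double s : signal) ps += s * s;
    for (double n : noise)  pn += n * n;
    return 10.0 * std::log10(ps / pn);
}
```

With such a metric in code form, the filter comparison (and the adaptive parameter optimization mentioned above) reduces to evaluating it on each filtered result.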
The prototype system had to meet the following requirements:

- Real-time processing capabilities
- Self-sufficient stand-alone unit
- Relatively small in size
- Analog signal data acquisition and output
- Programmable
- Memory capabilities
- Cost-efficient

After careful analysis of these requirements and a review of previously developed image processing systems, it was determined that an FPGA or DSP-based system would fulfill our needs. Both the DSP and FPGA-based systems would allow for real-time image processing of the magneto-optic images. A set of criteria was developed for choosing the correct architecture for this particular system and was used to differentiate between the DSP and FPGA. The four requirements for evaluating the image processing architectures were:

- Performance
- Development Time
- Programmability
- Price

After evaluating these criteria, it was determined that the DSP was the optimal solution for the prototype design. It offered a faster development time and an easier programming environment. While the FPGA had a higher level of performance, the DSP had the best real-time performance among programmable solutions, and using the performance index of millions of instructions per second (MIPS) or millions of multiply-accumulates per second (MMACS), it was comparable to the FPGA. The performance-to-cost ratio of the DSP made it the ideal choice for implementing the prototype system. A real-time image processing system was developed using a DSP development system. Based on these results, a cost-effective prototype system could be built with the design of an application-specific board. The board would consist of the DSP, NTSC encoder/decoder, video interfaces, memory, power connections and other necessary components. The specialty board would be integrated into the commercial MOI, allowing for near real-time image processing of the magneto-optic images.
6.2 Possible Future Work

There are a number of areas for future work, especially in terms of hardware implementation. Both FPGA and DSP-based application-specific boards could be built based on the designs in Chapter 5. An Application-Specific Integrated Circuit (ASIC) could also be designed and built based on the specifications presented in this thesis; the ASIC would only be cost-effective if it were produced in large quantities. A comparison of the systems could be performed, and the best design, based on the prototype system requirements listed in section 5.4, could then be used in a commercial design. Another idea is the use of multiple hardware filtering modules to allow the inspector to choose or scan different filtering algorithms. An automated system could also be implemented to allow an application to pick the optimal parameters or algorithms based solely on the image quality metrics presented in Chapter 4. The filtering algorithms themselves could also be used in other image enhancement projects where the images contain similar noise characteristics. There are many different areas for future work, both in terms of hardware implementation and image processing techniques.

BIBLIOGRAPHY

[1] B. Raj and C. V. Subramanian, “Nondestructive Testing of Welds,” Materials Evaluation, pp. 1251-1254, Dec. 1999.

[2] D. E. Bray and R. K. Stanley, Nondestructive Evaluation. New York: McGraw-Hill, 1989.

[3] K. G. Boving, NDE Handbook. London: Butterworths, 1989.

[4] G. L. Fitzpatrick, D. K. Thome, R. L. Skaugset, E. Y. C. Shih, and W. C. L. Shih, “Novel Eddy Current Field Modulation of Magneto-Optic Garnet Films for Real-Time Imaging of Fatigue Cracks and Hidden Corrosion,” Physical Research, Inc., Torrance, California, 1993.

[5] V. J. Brechling and F. W. Spencer, “The Validation Process as Applied to the Magneto-Optic/Eddy Current Imager (MOI),” Materials Evaluation, pp. 815-818, July 1995.

[6] G. L. Fitzpatrick, D. K. Thome, R. L.
Skaugset, E. Y. C. Shih, and W. C. L. Shih, “Magneto-Optic/Eddy Current Imaging of Aging Aircraft: A New NDI Technique,” Materials Evaluation, pp. 1402-1407, Dec. 1993.

[7] J. K. Hartman and P. W. Kamer, “Magneto-Optic Inspection of Tight Stress Corrosion Cracks in Ferromagnetic Material,” Thiokol Corporation, Brigham City, Utah.

[8] L. Xuan, B. Shankur, L. Udpa, W. Shih, and G. Fitzpatrick, “Finite Element Model for MOI Applications Using A-V Formulation,” Review of Progress in Quantitative Nondestructive Evaluation, D. O. Thompson and D. E. Chimenti, Eds., Vol. 20, American Institute of Physics, pp. 385-391, 2001.

[9] M. N. Deeter, G. W. Day, T. J. Beahn, and M. Manheimer, “Magneto-optic magnetic field sensor with 1.4 pT/Hz^(1/2) minimum detectable field at 1 kHz,” Electronics Letters, Vol. 29, pp. 993-994, May 27, 1993.

[10] P. Ramuhalli, J. Slade, U. Park, L. Xuan, and L. Udpa, “Modeling and Signal Processing of Magneto-Optic Images for Aviation Applications,” presented at the International Conference on Smart Materials Structures and Systems, Bangalore, India, July 17-19, 2002.

[11] D. K. Thome, “Development of an Improved Magneto-Optic/Eddy Current Imager,” U.S. Department of Transportation, Federal Aviation Administration, Apr. 1998.

[12] S. Simms, “MOI: Magneto-Optic/Eddy Current Imaging,” Materials Evaluation, pp. 529-534, May 1993.

[13] N. Eua-Anant, “A Novel Boundary Extraction Algorithm Based on a Vector Image Model,” ISU Thesis, Ames, IA, 1996.

[14] R. Gonzalez and R. Woods, Digital Image Processing. Upper Saddle River, New Jersey: Prentice Hall, 2002.

[15] J. Lim and S. S. Udpa, “Defect Extraction in MOI Images Based on Wavelet Packet Transform and Morphological Technique,” Review of Progress in Quantitative Nondestructive Evaluation, D. O. Thompson and D. E. Chimenti, Eds., Vol. 20, American Institute of Physics, pp. 611-618, 2001.

[16] H. Chen, A. Li, L. Kaufman and J.
Hale, “A Fast Filtering Algorithm for Image Enhancement,” IEEE Transactions on Medical Imaging, vol. 13, no. 3, pp. 557-564, Sept. 1994.

[17] Hauppauge Computer Works, www.hauppauge.com, Hauppauge, New York, U.S.

[18] Datacube Inc., www.datacube.com, Danvers, Massachusetts, U.S.

[19] Matrox Inc., www.matrox.com, Dorval, Quebec, Canada.

[20] J. Smith and S. Liebman, “Image Acquisition, Real-Time Processing and Display with FPGA,” Electronic Engineering, pp. 82-84, Sept. 1998.

[21] S. Klupsch, M. Ernst, S. Huss, M. Rumpf and R. Strzodka, “Real Time Image Processing Based on Reconfigurable Hardware Acceleration,” presented at the Workshop on Heterogeneous Reconfigurable Systems on Chip, April 2002.

[22] K. Henriss, P. Ruffer, R. Ernst, and S. Hasenzahl, “A Reconfigurable Hardware Platform for Digital Real-Time Signal Processing in Television Studios,” Proc. 2000 IEEE Symposium on Field-Programmable Custom Computing Machines, pp. 285-286, 2000.

[23] I. Kramberger and M. Solar, “DSP Acceleration Using a Reconfigurable FPGA,” presented at the 1999 IEEE International Symposium on Industrial Electronics, July 12-16, 1999.

[24] R. D. Turney, A. M. Reza, and J. G. R. Delva, “FPGA Implementation of Adaptive Temporal Kalman Filter for Real Time Video Filtering,” Core Solutions Group, Xilinx, Inc., San Jose, California, Mar. 1999.

[25] Xilinx Inc., www.xilinx.com, San Jose, California, U.S.

[26] Xess Inc., www.xess.com, Apex, North Carolina, U.S.

[27] “Spartan-IIE 1.8V FPGA Family: Introduction and Ordering Information,” Product Datasheet, Xilinx Inc., San Jose, California.

[28] V. Gemignani, M. Demi, M. Paterni, A. Benassi, “A DSP-Based System for Real-Time Analysis of Medical Image Sequences,” Proc. of the International Conference on Signal Processing Applications & Technology, pp. 1-5, 2000.

[29] L. Adams, “Choosing the Right Architecture for Real-Time Signal Processing Designs,” Texas Instruments, Dallas, Texas, Nov. 2002.

[30] R. C.
Restle, “Math on Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs),” Wyle Electronics, Irvine, California.

[31] S. Köhler, S. Sawitzki, A. Gratz, R. G. Spallek, “Digital Signal Processing with General Purpose Microprocessors, DSP and Reconfigurable Logic,” in IPPS/SPDP '99 Workshops, held in conjunction with the 13th International Parallel Processing Symposium and the 10th Symposium on Parallel and Distributed Processing, 1999.

[32] R. D. Turney and C. H. Dick, “Real Time Image Rotation and Resizing, Algorithms and Implementations,” Core Solutions Group, Xilinx, Inc., San Jose, California.

[33] W. C. L. Shih, “Optimization of the Magneto-Optic/Eddy Current Imager (MOI) for Detection of Subsurface Corrosion,” Physical Research, Inc., Torrance, California, 1994.

[34] V. George and J. M. Rabaey, Low-Energy FPGAs. Boston: Kluwer Academic Publishers, 2001.

[35] P. Lapsley, J. Bier, A. Shoham, DSP Processor Fundamentals: Architectures and Features. New York: Wiley-IEEE Press, 1997.

[36] S. S. Udpa and L. Udpa, “Eddy Current Nondestructive Evaluation,” Encyclopedia of Electrical and Electronics Engineering, John G. Webster, Ed., John Wiley, 1999.

[37] R. C. Restle, “Choosing Between DSPs, FPGAs, µPs and ASICs to Implement Digital Signal Processing,” presented at the Proc. of the International Conference on Signal Processing Applications & Technology, 2000.

[38] C. H. Roth, Jr., Digital Systems Design Using VHDL. New York: PWS Publishing Company, 1998.