MULTI-MODALITY NONDESTRUCTIVE EVALUATION TECHNIQUES FOR INSPECTION OF PLASTIC AND COMPOSITE PIPELINE NETWORKS

By

Mohand Alzuhiri

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Electrical Engineering – Doctor of Philosophy

2022

ABSTRACT

MULTI-MODALITY NONDESTRUCTIVE EVALUATION TECHNIQUES FOR INSPECTION OF PLASTIC AND COMPOSITE PIPELINE NETWORKS

By

Mohand Alzuhiri

The adoption of plastic pipelines is growing across many fields of industry, with applications that extend from municipal water and sewer systems to the water lines in nuclear reactors. This large-scale adoption is motivated by the unique features of plastics, such as corrosion and chemical resistance, low cost, and design flexibility. While the dielectric nature of plastic pipelines provides unique design capabilities, it also introduces new challenges for operators when it comes to inspecting and ensuring the integrity of these pipeline networks. In this study, a multi-modal approach is adopted to address the threats affecting the safety of small diameter plastic pipelines and to explore possible inspection solutions for emerging materials like composites. Structured light (SL) endoscopes with RGB-D inspection capability were developed for the inspection of surface defects in small diameter pipelines, with the following novelties: a) design and miniaturization of an RGB-D structured light sensor with electronic stabilization, b) development of an algorithm to automatically calibrate the sensor when placed in a cylindrical environment, c) design of a single-shot phase measurement SL sensor that employs the sensor movement to improve the 3D reconstruction, and d) design of a stereoscopic SL sensor for 360-degree inspection. EM-based inspection was adopted to inspect subsurface defects and classify materials around the inspected pipelines: an investigative study was performed to test the probability of detecting cold fusion in butt fusion joints by using emerging NDE techniques, and a coplanar capacitive sensor was designed for the detection of legacy cross bores in gas pipelines. Finally, a thermoacoustic imaging (TAI) system was developed in this study with potential applications in the inspection of composites and in medical imaging. The novelties of this work can be summarized as follows: a) development of a simulation model to study thermoacoustic wave generation and the effect of multiple experimental parameters on the performance of thermoacoustic imaging systems, and b) improvement of the signal-to-noise ratio of pulsed TAI systems by adopting noncoherent pulse compression. In summary, this study presents a multi-modal approach for the inspection of pipeline networks by adopting optical RGB-D imaging sensors for surface inspection, EM-based sensors for subsurface inspection and classification of objects outside the pipe, and finally, a hybrid imaging method with potential applications in medical imaging and the inspection of composites.

Copyright by
MOHAND ALZUHIRI
2022

I dedicate this work to my father, mother, lovely wife, and my family, who supported me throughout my life journey and motivated me to overcome all difficulties and become the person I am.

ACKNOWLEDGEMENTS

I want to express my sincere gratitude to every person who has been there for me and helped me throughout my PhD journey. I want to thank my advisor and mentor, Dr. Yiming Deng, for his extraordinary support, guidance, and encouragement during my PhD study.
He always believed in me and supported me. His support was not limited to my research but extended to my career and life. I feel fortunate to have worked with such a knowledgeable and compassionate advisor. I want to thank my PhD committee: Dr. Lalita Udpa, Dr. Antonello Tamburrino, Dr. Prem Chahal, and Dr. Yang Yang, for their time, guidance, and suggestions. I want to thank my colleagues at NDEL for their support, help, and thoughtful discussions. Finally, I would like to express my gratitude to my wife and my family for their unwavering support and encouragement.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
LIST OF ALGORITHMS
KEY TO ABBREVIATIONS
CHAPTER 1 INTRODUCTION
  1.1 Motivation
    1.1.1 Slow Crack Growth
    1.1.2 Cold Fusion in Butt Fusion Joints in Plastic Pipelines
    1.1.3 Cross Bores
  1.2 Challenges and Proposed Solutions
  1.3 Scope of work
    1.3.1 Optical RGB-D imaging
      1.3.1.1 SL Sensor Miniaturization and Electronic Point Cloud Stabilization
      1.3.1.2 RGB-D Inspection with Automatic Calibration Capability
      1.3.1.3 Stereoscopic Endoscopic SL Sensor for 360-degree profiling
      1.3.1.4 Movement Enhanced Phase Measurement SL Sensor
    1.3.2 Thermoacoustic Imaging
    1.3.3 An Investigative Study to Detect Cold Fusion in Butt Fusion Joints
    1.3.4 A Capacitive Inspection Sensor for the Detection of Legacy Cross Bores
CHAPTER 2 AN INTRODUCTION TO OPTICAL INSPECTION AND 3D PROFILING
  2.1 3D Profiling
  2.2 Pinhole Camera Model
  2.3 Image Distortion
  2.4 Stereo Camera Model
CHAPTER 3 MULTI-COLOR MULTI-RING STRUCTURED LIGHT SENSOR WITH AUTOMATIC STABILIZATION
  3.1 Introduction
  3.2 Endoscopic SL Sensor Design and Fabrication
    3.2.1 Sensor Design
    3.2.2 Pattern Design
    3.2.3 Sensor Fabrication
  3.3 Simulation Environment
  3.4 3D Surface Reconstruction
    3.4.1 Sensor Calibration
    3.4.2 Image Segmentation Decoding
    3.4.3 Sensor Stabilization and Point Cloud Registration
  3.5 Discussion and Performance Evaluation
    3.5.1 Inspection of PVC White Pipe
      3.5.1.1 Artifacts Correction
      3.5.1.2 Effect of Blind Spots
      3.5.1.3 Evaluation of Error in the Final 3D Profile
    3.5.2 Inspection of MDPE Yellow Pipe with Non-Stabilized Sensor
  3.6 Small Diameter Sensor
  3.7 Conclusion
CHAPTER 4 RGB-D SL SENSING WITH AUTOMATIC CALIBRATION
  4.1 Introduction
  4.2 Sequential RGB-D Sensor
    4.2.1 Sensor Design
    4.2.2 Sensor Synchronization
  4.3 Sensor Calibration
    4.3.1 Reference Plane Based Calibration (RPBC)
    4.3.2 Automatic Calibration
    4.3.3 Correction of Artifacts
  4.4 Experimental Validation
    4.4.1 Conventional Calibration
    4.4.2 Autocalibration with Multiple Rings Projection
      4.4.2.1 Auto Calibration with a Single-Ring Projection
      4.4.2.2 Autocalibration with Multiring Projection
    4.4.3 Correction of Artifacts
    4.4.4 Performance Evaluation
  4.5 Conclusion
CHAPTER 5 STEREOSCOPIC ENDOSCOPIC SL SENSING FOR 360 DEGREE INSPECTION
  5.1 Introduction
  5.2 SL Sensor Design
  5.3 Sensor Calibration
  5.4 Conclusion
CHAPTER 6 MOVEMENT ENHANCED SL SENSING WITH MOTION INDUCED PHASE SHIFTING
  6.1 Introduction
  6.2 Movement Based Digital Fringe Projection
  6.3 Oscillation Reduction by Optimization
  6.4 Correction by Using Stereo
  6.5 Simulation Study
  6.6 Effect of Nonlinear Phase Projection
  6.7 Features Extracted From the Sensors
  6.8 SL Sensor Design
    6.8.1 Rectangular Geometry Sensor
    6.8.2 Cylindrical Geometry Sensor
      6.8.2.1 Model A
      6.8.2.2 Model B
      6.8.2.3 Model C
  6.9 Conclusion
CHAPTER 7 THERMOACOUSTIC IMAGING: A SIMULATION STUDY
  7.1 Introduction
  7.2 Excitation Sources for Thermoacoustic Imaging
    7.2.1 Photoacoustic
    7.2.2 Microwave Induced Thermoacoustic
    7.2.3 Magnetically Mediated Thermoacoustic
  7.3 Thermoacoustic Theory
  7.4 Excitation Methods
  7.5 Coexistence With Other Physical Phenomena
  7.6 Numerical Modeling
    7.6.1 Model Geometry
    7.6.2 Model Validation
  7.7 Image Reconstruction
  7.8 Case Studies and Discussion
    7.8.1 Simulation Study Of Image Reconstruction With Time Reversal
    7.8.2 Effect of Material Conductivity
    7.8.3 Effect of EM Pulse Width
    7.8.4 Open Ended Waveguide vs Horn Antenna
  7.9 Conclusion
CHAPTER 8 THERMOACOUSTIC IMAGING WITH CODED PULSE EXCITATION
  8.1 Introduction
  8.2 Theory of NPC Enhanced TAI
    8.2.1 Fundamentals of Thermoacoustic Imaging
    8.2.2 Noncoherent Pulse Compression Technique
      8.2.2.1 Mismatched Filter
      8.2.2.2 Complementary Manchester Coded Pairs
  8.3 Image Reconstruction
  8.4 Measurement Setup
  8.5 Results and Discussion
    8.5.1 NPC on 1-D Signal
    8.5.2 NPC on 2-D images
      8.5.2.1 Reduction of the acquisition time
      8.5.2.2 Reduction of the system peak power
      8.5.2.3 Increasing the system spatial resolution while maintaining high SNR
  8.6 Conclusion
CHAPTER 9 INSPECTION OF BUTT FUSION JOINTS IN POLYETHYLENE PIPELINES
  9.1 Introduction
  9.2 State-of-the-Art Butt-Fusion Inspection Methods
  9.3 Structure of Materials in the Heat Affected Zone
  9.4 Reference Butt Fusion Specimens
  9.5 Micro CT Scanning
    9.5.1 Pipe Sections
    9.5.2 Dogbones
  9.6 Microwave Frequency Scanning
    9.6.1 Coaxial Cable
      9.6.1.1 Pipe Sections
      9.6.1.2 Dogbones
    9.6.2 Open-Ended Waveguide
      9.6.2.1 Pipe Sections
      9.6.2.2 Dogbones
    9.6.3 Split Ring Resonator
      9.6.3.1 Pipe Sections
      9.6.3.2 Dogbones
  9.7 Capacitive Sensors
  9.8 Optical Transmission Scanning (OTS)
    9.8.1 Pipe Sections
    9.8.2 Dogbones
  9.9 Data Analysis
    9.9.1 Segmentation of HAZ
    9.9.2 Pearson Correlation Coefficient
    9.9.3 Distance Correlation
    9.9.4 Data Analysis of CT Data
      9.9.4.1 Statistical Analysis of X-Ray Data
      9.9.4.2 Geometrical Features of the X-Ray Data
      9.9.4.3 Direct Comparison with Ground Truth
    9.9.5 OTS
    9.9.6 Data Analysis of the Electromagnetic Data
      9.9.6.1 Coaxial Cable Data of the Second Batch
      9.9.6.2 Waveguide Data of the Second Batch
      9.9.6.3 Dual Channel Open-Ended Waveguide
      9.9.6.4 Split Ring Resonator
    9.9.7 Summary
  9.10 Conclusion
CHAPTER 10 DETECTION OF LEGACY CROSSBORES WITH CAPACITIVE SENSORS
  10.1 Introduction
  10.2 Sensor Design
    10.2.1 Sensing System Design and Optimization
    10.2.2 Multiphysics Numerical Modeling
  10.3 Experimental Results
  10.4 Conclusion
CHAPTER 11 CONCLUSION AND FUTURE WORK
  11.1 Summary
  11.2 Future work
APPENDIX
BIBLIOGRAPHY

LIST OF TABLES

Table 4.1: Sensor parameters for the simulation environment.
Table 4.2: Simulation parameters.
Table 4.3: Calibration parameters from the autocalibration algorithm.
Table 4.4: Calibration parameters from the autocalibration algorithm with noisy projection.
Table 4.5: Error in estimating the calibration pipe radius in millimeters.
Table 4.6: Calibration parameters from the RPBC calibration procedure.
Table 4.7: Calibration parameters from the RPBC calibration procedure after applying RANSAC.
Table 4.8: Calibration parameters of autocalibration with a single ring.
Table 4.9: Calibration parameters of autocalibration with multiple rings.
Table 5.1: Positions and orientations of the projector and cameras in the simulation environment.
Table 5.2: Projector calibration results from Camera 1.
Table 5.3: Projector calibration results from Camera 2.
Table 5.4: Camera 1 and Camera 2 orientations in the 3D world.
Table 7.1: Change of permittivity, conductivity, and sound speed with the change of micro-bubble concentrations [1].
Table 8.1: Quantitative comparison of reconstruction with single and MPSL25 excitation.
Table 8.2: Quantitative comparison of reconstruction with reduced acquisition time.
Table 8.3: Quantitative comparison of reconstruction with reduced excitation power.

LIST OF FIGURES

Figure 1.1: An overview of the inspection challenges in plastic pipelines and the proposed solutions for some of these challenges.
Figure 2.1: Rectangular structured light system diagram.
Figure 2.2: Pinhole camera model.
Figure 3.1: a) Schematic of the endoscopic SL sensor, b) Triangulation process in cylindrical geometry.
Figure 3.2: a) Color-coded sequence of the projected pattern, b) Multi-color multi-ring pattern created from the color-coded sequence, c) Fabricated endoscopic structured light sensor.
Figure 3.3: Simulation results showing the scanning and reconstruction of a bend along the pipe: a) Simulated pipe, b) Sample simulated picture, c) Reconstructed point cloud from multiple simulated frames along the pipe, d) Final rendered surface of the pipe.
Figure 3.4: a) Ray-plane intersection, b) An acute cone with a main axis that is not parallel to the z-axis.
Figure 3.5: a) 3D data points generation, b) 3D data collected for a single ring, c) Estimated cone surface overlaid on the collected 3D data points.
Figure 3.6: Image segmentation and 3D reconstruction, a) Acquired image by the sensor, b) Image converted to the polar domain, c) Second derivative along the radial direction, d) Extracted edges of the dark slits, e) Dark slits centroids overlaid on the color image, f) 3D reconstruction of the acquired image.
Figure 3.7: Alignment correction of simulated data, a) Original data, b) Data after shifting and rotation, c) Ellipsoid surface fitted to the data, d) Data after applying alignment correction.
Figure 3.8: Alignment correction of experimental data, a) Original data, b) Data after downsampling, c) Ellipsoid surface fitted to the data, d) Data after applying alignment correction.
Figure 3.9: Scanning of white PVC pipe, a) Internal image of the scanned pipe showing the rectangular defect, b) Sample image from the sensor showing the rectangular defect, c) Original reconstruction of the pipe surface, d) Reconstructed pipe surface after performing artifacts corrections.
Figure 3.10: Scanning of white PVC pipe, a) Final 3D rendering of the pipe surface, b) 3D point cloud of the pipe with the pipe radius as color map, c) Schematic of the SL sensor inside the pipe showing its blind spot, d) Cylindrical view of the pipe surface.
Figure 3.11: The effect of scanning yellow MDPE pipes, a) Raw sensor image with automatic white balance, b) Raw sensor image with modified white balance, c) Color distribution of the stripes with automatic white balance, d) Color distribution of the stripes with modified white balance.
Figure 3.12: Scanning of 35-inch long pipe section, a) 3D rendering of the pipe surface, b) Cylindrical view of the pipe surface.
Figure 3.13: Small diameter sensor, a) Schematic, b) Picture of the fabricated sensor, c) Projected pattern.
Figure 3.14: a) Projected pattern, b) Captured pattern by the camera, c) Reconstruction from a single ring, and d) Rendered profile.
Figure 3.15: Small diameter sensor, a) Schematic, b) Picture of the fabricated sensor, c) Sample picture from scanning 4-inch white PVC pipe, d) 3D reconstruction of a cylindrical pipe surface.
Figure 4.1: a) Projection pattern, b) LED ring.
Figure 4.2: Fabricated RGB-D sensor.
Figure 4.3: Synchronization with rolling shutter camera.
Figure 4.4: a) Synchronization algorithm, b) Implemented timing diagram, c) Schematic explaining the synchronization circuit.
Figure 4.5: Concurrent view from inside the inspected pipe, a) LED assisted image, b) Structured light image.
Figure 4.6: a) A diagram of the triangulation process in the SL sensor, b) The fitting process of the projection cone.
Figure 4.7: Autocalibration environment.
Figure 4.8: a) 3D points from the intersection of the projected cone with the cylinder, b) Imaging of a single projected ring, c) Imaging of multicolor multiring pattern, d) Imaging of noisy multicolor multiring pattern.
Figure 4.9: a) Calibration error with no projection noise, b) Calibration error with noisy projection.
Figure 4.10: a) Experimental projected pattern with highlighted artifacts, b) Simulated image of a multicolor multiring pattern with artifacts.
Figure 4.11: Effect of refraction through the connecting tube.
Figure 4.12: Captured frames from inside a 6-inch PVC pipe with the sensor: a) Tilted down, b) Tilted up, c) Rotated to the right, d) Rotated to the left, e) In the middle of the pipe pointing in the forward direction.
Figure 4.13: Reconstruction of a single calibration frame with parameters from a) Multiring autocalibration, b) Fine-tuning.
Figure 4.14: Reconstructed rings after orientation correction, a) Single frame reconstruction, b) 40 frames combined reconstruction.
Figure 4.15: Error in estimating the radius of the calibration pipe in millimeters.
Figure 5.1: a) Schematic of the triangulation in an endoscopic SL sensor with sequential configuration, b) Schematic of the triangulation in an endoscopic SL sensor with side by side configuration.
Figure 5.2: The intersection of the camera ray with the projected cone, a) Concentric camera and projector, b) Side by side camera and projector.
Figure 5.3: The intersection of the camera ray with the projected cone, a) Camera ray in the same plane as the line connecting the camera and the projector and intersecting at two points, b) Camera ray tangent to the projected cone and intersecting with it at a single point.
Figure 5.4: a) CAD model of the proposed sensor, b) Schematic of the simulated sensor.
Figure 5.5: Simulation results, a) 3D circle from the intersection of the projected cone with a plate at z=3, b) Image of the 3D circle from the first camera (C1), c) Image of the 3D circle from the second camera (C2).
Figure 5.6: a) Reconstruction from CAM1 data, b) Reconstruction from CAM2 data, c) Merged reconstruction from the two cameras, d) Reconstruction from noisy CAM1 data, e) Reconstruction from noisy CAM2 data, f) Merged reconstruction from the noisy data, g) Delta of reconstruction from CAM1, h) Delta of reconstruction from CAM2, i) Merged reconstruction from the noisy data with delta=0.04.
Figure 5.7: Sensor fabrication, a) Front view, b) Side view, c) Front view of the sensor with the rotated cameras.
Figure 5.8: Raw stereo sensor images from a) 3-inch PVC pipe, b) 6-inch PVC pipe.
Figure 5.9: Reconstruction of a single frame after calibration, a) Reconstruction of the CAM1 data, b) Reconstruction of CAM2 data, c) Combined reconstruction from the two cameras, d) Combined data after alignment correction, e) RMSE plot for the seven individual rings.
Figure 6.1: System diagram.
Figure 6.2: Simulated 3D scanning. a) I1, b) I2, c) I3, d) Phase unwrapping, e) 3D reconstruction.
Figure 6.3: Four scanning images with π/4 phase shift.
Figure 6.4: a) Simulated height profile, b) Reconstruction with the direct formulation, c) Optimized solution.
Figure 6.5: Different reconstruction results with signal to noise ratio equal to: a) 50, b) 10, c) 5.
Figure 6.6: a) Images sequence from the left camera, b) Images sequence from the right camera, c) Left DC image, d) Right DC image, e) Disparity map with block matching, f) Phase matching between the right and left cameras, g) Refined disparity map, h) Refined disparity map after interpolation (3D view).
Figure 6.7: a) Nonlinear projected pattern, b) Sample stereo pair with nonlinear pattern projection, c) The average frame from the right camera.
Figure 6.8: 3D reconstruction with nonlinear projection: a) Disparity map with block matching, b) Effect of nonuniform phase distribution, c) 2D representation of the interpolated refined surface, d) 3D representation of the interpolated refined surface.
Figure 6.9: a) Rectangular geometry sensor and projection pattern, b) DC, c) AC, d) Ratio, and e) Disparity map.
Figure 6.10: Proposed cylindrical systems: a) Model A, b) Model B, c) Model C.
Figure 6.11: a) Schematic of model A inside the pipe, b) Triangulation of the system in cylindrical coordinates.
Figure 6.12: a) Projection of uniform pattern inside a pipe, b) Proposed nonlinear phase.
Figure 6.13: Simulated 3D reconstruction of a bump defect on the pipe internal surface.
Figure 6.14: a) 6-inch PVC pipe with internal defects, b) Miniaturized sensor, c) Raw image from the sensor, d) 2D representation of the reconstructed disparity map, e) Photographic image of the first defect, f) Zoomed section of the disparity map showing the 2nd defect, g) Photographic image of the second defect.
Figure 6.15: Implementation of model B with a) Direct implementation, b) Backward-looking projector with a spherical mirror.
Figure 6.16: a) Initial prototype of model C, b) Inspection results of a PVC pipe by using the initial prototype of model C.
Figure 7.1: A schematic explaining the thermoacoustic imaging process.
Figure 7.2: Simulation geometry.
Figure 7.3: Thermoacoustic simulation of different micro-bubble concentrations suspended in ethylene glycol (dashed lines) compared to experimental results (solid lines) from reference [1].
Figure 7.4: Effect of transducer orientation on image reconstruction, a) The simulated geometry, b) Reconstruction with transducers aligned linearly along the top, c) Reconstruction with transducers aligned in L shape along the top and left boundaries, d) A comparison of the amplitude along the dotted line in b and c.
Figure 7.5: Reconstruction with time reversal, a) Model geometry, b) EM losses distribution, c) Time reversal reconstruction, d) Image after neglecting negative values.
Figure 7.6: The effect of conductivity on TAI images, a) 1-D TAI signal variation with change of conductivity, b) Normalized EM losses distribution, c) Time reversal reconstruction, d) A 1D comparison between the EM losses and reconstructed TAI images at x = 0.
Figure 7.7: Electromagnetic losses inside the imaged sample over a wide range of σ, a) Simulation (dashed lines), b) Analytical solution (solid lines).
Figure 7.8: The effect of EM pulse width on the reconstructed image, a) Model geometry, b) Image reconstruction with pulse width = 4 µs.
Figure 7.9: Electric field distribution near the antenna, a) Horn antenna, b) Open ended waveguide.
Figure 8.1: a) Zero stuffed MPSL25 sequence; b) Mismatched filter for zero stuffed MPSL25; c) Correlation results of MPSL25 with the mismatched filter.
Figure 8.2: a) A single pulse coded with bipolar Barker13 sequence. b) The results of correlating a signal coded with bipolar Barker13 with its matched filter. c) A single pulse coded with uni-polar Barker13 sequence. d) The results of correlating a signal coded with a uni-polar Barker13 with its matched filter.
Figure 8.3: a) First waveform of the Manchester coded complementary sequence; b) Second waveform of the Manchester coded complementary sequence; c) Summation of the correlation function.
Figure 8.4: Effect of interpolation of the measurement data. a) The simulated geometry with 50 transducers evenly distributed on a circle. b) The reconstruction without interpolation of the measurement data. c) The reconstruction with the interpolation of the measurement data. d) The detailed comparison at the line shown in a).
Figure 8.5: a) Schematic of the experimental setup; b) Photographic picture of the experimental setup.
Figure 8.6: Experimental system validation. a) Picture of the scanned sample; b) Cross sectional CT image of the scanned sample; c) Reconstructed TAI image without interpolation; d) Reconstructed TAI image with interpolation.
Figure 8.7: 1-D TAI signal with single pulse excitation, a) Cheese sample with agar core, b) 1D signal acquired at 3 kW excitation and 2048 averages, c) 1D signal acquired at 3 kW excitation and 2 averages.
Figure 8.8: 1-D TAI signal with MPSL25 excitation and a mismatched filter, a) Raw signal acquired with 2048 averages, b) Filtered signal with 2048 averages, c) Filtered signal with 2 averages.
Figure 8.9: 1-D TAI signal with Manchester coded complementary pairs excitation, a) First raw signal with 2048 averages, b) Second raw signal with 2048 averages, c) Summation of the correlation function with 2048 averages, d) Summation of the correlation function with 2 averages.
Figure 8.10: 2-D TAI images of the inspected object: a) Reconstruction with single pulse excitation, b) Reconstruction with MPSL25 excitation, c) 1-D comparison of the reconstructed images along a horizontal line that passes through the center of the agar core, d) Reference image.
Figure 8.11: Comparison of reconstruction with reduced acquisition time with 0.5 µs at 3 kW and 2 averages: a) Single pulse, b) MPSL25, c) 1-D comparison of the reconstructed image at the center of the defect.
Figure 8.12: Comparison of reconstruction with reduced peak power excitation with 0.5 µs pulse width at 1.5 kW: a) Single pulse, b) MPSL25, c) 1-D comparison of the reconstructed image at the center of the defect.
Figure 8.13: a) Structure for the agar sample, b) TAI image with 0.5 µs single pulse and 3 kW excitation.
Figure 8.14: Comparison of reconstruction with different pulse widths: a) 2 µs single pulse with 2 kW excitation, b) 1 µs single pulse with 2 kW excitation, c) 0.5 µs single pulse with 2 kW excitation, d) 0.5 µs MPSL-25 coded pulses with 2 kW excitation, e) 1-D comparison of the reconstructed images along a line that passes through both of the wires.
Figure 9.1: Ultrasound-based inspection methods, a) Time of flight diffraction [2], b) Phased array UT [3], c) Chord UT [4], d) EM-based scanner [5].
Figure 9.2: a) Temperature distribution during the final stage of welding, b) Expected microstructures after completion of the final stage of welding [6].
Figure 9.3: Dogbones and their corresponding locations.
Figure 9.4: a) Test samples, b) Test sample specifications.
Figure 9.5: Perkin Elmer Quantum GX Micro CT Imaging System [7].
Figure 9.6: Scanning of reference plastic cap, a) Picture of the tested sample, b) Two micro CT slices with marked defects.
Figure 9.7: Conversion to polar domain.
Figure 9.8: Irregularities from the CT machine, a) Single CT slice, b) 1-D plot at x = 38 mm.
Figure 9.9: Irregularities correction with curve fitting, a) Original CT average data, b) Corrected CT data.
Figure 9.10: a) CT data of the first and second batch, b) CT data of the third batch.
Figure 9.11: Micro CT scanning of non-fused samples for angles: 0, 45, 90, 135, 180, 225, 270, 315 from top to bottom.
Figure 9.12: a) Micro CT scanning of good samples for angles: 0, 45, 90, 135, 180, 225, 270, 315 from top to bottom, b) Micro CT scanning of good samples for angles: 67, 112, 157, 247, 292, 337 from top to bottom.
Figure 9.13: a) Micro CT scanning of bad samples for angles: 0, 45, 90, 135, 180, 225, 270, 315 from top to bottom, b) Micro CT scanning of bad samples for angles: 67, 112, 157, 247, 292, 337 from top to bottom.
Figure 9.14: a) Field regions around an antenna [8], b) A schematic of a near field inspection system.
Figure 9.15: Microwave scanner for butt fusion pipelines with a metamaterial sensor attached to it.
Figure 9.16: Scanning results of pipe sections with coaxial cable probe.
Figure 9.17: Effect of subtracting the mean value to compensate for the fluctuation in lift-off distance for sample 57AB, a) Original data, b) Data after post-processing.
Figure 9.18: Coaxial cable scanning results of the pipe sections after post-processing for the first and second batches with intact beads.
Figure 9.19: Coaxial cable scanning results of the second batch with removed beads after post-processing.
Figure 9.20: Coaxial cable scanning results with intact beads after post-processing for the third batch.
Figure 9.21: Scanning of the dogbone samples with coaxial cable probe.
Figure 9.22: Open-ended waveguide scanning results for the pipe sections after post-processing.
Figure 9.23: Open-ended waveguide scanning results with removed beads after post-processing for the second batch.
Figure 9.24: Open-ended waveguide scanning results for the pipe sections after post-processing for the third batch.
Figure 9.25: Microwave setup with added amplitude reflection measurement.
Figure 9.26: Dual channel open-ended waveguide scanning results for the pipe sections of the third batch for samples (57AB, 57CD, 59AB, 59CD).
Figure 9.27: Dual channel open-ended waveguide scanning results for the pipe sections of the third batch for samples (61AB, 61CD, 63AB, 63CD).
Figure 9.28: Split ring resonator sensor.
Figure 9.29: SRR scanning results for the pipe sections of the third batch after post-processing.
Figure 9.30: Three-point measurement with a metamaterial sensor, a) Metamaterial sensor, b) Sensor measurement under a wide range of frequencies.
Figure 9.31: Capacitive sensor schematic.
Figure 9.32: Capacitive sensor, a) Capacitive sensor prototype, b) Scanning results of 56AB.
Figure 9.33: Three-point measurement with a capacitive sensor, a) Capacitive sensor prototype, b) Sensor measurement under a wide range of frequencies.
Figure 9.34: Optical transmission scanning schematic.
Figure 9.35: OTS scanning for samples B0, G0, B45, G45 from left to right, a) Picture of the scanned samples, b) OTS scanning results.
Figure 9.36: 1D comparison of the intensity of the laser beam at the middle of the scanned samples for B0, G0, B45, G45.
Figure 9.37: OTS scanning of dogbones for G0, B0, G45, B45, G90, B90, G135 from left to right.
Figure 9.38: A 1D comparison of 2.25 mm average at the center of each dogbone for G0, B0, G45, B45, G90, B90, G135.
Figure 9.39: OTS scanning of dogbones for G180, B180, G225, B225, G270, B270, B315.
Figure 9.40: A 1D comparison of 2.25 mm average at the center of each dogbone for G180, B180, G225, D225, G270, B270, B315.
Figure 9.41: OTS scanning of dogbones for G67, B67, G112, B112, G157, B157, G247, B247.
Figure 9.42: A 1D comparison of 2.25 mm average at the center of each dogbone for G67, B67, G112, D112, G157, B157, G247, B247.
Figure 9.43: OTS scanning of dogbones for G247, B247, G292, D292, G337, B337.
Figure 9.44: A 1D comparison of 2.25 mm average at the center of each dogbone for G247, B247, G292, D292, G337 and B337.
Figure 9.45: OTS scanning of the non-fused dogbone samples for angles 0, 45, 90, 135, 180, 225, 270 and 315.
Figure 9.46: A 1D comparison of 2.25 mm average at the center of each non-fused dogbone for angles 0, 45, 90, 135, 180, 225, 270 and 315.
Figure 9.47: a) Single slice of the polar CT data, b) Thresholded mask that defines the pipe edges.
Figure 9.48: A schematic explaining the region used for averaging.
Figure 9.49: a) Averaged image, b) Averaged image after correction.
Figure 9.50: a) Histogram of the averaged image, b) Final overlay mask.
Figure 9.51: a) Original EM data, b) Overlay of the CT image over the EM data.
Figure 9.52: Statistical analysis of the entire pipe sections, a) Moving mean, b) Moving standard deviation.
Figure 9.53: Statistical analysis of the entire pipe sections for the third batch, a) Moving mean, b) Moving standard deviation.
Figure 9.54: X-ray images, a) Non-fused dogbone, b) Good fused dogbone, c) Bad fused dogbone.
Figure 9.55: a) Width and height of HAZ of the good samples, b) Width and height of HAZ of the bad samples.
Figure 9.56: Scatter plot of the width and thickness of the HAZ with initial labels, a) With predicted labels, b) With actual labels.
Figure 9.57: Statistical analysis of CT results for the second batch, a) Overlaid 2D scan, b) 1D comparison of the ground truth with the average values of the HAZ, c) Correlation results using shifted data.
Figure 9.58: Statistical analysis of CT results for the third batch, a) Overlaid 2D scan, b) 1D comparison of the ground truth with the average values of the HAZ, c) Correlation results using shifted data.
Figure 9.59: Statistical analysis to check the effect of the bead width on the joint quality for the second batch, a) 1D comparison of the ground truth with the width of the beads, b) Correlation results using shifted data.
Figure 9.60: Statistical analysis to check the effect of the bead width on the joint quality for the second batch, a) 1D comparison of the ground truth with the width of the beads, b) Correlation results using shifted data.
Figure 9.61: Statistical analysis to check the effect of the bead width on the joint quality for the third batch, a) 1D comparison of the ground truth with the width of the beads, b) Correlation results using shifted data.
Figure 9.62: OTS data analysis, a) Direct plotting of the OTS results, b) OTS results with corrected mean value.
Figure 9.63: OTS data analysis, a) Smoothed gradient, b) 2 mm moving standard deviation.
Figure 9.64: Statistical analysis of coaxial cable results for the second batch with beads, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.65: Statistical analysis of coaxial cable results for the second batch with removed beads, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.66: Statistical analysis of coaxial cable results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.67: Statistical analysis of coaxial cable results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.68: Statistical analysis of open-ended waveguide results for the second batch with beads, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.69: Statistical analysis of open-ended waveguide results for the second batch with no beads, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.70: Statistical analysis of open-ended waveguide results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.71: Statistical analysis of open-ended waveguide results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.72: Statistical analysis of the amplitude of the dual channel waveguide measurement results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.73: Statistical analysis of the amplitude of the dual channel waveguide measurement results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.74: Statistical analysis of the phase of the dual channel waveguide measurement results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.75: Statistical analysis of the phase of the dual channel waveguide measurement results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.76: Statistical analysis of SRR results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 9.77: Statistical analysis of SRR results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.
Figure 10.1: a) Gas pipeline intersection with a sewer pipe (cross bore) [9], b) Proposed cross bore detection procedure launched inside the gas pipelines.
Figure 10.2: Sensor schematic integrating capacitive and optical sensors.
Figure 10.3: Simulation results of the E field distribution generated by the proposed multi-channel EM sensor moving inside the MDPE pipe.
Figure 10.4: a) Preliminary test setup (side view), b) EM sensor inside the preliminary test setup (top view).
Figure 10.5: Detection results of clear differentiation between soil and air using the proposed EM sensing technology inside the plastic pipe.
Figure 10.6: GTI testing setup.

LIST OF ALGORITHMS

Algorithm 1 Automatic calibration algorithm with a single ring
Algorithm 2 Automatic calibration algorithm with multiple rings
Algorithm 3 Fine-tuning algorithm
KEY TO ABBREVIATIONS

CMOS Complementary metal oxide semiconductor
DSLR Digital single-lens reflex
fps Frames per second
LED Light emitting diode
OTS Optical transmission scanning
PHM Pin hole model
PMP Phase measurement profilometry
PnP Perspective-n-Point
NDE Nondestructive evaluation
NDT Nondestructive testing
NMI Near field microwave imaging
RMSE Root mean square error
RMS Root mean square
RGB Red, Green, Blue
SL Structured light
TAI Thermoacoustic imaging
UT Ultrasound

CHAPTER 1 INTRODUCTION

Nondestructive evaluation (NDE) is a field of science with a wide range of applications, where various methodologies are used to guarantee the quality of new products, maintain the safety of working equipment, or inspect aging critical infrastructure. It is applied in fields where material failure could result in a high risk of hazards or economic losses. NDE deals with testing and analyzing the material properties of objects without damaging or altering the objects themselves. In medicine, similar NDE methodologies are used to diagnose patients with bone fractures, detect tumors, or monitor a physiological process occurring inside the body, like pregnancy examination using ultrasonic imaging. NDE depends on the response of the material to electric, magnetic, electromagnetic, mechanical, or chemical interactions during the testing process, or on a combination of multiple interactions in what are known as hybrid imaging methodologies. Some of the well-known techniques are magnetic particle testing [10], optical inspection [11], radiography testing [12], microwave and millimeter wave testing [13], eddy current testing [14], magnetic flux leakage testing [15], ultrasonic testing [16], etc. All of these methods leave no harmful effects on the tested object if the tests are conducted under proper conditions, or the effects are negligible, like the effect of ionizing radiation from X-rays on biological cells.

1.1 Motivation

Plastic pipelines possess material properties that provide unique design capabilities. They offer corrosion and chemical resistance, light weight, low cost, and design flexibility. Therefore, they are used in a wide range of applications that include, but are not limited to, municipal water and sewer systems, radiant heating and residential plumbing, and the water lines in nuclear reactors. Another major application of plastic pipelines is the use of medium density polyethylene (MDPE) in the construction of the majority of natural gas distribution networks. Underground pipelines play an important role in the gathering, transmission, and distribution of natural gas to residential and commercial areas in the United States. The U.S. Energy Information Administration estimates that the pipeline network transported 28.3 trillion cubic feet of natural gas to customers across the US mainland in 2019 [17]. Aging plays an important role in degrading the reliability of these networks and can lead to catastrophic failures that risk the safety of operation [18]. Failure of these systems can also be related to excavation damage, improper installation practices, and excessive stresses due to soil conditions [19]. The following subsections discuss some of the threats that affect plastic pipelines in general, like slow crack growth and cold fusion, and then move to cross bores, which are specific to gas pipeline networks.
1.1.1 Slow Crack Growth

The majority of premature failures in plastic pipelines are due to brittle, slow crack growth (SCG) through the pipe wall [20]. Major drivers of premature failure due to slow crack growth can be related to bending stresses due to tight bend radii [20], impingement, internal defects, welded joints, and fittings [21]. Identification and classification of the current vintage pipeline inner wall damage precursors are of critical importance, but one of the challenges of inspecting MDPE pipelines is the dielectric nature of their wall material. Most of the mature pipe inspection techniques have been designed and well developed to inspect metal pipelines and cannot be used to inspect polymer-based gas pipelines. Internal inspection techniques such as eddy current and magnetic flux leakage (MFL) sensors require either highly conductive or ferromagnetic pipe walls. Ultrasound techniques require a coupling medium with the inner pipe wall for an efficient inspection, which is hard to achieve in the case of natural gas distribution networks. Hybrid methods like electromagnetic acoustic transducers (EMAT) require the existence of both ferromagnetic walls and an acoustic coupling medium.

1.1.2 Cold Fusion in Butt Fusion Joints in Plastic Pipelines

Pipe joints are frequently used to create long pipe sections, with butt fusion being the most commonly used method. Butt fusion involves simultaneously heating the ends of the two pipes to be joined until a molten state is attained on each contact surface. The two surfaces are then brought together under controlled pressure for a specific cooling time, and a homogeneous fusion joint is formed. The quality of a joint is affected by many factors that include, but are not limited to, the welding temperature, the melting point of the plastic, contamination, and welding pressure. Concerns arise from the fact that there are currently no reliable NDE methods to evaluate the integrity of these joints [22]. The current weld inspection procedures and guidelines depend mainly on visual inspection. Although visual inspection has proven to be a successful method for the inspection of PE butt-fusion welds, several limitations remain. There are cases where the joints pass all the visual inspection guidelines and still fail due to contamination or low tie molecule density. There is little evidence that current NDE methods can reliably detect low bonding strength across the joint interface. Results in this field and reported research efforts typically focus on the detection of artificially produced volumetric defects to identify poor fusions; however, a recent study has indicated that there is no evidence suggesting these volumetric defects influence the strength of the joints [23].

1.1.3 Cross Bores

Highly efficient and eco-friendly trenchless installation is widely used to build extensive networks of gas pipelines. However, unintentional drilling of new pipelines through legacy utilities using trenchless drilling technologies creates an intersection known as a cross bore. A cross bore happens when a gas pipeline passes through a sewer pipe during the drilling process. Cross bores involving sewer laterals and gas pipes, with an estimated average rate of 0.4 per mile of gas pipeline, are significant threats to the safety of the general public and utility workers, with several incidents reported due to drain cleaners cutting through the gas pipe.
Recognizing the severity of the cross-bore hazard, various technologies such as cameras, acoustic pipe locators, and ground-penetrating radar are used to detect cross bores, but with limited efficiency. The LED-assisted camera, deployed into the sewer pipe, is the primary technology used to detect and identify cross bores; however, cameras are limited by their access to blocked, water-filled, or mud-filled sewer pipes.

Figure 1.1: An overview of the inspection challenges in plastic pipelines and the proposed solutions for some of these challenges.

1.2 Challenges and Proposed Solutions

This study follows a multi-modal approach to address some of the inspection challenges that affect the safety of the pipeline network. Figure 1.1 gives an overview of the challenges and the proposed solutions for some of them. The inspection challenges to the pipeline network can be divided into four categories:

• Surface defects: damages or deformations to the internal surface of the pipe wall that are caused by external forces, manufacturing defects, or incorrect installation practices. Examples of this type of defect can be seen in impingement, bending, squeeze-off, and other internal surface defects, where the defects can be identified visually by monitoring color changes, wall loss, or deformations to the cylindrical shape of the pipe. Optical techniques are less affected by the material type when compared to conventional NDE techniques, do not require a coupling medium, can be miniaturized to detect defects in tight spaces, and provide high-resolution, real-time area inspection. In this study, RGB-D imaging is mainly used to provide detailed information about the color, reflectivity, and 3D profile of the inspected surfaces.

• Subsurface defects: subsurface damages or changes in the material properties of the pipe wall due to manufacturing defects or exposure to external factors. Examples of this type of defect can be seen in cold fusion in butt fusion joints, manufacturing defects, embrittlement, delamination, and permeation of hydrocarbons. Polyethylene has low dielectric losses; therefore, electromagnetic waves can easily penetrate the pipe walls. In this work, an investigative study was conducted to test the probability of detecting cold fusion in butt fusion joints by EM-based techniques.

• External threats: threats from external factors like third-party digging and improper installation practices. Examples of this type of defect can be seen in cross bores with sewer pipes and the existence of hydrocarbons in the vicinity of polyethylene pipes. In this study, we focus on cross bores, for which a capacitive sensor is developed to classify the materials surrounding the pipe.

• Emerging materials: new materials that are being adopted in the pipeline network due to their unique mechanical and chemical properties. Examples of these materials can be seen in carbon fiber and glass fiber composites and their different combinations with polyethylene. In this study, we explore the use of hybrid imaging techniques such as thermoacoustic imaging as potential candidates for inspecting glass fiber composites.
1.3 Scope of work

1.3.1 Optical RGB-D imaging

The contributions of this study fall into three main areas: 1) design of the SL sensors and the 3D reconstruction algorithms needed to reconstruct the data, 2) development of algorithms to register the 3D data in a pipe environment, and 3) development of an automatic calibration algorithm to calibrate the sensor once it is placed inside the pipe.

1.3.1.1 SL Sensor Miniaturization and Electronic Point Cloud Stabilization

This task presents a miniaturized structured light-based three-dimensional (3D) imaging sensor for the inspection of the internal walls of plastic pipelines. The sensor is an 18-mm diameter optical inspection device that can be inserted inside the pipe to provide the operator with a 3D map of the internal surface of the inspected pipe. The 3D map simplifies the identification of damages like deformations and material loss and leads to a better evaluation of the damage severity when compared to LED-assisted cameras. The sensor is also accompanied by an embedded electronic stabilization algorithm to reduce the effect of misalignment when the sensor is attached to a scanning platform and to facilitate the registration of consecutive frames from the sensor. The algorithm exploits the geometric constraints provided by the cylindrical shape of the pipe to estimate the orientation and position of the sensor for each collected frame. A detailed description of the sensor design and the associated stabilization algorithm is given in Chapter 3.

1.3.1.2 RGB-D Inspection with Automatic Calibration Capability

This task presents two solutions to address the limitations of structured light sensors in the inspection of pipelines' internal surfaces. The first solution is an endoscopic sensor with concurrent RGB-D inspection capability that was developed with minimal modification to the structured light sensor. The second solution is a novel algorithm to automatically calibrate the projection module and estimate the stereo parameters between the camera and the projector. The calibration algorithm exploits the cylindrical nature of the inspected pipe to create a set of geometric constraints and automatically calibrate the sensor without the need for reference calibration points. Experimental and simulation results showed that the algorithm could successfully estimate the projector's intrinsic and extrinsic parameters by simply acquiring data inside a cylindrical pipe with a known diameter. The proposed algorithm greatly reduces the data collection time for calibration (only 53 seconds), improves the accuracy, and simplifies the calibration process. A detailed description of the sensor design and the associated autocalibration algorithm is given in Chapter 4.

1.3.1.3 Stereoscopic Endoscopic SL Sensor for 360-degree profiling

In this task, we introduce a stereoscopic SL sensor to mitigate the effect of wires and glass tubes on the sensor performance. The design employs two cameras placed at the sides of the projector in a perpendicular arrangement to guarantee that at least one camera ray is not parallel to the projected cone surface at any time when using a circular pattern. The result is 360-degree unobstructed inspection, reduced sensor length, improved rigidity, and more flexibility when adapting the sensor to different pipe diameters. We also present the mathematical models for reconstruction and calibration and the algorithm for reconstructing the 3D surface.
A detailed description of the sensor is given in Chapter 5.

1.3.1.4 Movement Enhanced Phase Measurement SL Sensor

In this task, we present a phase measurement profilometry endoscopic SL sensor. The main advantage of this sensor is that it provides a full field-of-view reconstruction and exploits the sensor movement to improve the reconstruction quality. The sensor projects a static sinusoidal pattern on the inspected pipe surface while exploiting the continuous movement of the scanning platform to acquire the 3D profile of the inspected area. Therefore, the system does not require a complex projection module or very fast acquisition speeds. The main design novelty is the use of the stereo constraints to reduce the effect of the height-dependent motion-induced phase shifting. A detailed description of the sensor design and the associated reconstruction algorithm is given in Chapter 6.

1.3.2 Thermoacoustic Imaging

In this task, we present coded pulse excitation to improve the SNR in pulsed TAI systems. Coded pulses are used to excite the imaged sample, and then the received pressure signal is correlated with a template that is related to the power profile of the excitation pulse. The proposed approach does not require the use of linear amplifiers; therefore, it can be easily applied to pulsed TAI systems without major modification. The approach only requires control over the timing of the pulse excitation. The proposed method substantially improves the received signal SNR when compared with a pulsed TAI system with the same peak power and number of averages. A pulsed TAI imaging system was built to test the performance of the proposed approach. The results show that the proposed approach can be used to reduce the acquisition time and the microwave source peak power while maintaining adequate SNR. A detailed description of thermoacoustic imaging with simulations is given in Chapter 7, and the imaging system with noncoherent compression is described in Chapter 8.

1.3.3 An Investigative Study to Detect Cold Fusion in Butt Fusion Joints

To address the problem of detecting weak butt fusion joints, an investigative study was conducted to test the capability of current NDE techniques to identify the existence of weak fusions in butt fusion joints, with a focus on near-field microwave imaging. Multiple samples with different joint qualities were created under a controlled environment to serve as references during the development and evaluation of the sensor. Multiple NDE methods were tested to find the best candidate to detect weak fusions: a) X-ray CT, to measure the change in material density in the heat-affected zone (HAZ) and to serve as a reference for the other NDE techniques; b) near-field microwave imaging, to measure the change in dielectric properties in the HAZ; and c) optical transmission scanning, to measure the attenuation of light passing through the HAZ. A detailed description of the study and the results is given in Chapter 9.

1.3.4 A Capacitive Inspection Sensor for the Detection of Legacy Cross Bores

In this task, we developed a miniaturized NDE tool that can detect and locate legacy cross bores in underground polyethylene gas pipelines without passing through sewer pipes. The tool is a capacitive sensing system designed to be sent through the gas pipeline, classify the materials surrounding the pipe, and differentiate them from actual cross bores.
Cross bores and the materials surrounding the pipe can be identified because the materials in the sewer pipe, such as air and water, differ in dielectric properties from the soil. A detailed description of the proposed sensor and the initial results is given in Chapter 10.

CHAPTER 2
AN INTRODUCTION TO OPTICAL INSPECTION AND 3D PROFILING

Optical inspection is one of the oldest nondestructive evaluation methods, where visual inspection was performed with the naked eye to check for potential defects in the inspected object. With the advancement of digital photography and camera manufacturing, digital cameras with automated detection algorithms became the tool of choice to detect defects on manufacturing lines, in food processing plants, and in the visual inspection of tight spaces. Optical inspection techniques are less affected by the material type when compared to conventional NDE techniques, do not require a coupling medium, and can be miniaturized with careful engineering into a small form factor that can be inserted in tight spaces. Optical inspection systems can be divided into 2D and 3D inspection systems. 2D inspection systems use LED-assisted cameras to produce 2D optical images with information about changes in the color and reflectivity of the inspected surface [24]. 2D inspection is also performed in other nonvisible light bands like infrared, near-infrared, and ultraviolet. LED-assisted cameras can be used as a damage detection method, but they cannot provide 3D information about the severity of the damage (size and depth of the defect). To evaluate the severity of the damage, 3D inspection and profiling systems are needed. 3D inspection methods include, but are not limited to, structure from motion [25, 26], stereo vision [27, 28], structured light [29], and laser scanning systems [30, 31]. This chapter covers the main techniques used in optical inspection and provides an introduction to some of the principles of digital photography.

2.1 3D Profiling

3D profiling sensors are being adopted in a wide range of applications like reverse engineering, medical applications, 3D face acquisition, welding inspection, robotic navigation, and product quality control [32, 33, 34, 35, 36]. For industrial applications, they provide a 3D profile of the inspected object that makes it possible to accurately evaluate the damage size, confirm the dimensions of manufactured objects, or provide measurements to accurately manufacture a new part. There are several methods to extract the depth of the inspected object, including but not limited to stereo vision, structure from motion, photometric stereo, structured light, LiDAR, time-of-flight cameras, and, recently, machine learning-based techniques. Stereo-based 3D acquisition is one of the first methods used to reconstruct the shape of 3D objects since it mimics the human approach to perceiving depth from 2D images. These systems use two cameras that are separated by a predefined distance and share a similar field of view. The depth information is extracted by triangulating the disparity in the position of the observed object in the two camera frames. The existence of unique surface features is crucial for the reconstruction process; therefore, stereo-based systems fail when there is a lack of unique, comparable features [37], which is the case for the smooth internal surface of plastic pipelines.
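To make the disparity-based triangulation concrete, the following minimal sketch (in Python) converts a stereo disparity into depth for an idealized, rectified camera pair; the focal length, baseline, and disparity values are hypothetical and serve only to illustrate the geometry described above.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point observed by an idealized rectified stereo pair.

    focal_px     : focal length expressed in pixels (assumed identical for both cameras)
    baseline_m   : distance between the two camera centers, in meters
    disparity_px : horizontal shift of the matched feature between the two images
    """
    if disparity_px <= 0:
        raise ValueError("a matched feature must have positive disparity")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 800 px focal length, 5 cm baseline, 20 px disparity -> 2.0 m depth
print(depth_from_disparity(800.0, 0.05, 20.0))
```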
Similar to stereo vision is structure from motion (or multiple view geometry), where a series of 2D images of the scanned object is acquired at different angles and/or distances, and then common features are extracted [38]. The extracted points are reprojected to 3D space, and the reconstruction is performed according to the theorem that the coordinates of four non-planar points can be calculated from three orthographic views [39, 40]. Since structure from motion needs unique surface features for comparison, this method suffers from problems similar to those encountered with stereo vision. Photometric stereo depends on calculating the normal to a surface from a series of images of a static object under different illumination angles and uses the correspondence to reconstruct the scene [41]. A special case of this technique is analyzing the shading in a single image to create a surface reconstruction [42]. These requirements are difficult to achieve during the internal inspection of pipes due to the poor lighting and tight dimensions inside the inspected pipe. LiDAR and time-of-flight cameras send an electromagnetic wave toward the scanned target and measure the phase changes or time of flight of the reflected signal to calculate the distance from the scanner to the scanned target [43]. These systems have insufficient spatial resolution and a more complex setup compared to other 3D profiling techniques when used in tight dimensions. Another recent development in the field of depth perception is the use of machine learning to reconstruct the imaged scene directly from a single shot, similar to how humans perceive depth [44]. These methods are promising, but they are not yet robust and are still in an early stage of development. A structured light (SL) sensor is a modified stereo sensor with one of its cameras exchanged for a light projector that paints the scanned object with a textured coded pattern [32]. The projected pattern provides the surface features required for the stereo matching process; therefore, SL systems have the ability to scan smooth, featureless surfaces. This property is important for gas pipeline inspection systems because gas pipelines have smooth, featureless internal walls. The general design of a structured light sensor consists of a projection module that projects a highly textured pattern and a camera that captures the deformations in the projected pattern [32]. The camera and projector are separated by a predefined distance (d) and are placed in an orientation that enables them to share a similar baseline to simplify the triangulation process. An illustration of the general system setup in a rectangular geometry is shown in Figure 2.1.

Figure 2.1: Rectangular structured light system diagram.

By using triangulation, the distance h of the scanned object from the projector-camera setup can be calculated as follows:

h = d \, \frac{\sin(\theta_1)\,\sin(\theta_2)}{\sin(\theta_1 + \theta_2)}.   (2.1)
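As a quick numerical check of the triangulation relation in Eq. (2.1), the short sketch below computes the object distance for a hypothetical baseline and pair of ray angles; the values are illustrative only.

```python
import math

def triangulate_distance(d: float, theta1: float, theta2: float) -> float:
    """Distance h of a scanned point from the projector-camera baseline, per Eq. (2.1).

    d      : baseline between the projector and the camera
    theta1 : angle of the projector ray measured from the baseline (radians)
    theta2 : angle of the camera ray measured from the baseline (radians)
    """
    return d * math.sin(theta1) * math.sin(theta2) / math.sin(theta1 + theta2)

# Hypothetical example: 30 mm baseline, rays at 60 and 70 degrees from the baseline
print(triangulate_distance(30.0, math.radians(60), math.radians(70)))  # distance in mm
```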
Structured light sensors use either spatial or temporal coding to ensure the uniqueness of the correspondence between the projector and the camera images [45]. Single-shot systems use spatial encoding, and this includes, but is not limited to, stripes with intensity [46] and color [47] coding, M-array based coding [48], random speckle [36], and Fourier transform based phase reconstruction [49]. Multiple-shot systems use temporal coding to encode the surface of the scanned object, as in the case of time-multiplexed binary codes [50] and phase measurement profilometry [51]. Some hybrid techniques use both spatial and temporal coding to enhance performance [52]. Single-shot systems tend to have a fast acquisition time that is only limited by the camera acquisition speed. However, this comes at the cost of a long processing time for the acquired images and sensitivity to ambient light and to color changes of the scanned object [53]. Multiple-shot systems are more robust to changes in ambient light and surface texture, but they are sensitive to movement of the scanned object because the object is assumed to be static during the scanning process. Some multiple-shot systems have been developed to scan moving objects by using more advanced fast cameras and projection systems so that the object is relatively static during the scanning process [54]. These types of systems are complex, large, and require tight synchronization between the active projector and the camera [55]. Laser scanners are also considered a type of single-shot structured light sensor that projects a single laser stripe or ring [56]. Laser scanners are robust and can yield high-accuracy 3D maps. On the other hand, these systems suffer from slow scanning speed since the laser stripe is mechanically translated to cover the entire scanned object. Laser scanners also suffer from the effect of speckle noise and difficulty in scanning shiny, polished surfaces [57]. There are also some safety concerns from direct exposure of the eye to the focused laser beam. Combinations of structured light projectors (including lasers) and stereo cameras are widely used, mainly to avoid the need for the relatively difficult projector-camera calibration [58, 59].

2.2 Pinhole Camera Model

A pinhole camera model with no lens distortion can be described through a direct linear transformation (DLT). The DLT is a combination of the intrinsic camera matrix, the rotation matrix, and the translation matrix. Therefore, a camera system can be described by

S P_C = K [R|T] P_W,   (2.2)

S \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},   (2.3)

where P_C is the image coordinates in homogeneous form, S is a scaling factor, and P_W is the homogeneous form of the 3D point in world coordinates. K is the intrinsic camera matrix that specifies the focal lengths and the coordinates of the principal point and represents a projective transformation from the 3D camera coordinates into the 2D image coordinates. R is the rotation matrix and T is the translation matrix that are required to transform the point from world coordinates to 3D camera coordinates. f is the focal length and c is the image principal point. The aforementioned model describes the image formation in the camera sensor after correcting the lens distortion.

Figure 2.2: Pinhole camera model.
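As a minimal sketch of the projection model in Eqs. (2.2) and (2.3), the snippet below maps a world point into pixel coordinates with NumPy; the intrinsic and extrinsic values are hypothetical, and lens distortion (discussed next) is ignored.

```python
import numpy as np

def project_point(K, R, T, P_world):
    """Project a 3D world point to pixel coordinates using S*p = K [R|T] P_W (Eq. 2.2)."""
    P_h = np.append(P_world, 1.0)               # homogeneous world point (X, Y, Z, 1)
    RT = np.hstack([R, T.reshape(3, 1)])        # 3x4 extrinsic matrix [R|T]
    p = K @ RT @ P_h                            # unnormalized image point
    return p[:2] / p[2]                         # divide out the scale factor S

# Hypothetical intrinsics: fx = fy = 800 px, principal point at (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                                   # camera aligned with the world axes
T = np.zeros(3)                                 # camera at the world origin
print(project_point(K, R, T, np.array([0.1, -0.05, 2.0])))  # pixel coordinates (u, v)
```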
2.3 Image Distortion

In addition to the intrinsic camera matrix, distortion coefficients are needed to correct the effects of the radial and tangential distortion introduced by the lens. Radial distortion occurs when the light rays are bent more at the edges of the lens than at the optical center, while tangential distortion occurs when the lens and the image plane are not parallel. Since these types of distortion are constant over time, they can be easily estimated and compensated for. One of the well-known approaches is the Brown-Conrady distortion model, which describes the distortion by a series of higher-order polynomials. The radial distortion of the pixel coordinates can be described by

x' = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6),   (2.4)
y' = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6),   (2.5)
x = \frac{u - c_x}{f_x}, \quad y = \frac{v - c_y}{f_y}, \quad r = \sqrt{x^2 + y^2},   (2.6)

where k_1, k_2, and k_3 are the coefficients of radial distortion, x and y are the normalized image coordinates of the undistorted pixel locations, and x', y' are the distorted coordinates. Similarly, the tangential distortion is given by:

x_{distorted} = x + [2 p_1 x y + p_2 (r^2 + 2 x^2)],   (2.7)
y_{distorted} = y + [p_1 (r^2 + 2 y^2) + 2 p_2 x y],   (2.8)

where p_1 and p_2 are the tangential distortion coefficients of the lens.

2.4 Stereo Camera Model

Stereo camera parameters are required to map the pixels between two cameras. Those parameters describe the rotation and translation from one camera to the other. From the single-camera setup, we can extract each camera location with reference to the calibration board. Both of the images from the stereo cameras are corrected to remove the effect of the lens distortion. The camera system can be described by the following equation:

S \begin{pmatrix} u & v & 1 \end{pmatrix} = \begin{pmatrix} x & y & z & 1 \end{pmatrix} \begin{pmatrix} R \\ T \end{pmatrix} K, \qquad K = \begin{pmatrix} f_x & 0 & 0 \\ s & f_y & 0 \\ c_x & c_y & 1 \end{pmatrix}.   (2.9)

Because we use the same type of camera for the system design, both cameras are assumed to have the same intrinsic matrix (described in the single-camera calibration part). The rotation and translation matrices between the cameras, and the relationship between the first and second camera locations in world coordinates, can be described by the following equations:

\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} = R_{12} \begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} + T_{12},   (2.10)

where (x_1, y_1, z_1) and (x_2, y_2, z_2) indicate the Camera 1 and Camera 2 positions in world coordinates, respectively, R_{12} is the rotation matrix, and T_{12} is the translation matrix. A transformation from each camera to the 3D point (X, Y, Z) is described by

\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} = R_1 \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + T_1, \qquad \begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} = R_2 \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + T_2.   (2.11)

By combining the two equations in 2.11, we get

\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} = R_2 R_1^{-1} \left( \begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} - T_1 \right) + T_2.   (2.12)

By comparing 2.10 and 2.12, the values of R_{12} and T_{12} can be calculated as follows:

R_{12} = R_2 R_1^{-1}, \qquad T_{12} = -R_2 R_1^{-1} T_1 + T_2.   (2.13)

Therefore, R_{12} and T_{12} can be calculated by estimating the orientation of each camera with respect to an object with arbitrary coordinates.

CHAPTER 3
MULTI-COLOR MULTI-RING STRUCTURED LIGHT SENSOR WITH AUTOMATIC STABILIZATION

3.1 Introduction

Structured light systems can be divided into single-shot and multi-shot systems [45]. Single-shot systems use spatial encoding, and this includes, but is not limited to, stripes with intensity [46] and color [47] coding, M-array based coding [48], random speckles [36], and Fourier transform based phase reconstruction [49].
Multiple-shot systems use temporal coding to encode the surface of the scanned object, as in the case of time-multiplexed binary codes [50] and phase measurement profilometry [51]. There are also some hybrid techniques that use both spatial and temporal coding to enhance performance [52]. Single-shot systems tend to have a fast acquisition time that is only limited by the camera acquisition speed. However, this comes at the cost of a long processing time for the acquired images and sensitivity to ambient light and to color changes of the scanned object [53]. Multiple-shot systems are more robust to changes in ambient light and surface texture, but they are sensitive to movement of the scanned object because the object is assumed to be static during the scanning process. Some multiple-shot systems have been developed to scan moving objects by using more advanced fast cameras and projection systems so that the object is relatively static during the scanning process [54]. These types of systems are complex, large, and require tight synchronization between the active projector and the camera [55]. Laser scanners are also considered a type of single-shot structured light sensor that projects a single laser stripe or ring [56]. Laser scanners are robust and can yield high-accuracy 3D maps. On the other hand, these systems suffer from slow scanning speed since the laser stripe is mechanically translated to cover the entire inspected object. Laser scanners also suffer from the effect of speckle noise and difficulty in scanning shiny, polished surfaces [57]. There are also some safety concerns from direct exposure of the eye to the focused laser beam. A single-shot approach with spatial encoding was adopted because it is suitable for inspecting moving objects, which is necessary for pipeline inspection sensors where the scanning platform is in continuous movement inside the pipe [60]. The dark environment inside the pipe also reduces the amount of interference with the projected pattern due to the lack of external illumination sources like ambient light. Single-shot systems also have simpler hardware requirements because they do not need active projection systems, which simplifies the miniaturization of the sensor so that it can be inserted inside small diameter pipes. In this chapter, we introduce an endoscopic SL sensor for the internal inspection of plastic gas pipelines. The sensor has the ability to reconstruct the 3D profile of the pipe internal surface and detect the existence of surface deformations and material loss defects. The sensor is a single-shot SL sensor with a simple, small-size setup that consists of only a single camera and a slide projector. A multi-color multi-ring pattern with black slits is used to increase the robustness of the sensor and to deal with the limited depth of field of the projector. The multi-ring approach increases the number of 3D points acquired from each collected frame when compared to a single-ring laser scanner. The increased number of data points plays an important role in facilitating the registration and stabilization process since it provides a more complete representation of the pipe shape from only a single shot. We also introduce an automatic procedure to electronically stabilize the acquired 3D frames. The algorithm exploits the geometric constraints from the cylindrical shape of the scanned pipe to estimate the orientation of the sensor inside the pipe.
This stabilization algorithm is designed to reduce the effect of the continuous change in the orientation of the sensor when it is mounted on a moving robotic platform. The small size of the sensor and the embedded electronic stabilization algorithm allow for easy integration with gas pipeline inspection robots. The remaining sections of the chapter are organized as follows: Section 3.2 provides a detailed description of the design and fabrication of the SL endoscopic sensor. Section 3.4 shows the details and steps for the sensor calibration, image segmentation and decoding, and point cloud registration. The experimental results are introduced in Section 3.5, and the conclusion of the chapter is given in Section 3.7.

3.2 Endoscopic SL Sensor Design and Fabrication

3.2.1 Sensor Design

A structured light sensor consists of a projection module that projects a highly textured pattern and a camera that captures the deformations in the projected pattern [32]. The camera and projector are separated by a predefined distance and placed in an orientation that enables them to share a similar baseline to simplify the triangulation process. The proposed schematic of the SL sensor for cylindrical geometry is shown in Fig. 3.1.a. The setup consists of a projection module (P) and a camera (C) that is positioned in front of the projection module. The camera and the projection module are assumed to share the same main axis to simplify the triangulation process (a more general case is discussed in the calibration section). By using symmetry around the main axis, the 3D triangulation can be reduced to 2D, and the distance in the radial and axial directions can be calculated for each angle φ. Figure 3.1.b shows the triangulation process in the sensor based on a pinhole camera model for both the projector and the camera.

Figure 3.1: a) Schematic of the endoscopic SL sensor, b) Triangulation process in cylindrical geometry.

Here, f_c is the camera focal length, f_p is the projector focal length, and d is the distance between the camera and projector; r_c is the position of the point in the camera image coordinates, and r_p is the position of the point in the projector image coordinates. By assuming equal focal lengths for the projector and the camera (f = f_p = f_c), the cylindrical coordinates of the 3D point are given by:

z = \frac{d}{1 - \dfrac{r_p}{r_c}}, \qquad \rho = \frac{r_p\, z}{f}.   (3.1)

These equations assume a single-ring projection with a fixed projection focal length. The equations are generalized to a multiple-ring setup by having a different r_p value for each projected ring.

3.2.2 Pattern Design

The pattern was designed to be a sequence of color-changing stripes. Color coding is preferred over intensity coding because it increases the number of available codes and enhances robustness toward noise. It is worth noting that monochrome sensors can provide higher resolution and better low-light capabilities due to the elimination of the Bayer filter and demosaicing process from the camera sensor. Each colored stripe is separated from its neighbor by a thin black slit to enhance the detectability of the edges during the decoding process. This approach also allows the use of neighboring rings with the same colors, which increases the number of available code words for a specific code word length. Initial experiments with direct color coding without the slits showed difficulty in reliably detecting the edges.
This is due to the blurring effect from the shallow depth of field of the projection module. The final pattern was designed to have only three colors: red, green, and blue. These colors provide maximum separation in the RGB domain, where they occupy three distinct corners. A De Bruijn sequence generator was used to generate a sequence of non-repeated code words [47]. This scheme guarantees that a specific sequence with a length of n will occur only once in the generated sequence. A sequence with k characters and a code length of n can generate a code sequence with a length of k^n stripes. Therefore, a projected pattern with three colors (red, green, blue) and a code length of three can generate 3^3 = 27 stripes. The location of a stripe can be decoded if three correct consecutive colors are detected. The color-coded sequence of the projected pattern can be seen in Figure 3.2.a. The color sequence is converted to the polar domain to create a pattern with 11 concentric color-coded rings, as seen in Figure 3.2.b.

Figure 3.2: a) Color-coded sequence of the projected pattern, b) Multi-color multi-ring pattern created from the color-coded sequence, c) Fabricated endoscopic structured light sensor.

3.2.3 Sensor Fabrication

A static slide projector was used to reduce sensor complexity, size, and power consumption. This type of projector can be scaled to a diameter as small as 4 mm [37]. The fabricated slide projector consists of a high-intensity light-emitting diode (LED), a collimation lens, a transparency slide, and a projection lens. The high-intensity LED is the main light source that illuminates the pattern. The diverging light rays from the LED are gathered and focused on the slide via the collimator lens. The transparency slide is used to filter or attenuate specific wavelengths of the white light according to the texture printed on the slide. The slide was directly printed with an inkjet printer on a transparency film. Finally, a projection lens is used to direct and focus the image of the pattern on the projection plane. A transparent glass tube was used to connect the camera and the projector to enable the projection of the colored rings onto the pipe walls. A complementary metal-oxide-semiconductor (CMOS) camera was fitted with a wide-angle lens to increase the field of view (FOV) and capture the projected rings. The distance (d_t) between the camera and the projector is dependent on the projector and camera fields of view and the inspected pipe diameter. To ensure that the largest projected ring in the pattern is within the field of view of the camera, the distance between the projector and the camera must satisfy:

d_t < \frac{R}{\tan(\theta_{pl}/2)} - \frac{R}{\tan(\theta_c/2)},   (3.2)

where R is the radius of the inspected pipe, θ_pl is the projection angle of the outer edge of the ring, and θ_c is the viewing angle of the camera. To ensure that the smallest ring in the pattern is not blocked by the projector body, the projection angle of its inner edge (θ_ps) must satisfy:

\theta_{ps} > 2 \arctan\!\left(\frac{R_{sen}}{d_g}\right),   (3.3)

where R_sen is the sensor radius and d_g is the length of the transparent glass tube. A picture of the sensor explaining its main components is shown in Figure 3.2.c. The final fabricated sensor has a diameter of 18 mm. The small sensor diameter allows easy maneuverability of the sensor through elbow joints and facilitates integration with robotic platforms.
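The design constraints in Eqs. (3.2) and (3.3) can be checked numerically before fabrication. The sketch below does so for a hypothetical set of dimensions; the numbers are illustrative and are not the as-built values of the sensor.

```python
import math

def max_camera_projector_distance(R, theta_pl_deg, theta_c_deg):
    """Upper bound on the camera-projector distance d_t so that the largest ring
    stays inside the camera field of view (Eq. 3.2)."""
    return R / math.tan(math.radians(theta_pl_deg) / 2) - R / math.tan(math.radians(theta_c_deg) / 2)

def min_inner_projection_angle(R_sen, d_g):
    """Lower bound on the projection angle of the smallest ring's inner edge so that
    it is not blocked by the projector body (Eq. 3.3), returned in degrees."""
    return math.degrees(2 * math.atan(R_sen / d_g))

# Hypothetical values: 76 mm pipe radius, 120 deg outer projection angle,
# 160 deg camera viewing angle, 9 mm sensor radius, 40 mm glass tube length
print(max_camera_projector_distance(76.0, 120.0, 160.0))  # allowable d_t in mm
print(min_inner_projection_angle(9.0, 40.0))              # minimum inner ring angle in deg
```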
3.3 Simulation Environment

A simulation environment was created to help develop the algorithms in parallel with the ILI SL hardware design and testing. It also provides a controlled environment to test the robustness of the developed algorithms against different types of noise and interference conditions. The environment can also be modified to test different techniques under different environmental and noise conditions. The simulation environment can simulate a light source and a camera that are modifiable to perform different scanning tasks for different inspection target shapes. The simulation geometry is created by using ray-tracing software (POV-Ray) to simulate the structured light scanning process. Figure 3.3 shows the results of scanning a simulated geometry (a bent pipe) with the colored structured light sensor. Figure 3.3.a shows the 3D geometry of the inspected object, Figure 3.3.b shows a sample picture from the simulation results, Figure 3.3.c shows a point cloud created by combining the 3D reconstructions of multiple simulated frames, and Figure 3.3.d shows the final rendering.

Figure 3.3: Simulation results showing the scanning and reconstruction of a bend along the pipe: a) Simulated pipe, b) Sample simulated picture, c) Reconstructed point cloud from multiple simulated frames along the pipe, d) Final rendered surface of the pipe.

3.4 3D Surface Reconstruction

3.4.1 Sensor Calibration

Accurate reconstruction of the scanned object surface requires precise determination of the sensor parameters. The parameters include the camera intrinsic parameters, the camera lens distortion parameters, the angles of projection of the colored rings, and the stereo parameters between the camera and projector. The calibration procedure starts with calibrating the camera first [61], correcting the camera lens distortion, and then creating a set of 3D points to use for the projector calibration. A pinhole camera model can be described as follows:

S \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}, \qquad K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix},   (3.4)

where x, y, z are the point world coordinates, u and v are the point's image coordinates, and K is the camera intrinsic matrix. r_{i,j} are the rotation matrix elements and t_i are the translation matrix elements. f is the focal length and c is the image principal point. The aforementioned model describes the image formation in the camera sensor after correcting the lens distortion. The projector cannot see the external environment; therefore, the calibrated camera is used to create a set of 3D points for the projector calibration. The process is performed in two steps. In the first step, a predefined pattern is used to obtain the orientation of the camera with respect to the calibration board. ArUco patterns were used because they can be identified even with partial obstruction of the calibration board. Once the pattern is identified, the camera orientation is calculated and the normal to the calibration board is determined. In the second step, the camera captures an image of the projected rings on the same calibration board. With known camera parameters and board orientation, the 3D locations of the projected rings on the board can be calculated by using ray-plane intersection.

Figure 3.4: a) Ray-plane intersection, b) An acute cone with a main axis that is not parallel to the z-axis.
In the beginning, the image points are normalized to have f = 1 (homogeneous coordinates):

P_0 = K^{-1} p,   (3.5)

where P_0 represents the homogeneous coordinates of the point and p represents the image coordinates of the point with a center of (0, 0). The line intersecting the calibration board is described by:

\vec{L} = \vec{P}_0 + s\,\overrightarrow{P_0 P_1} = \vec{P}_0 + s\,\mathbf{u},   (3.6)

where s is the distance along the unit direction vector \mathbf{u} = \overrightarrow{P_0 P_1}. Any point P_I belonging to the plane satisfies:

\mathbf{n} \cdot (\vec{P}_I - \vec{V}_0) = 0.   (3.7)

At the intersection point we have:

\vec{P}_I - \vec{V}_0 = \mathbf{w} + s_I\,\mathbf{u}, \qquad \text{where } \mathbf{w} = \vec{P}_0 - \vec{V}_0.   (3.8)

Therefore:

\mathbf{n} \cdot (\mathbf{w} + s_I\,\mathbf{u}) = 0, \qquad s_I = \frac{-\mathbf{n} \cdot \mathbf{w}}{\mathbf{n} \cdot \mathbf{u}}.   (3.9)

By calculating s_I, the position of the point P_I can be calculated by using Eq. 3.6. Figure 3.5.a explains the processes of data collection and 3D data generation with ray-plane intersection. After generating the 3D point set, the next step is to calculate the angle of each projected cone. Every ring is assumed to belong to the surface of an acute cone (θ < π/2). Each cone has a main axis parallel to the z-axis, a vertex V, and an angle θ (θ = arctan(r_p / f_p)), as explained previously in Figure 3.1.b. Any point on this cone must satisfy the equation:

\frac{\rho}{z} = \tan\theta = \frac{r_p}{f_p}.   (3.10)

Therefore, θ can be calculated as follows:

\theta = \arg\min_{\theta} (\rho - z \tan\theta).   (3.11)

In this approach, we estimate the cone angle while assuming that the cone shares the same main axis as the camera, as mentioned in Subsection 3.2.1. This simplified assumption was only valid for simulation data with tight camera and projector alignment. Small misalignments between the slide and the projection lens, or between the camera and the projector, are unavoidable during the assembly process. Therefore, a more general form of the cone, with its main axis not parallel to the z-axis, was considered, as shown in Figure 3.4.b. With this model, the camera continues to be the reference, while the projector is represented by a cone with a main axis \vec{A} that is not parallel to the z-axis. This cone can be described as follows:

\vec{A} \cdot \frac{\vec{P} - \vec{V}}{\|\vec{P} - \vec{V}\|} = \cos\theta.   (3.12)

Therefore, the cone parameters can be calculated as follows:

(\theta, \vec{A}, \vec{V}) = \arg\min \left( \vec{A} \cdot \frac{\vec{P} - \vec{V}}{\|\vec{P} - \vec{V}\|} - \cos\theta \right).   (3.13)

θ is estimated for each ring, while the main axis \vec{A} and the vertex V are assumed to be the same for all the rings. The values of θ are estimated for all the rings at the same time with the constraint θ_i < θ_{i+1}. Figure 3.5.b shows the collected 3D data for a single ring, and Figure 3.5.c shows a plot of the estimated cone surface overlaid on the collected 3D data points.

Figure 3.5: a) 3D data points generation, b) 3D data collected for a single ring, c) Estimated cone surface overlaid on the collected 3D data points.

To have a practical error measurement, we calculate the distance D from the estimated cone surface to each of the 3D points:

D = \|\vec{P} - \vec{V}\| \sin(\theta - \theta_e),   (3.14)

where θ_e represents the estimated projection angle of the cone, while θ represents the angle between the point P and the vector \vec{A}, and can be calculated directly by using Eq. 4.2. The calculated value of the root mean square error from the calibration process was 2.1493 mm. The relatively large value of the calibration error can be related to the use of a general-purpose inkjet printer with limited resolution for the printing of the projection pattern. The limited printing resolution generates artifacts in the projected pattern, as can be seen in Figure 3.6.a. The calibration algorithm assumes a perfectly circular pattern projection; therefore, these imperfections cause artifacts in the final reconstruction and increase the calibration errors. This problem is dealt with during the final reconstruction stage by performing further calibration inside a clean pipe with a known diameter, as explained in Section 3.5.1.
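Before moving on, a minimal sketch of the ray-plane intersection step in Eqs. (3.5) to (3.9), which converts a normalized image ray into a 3D point on the calibration board; the camera matrix and board pose used here are hypothetical.

```python
import numpy as np

def ray_plane_intersection(K, pixel, n, V0):
    """Intersect the camera ray through `pixel` with the plane (n, V0), per Eqs. (3.5)-(3.9).

    K     : 3x3 camera intrinsic matrix
    pixel : (u, v) pixel coordinates of a point on a projected ring
    n     : normal vector of the calibration board plane
    V0    : any point lying on the board plane
    """
    P0 = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])  # Eq. (3.5), ray point at f = 1
    u = P0 / np.linalg.norm(P0)                                  # unit direction of the ray
    w = P0 - V0                                                  # Eq. (3.8)
    s_I = -np.dot(n, w) / np.dot(n, u)                           # Eq. (3.9)
    return P0 + s_I * u                                          # intersection point, Eq. (3.6)

# Hypothetical example: 800 px focal length, board 0.5 m in front of the camera, slightly tilted
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
n = np.array([0.05, 0.0, 1.0])
V0 = np.array([0.0, 0.0, 0.5])
print(ray_plane_intersection(K, (400.0, 260.0), n, V0))
```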
Figure 3.6: Image segmentation and 3D reconstruction, a) Acquired image by the sensor, b) Image converted to the polar domain, c) Second derivative along the radial direction, d) Extracted edges of the dark slits, e) Dark slit centroids overlaid on the color image, f) 3D reconstruction of the acquired image.

3.4.2 Image Segmentation and Decoding

The SL sensor depends on measuring the amount of deformation in the projected pattern to reconstruct the shape of the scanned object. Therefore, the reconstruction algorithm focuses on measuring the displacement in the positions of the dark slits that separate the projected colored rings. This subsection describes the process of detecting, localizing, and matching the projected edges. The sensor performance can be evaluated by two factors: the precision of the sensor in calculating the dark slit positions, and its ability to scan complex surfaces. The first factor depends on the capability of the sensor to accurately localize the center of each projected dark slit. The second factor depends on the ability of the sensor to detect projected dark slits and reject outliers from the scanned object texture. The SL sensor images are acquired in YUV 4:2:0 format, where maximum resolution is retained for the intensity image while the bandwidth is reduced by lowering the resolution of the chroma data. A sample RGB image acquired by the sensor is shown in Figure 3.6.a. The reconstruction process starts by performing data cleaning and noise reduction with Gaussian low-pass filtering followed by median filtering. The acquired image is then converted to the polar domain to simplify the detection of the edges in the radial direction, as shown in Figure 3.6.b. Only the Y channel is used for edge detection since it contains all the required intensity-change information. The gradient of the image across the radial direction is calculated by applying a smoothed gradient kernel twice. The first kernel results in positive values for the rising edges and negative values for the falling edges. The filter is then applied again to separate the dark slits, as shown in Figure 3.6.c. A global threshold is then calculated and applied to extract a binary mask. The mask is cleaned with morphological cleaning operations, and the edges are then extracted. Connected component analysis is then used to remove short random edges to further clean the mask and reduce the possibility of false edges, as shown in Figure 3.6.d. These edges represent a rough estimation of the edges of the dark slits between the colored stripes. The location of the center of the dark slit (i_c) is calculated by using the gray gravity method (GGM) as:

i_c = \frac{\sum_{i=0}^{N} x(i)\, i}{N(N+1)/2},   (3.15)

where i is the index of the pixel and x(i) is the value of the i-th pixel. GGM takes into account the value of the pixel in the image and its distance from the initial slit center. An image of the detected edges overlaid on the polar image is shown in Figure 3.6.e.
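A minimal sketch of the gray gravity centroid in Eq. (3.15), applied to a one-dimensional intensity profile across a single slit; the profile values are made up for illustration and follow the form of the equation as written above.

```python
import numpy as np

def slit_center_ggm(profile):
    """Intensity-weighted slit index following the form of Eq. (3.15).

    `profile` holds the pixel values x(i) across the slit, i = 0..N.
    The weighted sum of indices is normalized by N(N+1)/2, as in Eq. (3.15).
    """
    x = np.asarray(profile, dtype=float)
    n = len(x) - 1
    idx = np.arange(len(x))
    return np.sum(x * idx) / (n * (n + 1) / 2.0)

# Hypothetical profile across one dark slit (values between 0 and 1)
print(slit_center_ggm([0.1, 0.4, 0.9, 1.0, 0.8, 0.3, 0.1]))
```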
The extracted edges are labeled by determining the actual colors of their two neighbors. Each color is given a value that is related to its order in the RGB matrix (red = 1, green = 2, blue = 3). An edge with red and blue neighbors is given a label of 13. A vector of the detected edges is created for each angle in the polar image. The vector is compared to the projected sequence by using a sliding window with a length that depends on the length of the codeword used during the pattern design. The window tests the matching of the detected edges against the projected sequence. If a match is found, the code in the window is registered as a match and removed from the queue. If no match is found, the window is shifted by a single character and the matching process is restarted until the detected sequence is exhausted. Calibration data are then used to project the detected edges to their actual 3D coordinates, as shown in Figure 3.6.f.

3.4.3 Sensor Stabilization and Point Cloud Registration

The sensor is designed to be attached to a scanning platform that moves along the pipe during the 3D imaging process. Every single frame produces a sparse reconstruction of the pipe surface with a density that depends on the number of projected rings. In an ideal situation, the sensor is assumed to be in the middle of the pipe and always pointing in the direction of the scanning platform movement (parallel to the pipe main axis in the z-direction). In this case, the reconstructed 3D frames can be stacked sequentially by only adding a displacement in the z-direction that depends on the scanner speed at the time of acquisition. Experimentally, this assumption is not practical because it is difficult to keep the sensor located exactly in the middle of the pipe and pointing in the forward direction. In this section, an algorithm is developed to estimate both the orientation of the sensor and its position inside the pipe for each acquired frame. The algorithm estimates the center of the reconstructed surface and the orientation of its principal axis. An ellipsoid is fitted to the point cloud to estimate its orientation, with the assumption that the scanned pipe has a smooth surface with no large defects. The ellipsoid model was employed to improve the algorithm's robustness toward cases of deformation in the shape of the inspected pipe, like a pipe with an oval shape or a pipe with a small dent. The model can also handle errors from the calibration process, which can result in a 3D profile with an oval shape. The general equation of a conic is given by:

A x^2 + B y^2 + C z^2 + 2Dxy + 2Exz + 2Fyz + 2Gx + 2Hy + 2Iz + J = 0.   (3.16)

In matrix form, Eq. 3.16 can be written as:

\mathbf{x}^T Q \mathbf{x} = 0,   (3.17)

\mathbf{x} = (x, y, z, 1)^T, \qquad Q = \begin{pmatrix} A & D & E & G \\ D & B & F & H \\ E & F & C & I \\ G & H & I & J \end{pmatrix}.

For an ellipsoid, the constraint is 4J - I^2 > 0, where I = A + B + C and J = AB + BC + AC - D^2 - E^2 - F^2 [62] (here I and J denote invariants of the quadric rather than the coefficients above). In our case, a less strict linear constraint of A + B + C = 1 was sufficient [63]. After applying the constraint, Eq. 3.16 can be written as follows:

F(\mathbf{X}, \mathbf{a}) = \mathbf{X}\mathbf{a} = A(x^2 - z^2) + B(y^2 - z^2) + 2Dxy + 2Exz + 2Fyz + 2Gx + 2Hy + 2Iz + J = -z^2.   (3.18)

In matrix form, we have the following problem:

\mathbf{X}\mathbf{a} = \mathbf{b},   (3.19)

\mathbf{X} = \left( x^2 - z^2,\; y^2 - z^2,\; 2xy,\; 2xz,\; 2yz,\; 2x,\; 2y,\; 2z,\; 1 \right), \qquad \mathbf{a} = (A, B, D, E, F, G, H, I, J)^T, \qquad \mathbf{b} = -z^2.

To estimate \mathbf{a}, we minimize the sum S:

S = \sum_{i=1}^{N} \left( F(\mathbf{X}, \mathbf{a}) - \mathbf{b} \right)^2,   (3.20)

\mathbf{a} = \left( \mathbf{X}'\mathbf{X} \right)^{-1} \mathbf{X}'\mathbf{b}.   (3.21)
X has dimensions of N by 9, a has dimensions of 9 by 1, and b has dimensions of N by 1. After calculating the ellipsoid parameters, the center of the ellipsoid c can be calculated as follows:

\mathbf{c} = \begin{pmatrix} x_0 \\ y_0 \\ z_0 \end{pmatrix} = -\begin{pmatrix} A & D & E \\ D & B & F \\ E & F & C \end{pmatrix}^{-1} \begin{pmatrix} G \\ H \\ I \end{pmatrix}.   (3.22)

The principal axes of the ellipsoid are estimated by calculating the eigenvectors and eigenvalues of Q: the eigenvectors represent the directions of the main axes, while the eigenvalues represent the amount of the ellipsoid's stretch along each of the three axes. To restore the reconstructed pipe section to the correct alignment, the center of rotation is estimated, shifted to the origin (origin = (0, 0)), and counter-rotation is applied. The center of rotation is calculated by estimating the intersection point between the pipe principal axis and the plane z = 0. The intersection point is calculated by using the ray-plane intersection algorithm developed in Subsection 3.4.1. Counter-rotation is then applied by multiplying the point cloud data by the rotation matrices as follows:

D_c = R_x R_y D,   (3.23)

R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_x & -\sin\theta_x \\ 0 & \sin\theta_x & \cos\theta_x \end{pmatrix}, \qquad R_y = \begin{pmatrix} \cos\theta_y & 0 & \sin\theta_y \\ 0 & 1 & 0 \\ -\sin\theta_y & 0 & \cos\theta_y \end{pmatrix},

where D represents the original data, D_c represents the data after correction, and θ_x and θ_y are the rotation angles around the x and y axes, respectively.

Figure 3.7: Alignment correction of simulated data, a) Original data, b) Data after shifting and rotation, c) Ellipsoid surface fitted to the data, d) Data after applying alignment correction.

Figure 3.7 shows an example of alignment correction for data with a cylindrical shape, a height of 6 inches, and a radius of 3 inches. 45 degrees of the cylinder data were removed to simulate the shadowing by the camera cables. Uniformly distributed random noise on the interval from -0.25 to 0.25 inches was added to simulate noise in the acquired data, as shown in Figure 3.7.a. The data were rotated by (10, 10) degrees around the x and y axes and then shifted by (1, -1) inches in the x and y directions, as shown in Figure 3.7.b. The data were successfully fitted to an ellipsoid, as shown in Figure 3.7.c. The estimated parameters are (10.0539, 10.0205) degrees for the rotation angles and (0.9814, -0.9881) inches for the shifts in the x and y directions. Alignment correction is then applied by shifting the center of rotation to the origin and applying counter-rotation. The data after alignment correction are shown in Figure 3.7.d. A demonstration with experimental data is shown in Figure 3.8. The data acquired by the sensor are shown in Figure 3.8.a. The data are downsampled first to reduce the effect of noise and small defects on the pipe surface, as shown in Figure 3.8.b. The data were then fitted to estimate the orientation of the sensor, as shown in Figure 3.8.c. The rotation angles were estimated to be (5.3, 0.02) degrees and the shifts were estimated to be (0.1509, -0.3611). The corrected data are shown in Figure 3.8.d. To register multiple frames, the corrected data are stacked by adding a constant displacement in the z-direction for each acquired frame. The displacement in the z-direction depends on the scanner speed at the time of acquiring each frame (a constant displacement was used since the scanning platform was moving at constant speed). The rotation around the z-axis cannot be estimated with this algorithm because the pipe is symmetrical around the axis of rotation. Rotation around the z-axis will be addressed in the future by adding input from an inertial measurement unit.
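For reference, a minimal sketch of the linear least-squares ellipsoid fit of Eqs. (3.18) to (3.22) that underlies the pose estimation above, written with NumPy under the stated A + B + C = 1 constraint; the synthetic point cloud is hypothetical, and the eigen-decomposition and counter-rotation steps are omitted.

```python
import numpy as np

def fit_ellipsoid(points):
    """Fit the constrained ellipsoid of Eqs. (3.18)-(3.21) to Nx3 points; return (a, center)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    X = np.column_stack([x**2 - z**2, y**2 - z**2, 2*x*y, 2*x*z, 2*y*z,
                         2*x, 2*y, 2*z, np.ones_like(x)])      # design matrix of Eq. (3.19)
    b = -z**2
    a, *_ = np.linalg.lstsq(X, b, rcond=None)                  # least-squares solution, Eq. (3.21)
    A, B, D, E, F, G, H, I, J = a
    C = 1.0 - A - B                                            # linear constraint A + B + C = 1
    M = np.array([[A, D, E], [D, B, F], [E, F, C]])
    center = -np.linalg.solve(M, np.array([G, H, I]))          # ellipsoid center, Eq. (3.22)
    return a, center

# Hypothetical test: noisy cylinder of radius 3 along z, shifted by (1, -1) in x and y
np.random.seed(0)
t = np.random.uniform(0, 2 * np.pi, 2000)
zz = np.random.uniform(0, 6, 2000)
pts = np.column_stack([1 + 3 * np.cos(t), -1 + 3 * np.sin(t), zz]) + 0.05 * np.random.randn(2000, 3)
params, c = fit_ellipsoid(pts)
print(c[:2])   # the x and y components should be close to the (1, -1) shift
```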
Figure 3.8: Alignment correction of experimental data, a) Original data, b) Data after downsampling, c) Ellipsoid surface fitted to the data, d) Data after applying alignment correction.

3.5 Discussion and Performance Evaluation

In this section, we discuss the experimental results and the tests performed to evaluate the performance of the sensor. The first test was performed in a controlled lab environment to validate the working concept of the sensor. The second test was performed to simulate a more realistic scenario where the sensor was mounted on a robotic platform to scan a yellow MDPE pipe used for gas distribution.

3.5.1 Inspection of PVC White Pipe

A 6-inch (152.4 mm) polyvinyl chloride (PVC) pipe was used to evaluate the performance of the structured light module in a controlled lab environment. An artificial defect with a rectangular shape was extracted from the pipe wall, and the outer surface was covered to enable the reconstruction of an almost 100% wall material loss defect profile. An internal image of the defect is shown in Figure 3.9.a. The pipe was scanned by attaching the sensor to a scanner that moves parallel to the pipe's main axis. Three hundred frames were collected at a rate of 15 frames/second with a scanning platform that moves at a speed of 0.5 inch/second (12.7 mm/s). Each frame was segmented and then triangulated by using the calculated calibration parameters. Only a single ring was used to create the main 3D profile of the pipe, while the other rings were used for the sensor alignment estimation. In order to have a successful registration, data from each frame were used to estimate the sensor orientation by using the orientation estimation procedure developed in Subsection 3.4.3. Direct application of the stabilization algorithm to the collected data was successful when estimating the sensor position inside a clean pipe section but failed when the algorithm reached the location of the defect. Due to the large size of the defect, the downsampling was not sufficient to reduce the defect's effect on the final estimated sensor orientation. To have a successful estimation, the defect area was not included in the sensor pose estimation (data in a specific range of angles were excluded). Increments of 0.033 inches (0.8382 mm) in the z-direction were then added to each collected frame before adding it to the main point cloud.

3.5.1.1 Artifacts Correction

The calibration algorithm assumes a perfectly circular pattern projection; therefore, the imperfections in the projected pattern appear in the final reconstruction as deformations in the pipe wall. In order to correct these artifacts, further calibration was performed by acquiring a single frame from a cylindrical pipe with no damage. This frame serves as a calibration frame: its radius values are subtracted from each newly reconstructed frame, and then a constant value that represents the calibration pipe radius is added to all the reconstructed points. This approach helped in suppressing the reconstruction artifacts and produced a uniform pipe surface. A comparison between the 3D reconstruction before and after applying this approach is shown in Figure 3.9.c and Figure 3.9.d, respectively.

Figure 3.9: Scanning of white PVC pipe, a) Internal image of the scanned pipe showing the rectangular defect, b) Sample image from the sensor showing the rectangular defect, c) Original reconstruction of the pipe surface, d) Reconstructed pipe surface after performing artifacts corrections.
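A minimal sketch of the reference-frame correction described above: per-angle radii measured inside a clean calibration pipe are subtracted from each new frame and the nominal calibration radius is added back. The array shapes and values are hypothetical.

```python
import numpy as np

def correct_frame(frame_radii, reference_radii, calibration_radius):
    """Suppress pattern-induced artifacts in one reconstructed frame.

    frame_radii        : measured radius at each polar angle for the new frame
    reference_radii    : radius at the same angles measured inside a clean calibration pipe
    calibration_radius : known radius of the clean calibration pipe
    """
    return frame_radii - reference_radii + calibration_radius

# Hypothetical 1-degree sampling: the same systematic ripple appears in both frames,
# so it cancels and only the true 2 mm depression around 100-120 degrees remains.
angles = np.arange(360)
ripple = 0.5 * np.sin(np.radians(angles * 11))            # projection artifact, in mm
reference = 76.2 + ripple                                 # clean 76.2 mm radius pipe
frame = 76.2 + ripple - 2.0 * ((angles > 100) & (angles < 120))
print(correct_frame(frame, reference, 76.2)[95:125])
```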
3.5.1.2 Effect of Blind Spots

The final 3D reconstruction of the pipe section is shown in Figure 3.10.a. The pipe surface was generated by using screened Poisson surface reconstruction. The 3D profile shows a cylindrical pipe with a defect shape that matches the scanned defect profile. Further analysis of the reconstruction can be seen in Figure 3.10.b, where the point cloud is converted to a heat map that represents the radius of the pipe at each reconstructed point.

Figure 3.10: Scanning of white PVC pipe, a) Final 3D rendering of the pipe surface, b) 3D point cloud of the pipe with the pipe radius as color map, c) Schematic of the SL sensor inside the pipe showing its blind spot, d) Cylindrical view of the pipe surface.

One of the observations from this figure is that there is an error in the reconstruction of the area shown in red. This error is related to the fact that the scanner cannot see this area, as shown in Figure 3.10.c. This area represents a blind spot for the scanner because the scanner is not perpendicular to the scanned surface. The length of the area can be estimated by knowing the projection angle (θ_p) of the projector and the depth of the defect (W). The length of the shaded area (L_S) can be calculated as follows:

L_S = W \tan(\theta_s), \qquad \theta_s = \pi/2 - \theta_p/2.   (3.24)

Eq. 3.24 indicates that the length of the shaded area can be reduced by increasing the projection angle and reducing the depth of the defect. Since we have no control over the depth of the defect, we can only increase the projection angle. Increasing the projection angle is possible, but it is also limited by the view angle of the camera and the distance between the camera and the projector. The results indicate that the sensor can reconstruct the objects successfully, but the performance deteriorates when dealing with abrupt height changes due to the creation of the shaded areas. This issue is not related to the structured light scanning sensor itself but to the limitations of using only a single camera and projector setup. One possible solution is to use two SL sensors in a back-to-back configuration so that blind spots for the first sensor are covered by the second sensor and vice versa. With this solution, a defect can be fully reconstructed if its minimum length is larger than 2 L_S (L_S is dependent on W).

3.5.1.3 Evaluation of Error in the Final 3D Profile

To evaluate the amount of error in the reconstructed 3D profile, the 3D data were converted to the polar domain. The radius of the inspected pipe (3 inches) was then subtracted from the data to identify the differences, as shown in Figure 3.10.d. The RMS value of the error in the 3D profile excluding the defect region is ±0.3922 mm. The RMS value of the error on the surface of the defect is 0.1940 mm (artifacts from the blind spot were not included). The error in the estimated defect surface was calculated by comparing it to the thickness of the inspected pipe wall (7.77 mm).
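A small numerical illustration of the blind-spot relation in Eq. (3.24) from Section 3.5.1.2; the projection angle and defect depth below are hypothetical.

```python
import math

def blind_spot_length(defect_depth_mm, projection_angle_deg):
    """Length L_S of the area shadowed by an abrupt wall-loss defect (Eq. 3.24)."""
    theta_s = math.pi / 2 - math.radians(projection_angle_deg) / 2
    return defect_depth_mm * math.tan(theta_s)

# Hypothetical: 7.8 mm deep defect, 120-degree projection angle -> roughly 4.5 mm shaded length;
# with two back-to-back sensors, a defect longer than twice this value is fully covered.
print(blind_spot_length(7.8, 120.0))
```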
3.5.2 Inspection of MDPE Yellow Pipe with Non-Stabilized Sensor

Plastic pipes used in gas distribution can have different colors like white, yellow, and orange. In this section, we discuss the performance of the sensor in scanning a 5.6-inch (142.24 mm) yellow MDPE pipe. Multiple artificial defects were introduced to the pipe's internal walls. The first defect is an impingement defect, and the second defect is a wall loss defect in the circumferential direction. The sensor was then mounted on a snake robot to test the effectiveness of using electronic stabilization.

The projector pattern was designed to have only three colors, red, green, and blue, to provide maximum separation in the RGB domain, where they represent three distinct corners. This design is based on having a white inspected surface where the surface reflects visible light wavelengths evenly. This assumption is not completely valid when scanning pipes with different colors like the yellow MDPE.

Figure 3.11: The effect of scanning yellow MDPE pipes, a) Raw sensor image with automatic white balance, b) Raw sensor image with modified white balance, c) Color distribution of the stripes with automatic white balance, d) Color distribution of the stripes with modified white balance.

A raw image from scanning a yellow pipe is shown in Figure 3.11.a, and the color distribution of the stripes is shown in Figure 3.11.c. From the color distribution, it can be noticed that the color contrast of the stripes has deteriorated and it is difficult to easily separate the colors. To solve this problem, the white balance of the camera was modified to provide maximum color separation (the gains of the RGB channels were modified during the acquisition phase). The results after modifying the camera white balance are shown in Figure 3.11.b and Figure 3.11.d. The results indicate that the sensor can be adapted to scan pipes with different surface colors by modifying the white balance of the camera. Special profiles can be created for pipes with different colors and can be loaded before scanning the pipes.

Figure 3.12: Scanning of a 35-inch long pipe section, a) 3D rendering of the pipe surface, b) Cylindrical view of the pipe surface.

Another challenge in this scenario was the misalignment and vibration from the omniwheels of the robotic platform. Direct registration without the use of the stabilization algorithm was not possible due to the continuous change of the sensor orientation inside the pipe. Therefore, the stabilization algorithm developed in Subsection 3.4.3 was used to estimate the orientation of the sensor for each collected frame. Frequency domain filtering was then used to suppress the periodic patterns from the vibration of the robotic platform. The 3D reconstruction of the pipe section is shown in Figure 3.12.a. To facilitate viewing the entire pipe section at once, the reconstructed profile is converted to the cylindrical domain to view the pipe as a single sheet, as shown in Figure 3.12.b. In this view, both the impingement and circumferential wall loss can be easily identified.
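The white-balance adjustment above amounts to a per-channel gain applied so that the red, green, and blue stripes remain well separated on a colored (here, yellow) pipe surface. A minimal post-processing sketch is shown below; in the actual sensor the gains were applied in the camera during acquisition, and the gain values here are purely illustrative:

```python
import numpy as np

def apply_channel_gains(rgb_image, gains=(0.8, 0.9, 1.6)):
    """Apply per-channel gains (R, G, B) to improve stripe color separation.

    rgb_image : (H, W, 3) uint8 image
    gains     : illustrative gain profile for a yellow MDPE pipe
    """
    out = rgb_image.astype(np.float32) * np.asarray(gains, dtype=np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```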
3.6 Small Diameter Sensor

One of the challenges during the inspection of gas distribution pipelines is the small diameter of the pipes used in the distribution network. In this section, we introduce the design of a sensor for the inspection of pipelines with diameters from 2 to 4 inches. In this sensor, the camera with the fisheye lens that was used in the previous sections was exchanged with an omnidirectional camera. The omnidirectional camera consists of a spherical mirror that is coupled with a camera with a narrow field of view to only capture the surface of the coupled mirror. In addition to the increased field of view, the mirror camera also offers a reduced distance between the camera and the projector, which is crucial for small diameter pipe sensor design. A schematic of the new sensor with the omnidirectional mirror is shown in Figure 3.13.a, a picture of the fabricated sensor is shown in Figure 3.13.b, and the projected pattern is shown in Figure 3.13.c. Figure 3.14 and Figure 3.15 show the inspection process of a smooth cylindrical pipe and a pipe section with a cardboard insert, respectively.

Figure 3.13: Small diameter sensor, a) Schematic, b) Picture of the fabricated sensor, c) Projected pattern.

Figure 3.14: a) Projected pattern, b) Captured pattern by the camera, c) Reconstruction from a single ring, d) Rendered profile.

Figure 3.15: Small diameter sensor, a) Schematic, b) Picture of the fabricated sensor, c) Sample picture from scanning a 4-inch white PVC pipe, d) 3D reconstruction of a cylindrical pipe surface.

3.7 Conclusion

In this chapter, we presented an endoscopic structured light based inspection sensor for the inspection of gas pipelines' internal walls. The sensor produces a 3D map of the pipe's internal walls to detect the existence of surface defects. The tool can detect the existence of wall material loss and generate a 3D profile of the detected defect. The tool acts as an endoscopic camera with an added projector to provide texture to the pipe surface. Concentric color rings with dark slits were used to provide spatial encoding to the smooth pipe surface. A calibration procedure was developed to extract the camera and projector parameters and enable accurate reconstruction of the pipe defects. We also introduced a simplified procedure for electronically stabilizing the sensor to reduce the sensor misalignment and enable the registration of the frames with a tilted sensor. The sensor can be adapted to scan pipes with different colors and surface conditions by tuning the parameters of the acquisition camera without requiring changes to the sensor hardware. The sensor performance was evaluated by using PVC and MDPE pipes, and it was able to detect the existence of defects and successfully reconstruct their profile.

CHAPTER 4
RGB-D SL SENSING WITH AUTOMATIC CALIBRATION

4.1 Introduction

Structured light sensors project a textured coded pattern to provide the inspected surface with the features needed for triangulation; therefore, SL systems can inspect surfaces without the need for unique surface features. This property is essential because it enables the inspection of smooth and featureless surfaces like the walls of plastic gas pipelines. While structured light systems are robust and compact, these systems have some drawbacks that limit their versatility. The first drawback is that the existence of the SL patterns limits the ability of the operator to perform a direct visual inspection and complicates the monitoring of color changes on the inspected surface. Phase shifting profilometry addresses this problem by decoding the projected sequence and extracting the DC component of the acquired image sequence.
While this is relatively easy for phase measurement profilometry systems, the case is not the same for other SL systems that use single-shot acquisition schemes, such as laser scanners and multiring structured light systems [64, 65]. These systems depend on only a single frame to perform the 3D reconstruction so that they can handle the reconstruction of moving objects. Therefore, it is difficult to separate the projected pattern and acquire high-quality ambient light images. One of the solutions for single-shot based techniques is to use two different vision bands, such as using an infrared camera to image an infrared laser ring while using a color camera to inspect the pipe surface [66]. While this approach is robust, it also increases the complexity and the size of the sensing system by requiring two separate sensing systems for RGB imaging and depth acquisition.

The second problem that is addressed in this chapter is projector calibration, which is one of the most time-consuming tasks for structured light sensors. Calibration is needed because it is almost impossible to mitigate manufacturing errors during the sensor fabrication process [67]. Therefore, many structured light sensors adopt a stereo-assisted SL scheme to avoid the need for camera-projector calibration [68, 69]. This type of setup is flexible, and the stereo camera calibration can yield highly accurate stereo parameters. Camera calibration for stereo-assisted systems is relatively easy since the camera is aware of the surrounding environment. Therefore, the calibration process can be easily performed by imaging an object with a set of unique features at predetermined 3D positions. One drawback of this approach is the added system complexity, increased system size, and doubling of the amount of acquired data, since two cameras are used instead of one. Stereo systems also require tight synchronization between the cameras because the stereo camera pair needs to be triggered at the same time. Adding a second camera to the setup is also difficult when the available space is very tight, as in the case of small-diameter pipelines.

For sensors with a single camera and projector, there is a need to calibrate both the camera and the projector. Since the light projector cannot recognize the outside environment, all the calibration algorithms depend on using the camera to help calibrate the projector [70, 71, 72]. The general procedure is to acquire an image of a calibration plane with predefined markers and then acquire another image of the same plane with the SL pattern projected on it [73, 74]. These two images produce a set of points with known 3D coordinates that are used to inversely calculate the projector's intrinsic parameters and the stereo matrix between the camera and projector. A set of these image pairs is acquired at different orientations to have a large number of constraints for the successful estimation of the calibration parameters. These approaches use a predefined pattern to estimate the camera position, which is problematic because the markers interfere with the projected patterns and lead to an incomplete reconstruction of the projected pattern [75, 76, 77]. The data acquisition is also lengthy and prone to errors from moving the sensor or the calibration plane while switching between ambient light and structured light patterns. SL sensors working in a pipe environment require frequent re-calibration.
The frequent re-calibration is needed because the inspection system is usually under mechanical stress from colliding with the pipe walls and vibration during the scanning rig movement. The result is a large amount of time wasted calibrating the sensor whenever a new calibration is needed. The calibration process is also challenging to perform outdoors in the field since it requires a relatively dark environment to decode the structured light pattern.

This chapter proposes a sensor with a synchronous sequential acquisition scheme where the sensor alternates between 3D profiling and RGB imaging. The sensor adopts a structured light source for surface encoding, an LED array for white light illumination, and a single RGB camera for image acquisition. The result is an added RGB acquisition capability to the SL sensor with minimal changes to the sensor design and no effect on the sensor size and diameter. In addition to the sensor design, an algorithm was developed to automatically calibrate the projector of the structured light sensor with minimum user effort. The calibration procedure takes advantage of the cylindrical nature of the inspected pipes to create a set of geometric constraints that help the estimation of the intrinsic and extrinsic sensor parameters. The algorithm uses direct images from the sensor inside the pipe to estimate the sensor orientation, automatically calculate the projector's intrinsic and extrinsic parameters, and eliminate the need for reference calibration points. This calibration scheme also enables outdoor field calibration of the structured light sensor because the calibration process is performed inside a closed pipe environment. The following sections provide a detailed description of the sensor design and hardware, the synchronization and acquisition process, the automatic calibration theory, simulation and experimental validation of the calibration algorithm, and finally, a summary and conclusion of the chapter.

4.2 Sequential RGB-D Sensor

4.2.1 Sensor Design

The sensor can be considered as a structured light sensor with an added white light source or as an LED-assisted camera with an added structured light source. Therefore, the sensor can provide surface texture RGB imaging and generate a 3D profile of the inspected surface. The sensor adopts a single camera to monitor the pipe surface with white light and capture the SL pattern deformations. The camera is accompanied by a slide projector that projects a set of concentric rings and an LED array that provides white illumination.

The projection pattern is designed as a set of concentric colored rings that are separated by thin black slits. The usage of black slits improves the detectability of the edges and allows the usage of neighboring stripes with the same color to increase the number of code words. The projected color sequence was generated by using a De Bruijn sequence to ensure the uniqueness of the codewords within the projected sequence [47]. For the proposed sensor, a sequence of nine colored stripes was generated by using a code with a length of two and three colors. Red, green, and blue were chosen to simplify the color identification during the decoding process, and the final pattern is shown in Figure 4.1.a.
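For illustration, the sketch below generates such a sequence with the standard De Bruijn construction: with three colors and a code length of two, every two-stripe color combination appears exactly once in the cyclic sequence of 3² = 9 stripes. The color labels and the resulting ordering are illustrative; the stripe ordering in the fabricated pattern may differ:

```python
def de_bruijn(alphabet, n):
    """Generate a De Bruijn sequence: every length-n word over the alphabet
    appears exactly once in the (cyclic) output."""
    k = len(alphabet)
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return [alphabet[i] for i in sequence]

# Three colors, code length two -> a cyclic sequence of 9 stripes,
# e.g. ['R', 'R', 'G', 'R', 'B', 'G', 'G', 'B', 'B']
stripe_colors = de_bruijn(['R', 'G', 'B'], 2)
```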
Figure 4.1: a) Projection pattern, b) LED ring.

The white light illumination is provided by a ring that consists of four LEDs. The ring was designed to have the same outer diameter as the maximum diameter of the SL sensor, which is 18 mm. The inner diameter of the ring was constrained by the diameter of the lens thread (M12); therefore, it was set to 12.2 mm. Thus, the circuit components were laid out in a relatively tight space of 5.8 mm in the radial direction. The LED was chosen to be the XLAMP-XD16, which has a square shape and dimensions of 1.6 by 1.6 mm². The array thickness was governed by the PCB thickness (1.57 mm) and the LED height (0.85 mm), which resulted in a total thickness of 2.42 mm. The sensor uses an IMX219 camera with a fisheye lens. The camera provided a 5 mm space behind the camera lens; therefore, the array was integrated easily within the sensor without increasing its total width or length. A schematic of the designed LED ring is shown in Figure 4.1.b. The camera-LED assembly is connected to the projector with a transparent glass connector to ensure an unobstructed projection of the pattern on the inspected pipe surface. A picture of the fabricated sensor with annotated key modules is shown in Figure 4.2.

Figure 4.2: Fabricated RGB-D sensor.

4.2.2 Sensor Synchronization

The camera sensor is usually built as an array of photodetectors that are generally aligned in the shape of a rectangular or square matrix. Each photodetector can be modeled as a bucket that is opened at the beginning of the exposure time to be filled with light photons. The number of gathered photons over a specific time (the exposure time) reflects the light intensity sensed by the camera for that particular photodetector (pixel). CMOS sensors can be classified by their pixel readout mechanism into two types: rolling shutter and global shutter sensors. Figure 4.3 shows a rolling shutter sensor that is synchronized with a flashlight pulse. In a rolling shutter sensor, the camera cannot read all pixels at the same time, and the sensor matrix is read sequentially (row by row). Each row starts its exposure at a different time to ensure that all the pixels have the same exposure time.

Figure 4.3: Synchronization with rolling shutter camera.

In other words, even if the shutter speed is increased (shorter exposure time), the speed of acquisition will still be limited by the sensor's readout speed. The effect of this readout method can be recognized as a spatial distortion when imaging fast-moving objects. Another downside of using this type of sensor is that different regions of the image can experience slightly different illumination when using a fast-changing illumination source. Global shutter sensors, on the other hand, can stop the exposure of all pixels at the same time and then read the pixels' values. As a result, global shutter sensors do not suffer from the aforementioned drawbacks of rolling shutter sensors, but this comes at the cost of higher fabrication cost, more complexity, a slower maximum frame rate, higher readout noise, and lower dynamic range. Therefore, rolling shutter sensors provide better performance for low light situations such as the environment inside a pipeline.

To overcome the rolling shutter problem, digital single-lens reflex (DSLR) cameras use mechanical shutters. Mechanical shutters stop the exposure of all pixels simultaneously; therefore, high shutter speeds can be achieved even with slow-reading sensors. Mechanical shutters provide a precise and robust solution for still photography with DSLR cameras, but they are unreliable for continuous video acquisition and high frame rates. Their large size also makes them difficult to fit to small camera sensors.
An alternative solution for dark environments is to modify the existing rolling shutter sensors to provide a global shutter-like capability with careful light source synchronization. The enclosed atmosphere inside a gas pipeline provides a dark environment with no access to ambient light; therefore, the only light source in this environment comes from the artificial illumination in the sensor head. The dark environment simplifies the design of the sensor, reduces the interference from ambient light with the structured light pattern, and provides tight control over the imaged scene illumination. There is no need to block the incoming light toward the camera sensor; instead, a light source is required to provide illumination. Therefore, the dark environment will act as a natural shutter for the camera sensor. As explained in Figure 4.3, the frame exposure starts in a dark environment with no illumination. An intense flash is then triggered in the middle of the frame exposure for a short time, followed by sensor readout (the pulse period must be smaller than the exposure to guarantee even illumination across the frame). The implementation of this solution provides the following advantages: a) it allows synchronized acquisition by alternating between the projector and the white LED, b) it solves the rolling shutter problem and thereby reduces spatial distortion, c) it improves power and thermal efficiency due to the shorter working time of the projection, and d) it reduces motion blur due to the faster effective shutter speeds when the flash power is concentrated in a short time.

The embedded acquisition platform provides the ability to monitor the camera control signal. The signal is raised when the end of the first line in the frame is received from the sensor and falls when receiving the end of the last line in the frame. The exposure time starts after a time 𝑇 from receiving the last line in the previous image; i.e., the signal falling edge serves as an indicator for starting a new frame. The delay time 𝑇 can be calculated as follows:

T = \frac{1}{f_{ps}} - T_{ex},  (4.1)

where f_{ps} is the frame rate and T_{ex} is the sensor exposure time. The camera sensor is started with all initial parameters regarding the exposure time, frames per second, and digital and analog gain. An external function is created to monitor the camera signals and trigger the projector excitation. Figure 4.4.a shows a flow diagram of the recording and triggering algorithm. St is a status flag with values that range from 0 to R, and R is the ratio between the number of projector and LED frames. x_1 and y_1 are the delay times for the projector (Proj.) and LED, respectively, while x_2 and y_2 are the exposure times for the projector and the LED, respectively. Figure 4.4.b shows the timing diagram of the camera acquisition signal and the projector excitation signal produced by the acquisition platform. The triggering circuit follows the falling edge from the camera and alternates between the projector and the white LED illumination. Different pulse widths were used during the experiment to provide precise control over the sensor exposure because the SL projector and the white light source have different illumination powers. The synchronization circuit schematic is shown in Figure 4.4.c. The circuit has two switches that control the power supply to the LED and the SL projector. Both switches are controlled by the main control and acquisition board that is connected to the sensor's main camera. The control board follows the signal from the camera to trigger the synchronization circuit and change the status of the switches.
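A minimal sketch of this alternation logic is shown below: it computes the inter-frame delay from Eq. 4.1 and toggles between the SL projector and the white LED at a projector-to-LED frame ratio R. The function names, the callable hardware hooks, and the timing placement of the pulse are illustrative assumptions, not the embedded implementation used on the acquisition platform:

```python
import time

def frame_delay(fps, exposure_s):
    """Delay T between the end of readout and the start of the next exposure (Eq. 4.1)."""
    return 1.0 / fps - exposure_s

def run_trigger_loop(wait_for_falling_edge, fire_projector, fire_led,
                     fps=25.0, exposure_s=0.005, ratio_R=4, n_frames=100):
    """Alternate illumination: R projector frames for every one LED frame."""
    status = 0  # status flag St in the flow diagram
    for _ in range(n_frames):
        wait_for_falling_edge()                               # end of previous frame readout
        # wait until roughly the middle of the next exposure window
        time.sleep(frame_delay(fps, exposure_s) + exposure_s / 2)
        if status < ratio_R:
            fire_projector()                                  # short SL projector pulse
            status += 1
        else:
            fire_led()                                        # short white LED pulse
            status = 0
```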
Figure 4.4: a) Synchronization algorithm, b) Implemented timing diagram, c) Schematic explaining the synchronization circuit.

Figure 4.5: Concurrent view from inside the inspected pipe, a) LED assisted image, b) Structured light image.

A concurrent view from inside the pipe is shown in Figure 4.5. Figure 4.5.a shows the LED-assisted RGB image, which is used to directly inspect the pipe surface for color changes, cracks, and damage. Figure 4.5.b shows a structured light illuminated image that is used to reconstruct the 3D profile of the inspected pipe surface, including any possible defects. During the experimental trials, the acquisition is performed at a rate of 25 frames per second with a ratio of 1 to 4. In this configuration, the sensor produces a 5 fps LED camera inspection associated with a 20 fps structured light inspection to ensure a complete picture of the surface texture and a dense 3D reconstruction.

4.3 Sensor Calibration

Sensor calibration is necessary to estimate the sensor's internal parameters and achieve an accurate reconstruction of the inspected surface. The calibration parameters are the intrinsic camera parameters, the angle of each projected ring from the projector, and the relative position of the camera to the projector in the 3D world. The camera is calibrated first by using a fisheye camera model [78], followed by calculating the projector intrinsic parameters and the stereo parameters. Camera calibration is performed first because the assistance of a calibrated camera is necessary for projector calibration, since the projector cannot recognize the external world. The following calculations explain the projector calibration process under the assumption of a calibrated camera with a known intrinsic matrix and corrected distortion.

Figure 4.6: a) A diagram of the triangulation process in the SL sensor, b) The fitting process of the projection cone.

The triangulation process in the SL sensor is described in Figure 4.6.a, where we have a camera with an optical center at 𝐶 and a projector with an optical center at 𝑉. The main goal of the triangulation process is to estimate the position of 𝑃, which can be found by calculating the intersection between the projected cone and the camera ray. Each ring projected by the SL projector is assumed to be an acute cone with a main axis direction described by a unit vector 𝑨, a half projection angle θ = θ_T/2, and a vertex 𝑽, as shown in Figure 4.6.b. Each point 𝑃 that belongs to the cone must satisfy

\boldsymbol{A} \cdot \frac{\boldsymbol{P} - \boldsymbol{V}}{|\boldsymbol{P} - \boldsymbol{V}|} = \cos\theta.  (4.2)

𝑷 and 𝑽 are the position vectors of the points 𝑃 and 𝑉. A similar convention is followed throughout the chapter, where the bold form of a point name represents the point's position vector. From Equation 4.2, it can be noticed that surface points 𝑃 are needed to successfully estimate the projected cone parameters. The following subsections present the conventional calibration method and our new automatic calibration method and explain how the 3D points are generated for each method.

4.3.1 Reference Plane Based Calibration (RPBC)

The conventional method of calibrating a structured light sensor depends on using a reference plane with a predefined pattern to estimate the pose of the sensor by solving the Perspective-n-Point problem.
The acquired orientation is utilized to generate 3D points from the projected pattern with ray-plane intersection. Therefore, the data collection process is tedious and needs to be performed in two steps for each collected orientation. In the first step, a flat calibration pattern is imaged with white ambient light to estimate the camera orientation. Circles [37] or ArUco patterns [65] can be used to estimate the camera orientation because they can be identified even when they are partially obstructed (any pattern that can be detected with a partial view can be used for this purpose). The next step is to turn on the SL projector and capture a picture of the SL pattern that is projected on the calibration board while maintaining a fixed orientation of the board during the two steps. The 3D locations of the projected rings are then calculated by intersecting the camera rays with the calibration plane. The two steps are repeated for each acquired orientation to create a set of 3D points (𝑃).

To fit a conic surface to the generated points, the algorithm minimizes the distance D_{Err}. D_{Err} is the shortest (perpendicular) distance from the calibration 3D point 𝑃 to the cone surface, and it is given by

D_{Err} = |\boldsymbol{P} - \boldsymbol{V}| \sin(\theta - \theta_e).  (4.3)

Therefore, the minimization problem is given by

(\tilde{\theta}, \tilde{\boldsymbol{A}}, \tilde{V}) = \operatorname{argmin}\left(\|D_{Err}\|_2^2\right),  (4.4)

where \tilde{\theta}, \tilde{\boldsymbol{A}}, and \tilde{V} are the estimated values of θ, 𝑨, and 𝑉, respectively. A similar convention is followed throughout the chapter, where the tilde indicates the estimated value of a variable. θ_e is the estimated cone half projection angle, and θ is the angle between 𝑽𝑷 and 𝑨, which can be calculated by using Eq. 4.2.
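As an illustration of the fit in Eqs. 4.3 and 4.4, the sketch below estimates the cone half angle, axis, and vertex from a set of 3D calibration points with a generic nonlinear least-squares solver. It is a minimal, hypothetical implementation (the parameterization and initial guess are illustrative), not the exact solver configuration used in this work:

```python
import numpy as np
from scipy.optimize import least_squares

def cone_residuals(params, points):
    """Perpendicular distances D_Err from 3D points to a candidate cone (Eq. 4.3)."""
    theta = params[0]                                   # half projection angle (rad)
    axis = params[1:4] / np.linalg.norm(params[1:4])    # unit axis A
    vertex = params[4:7]                                # vertex V
    pv = points - vertex                                # vectors V -> P
    dist = np.linalg.norm(pv, axis=1)
    # angle between each V->P vector and the cone axis (Eq. 4.2)
    ang = np.arccos(np.clip(pv @ axis / dist, -1.0, 1.0))
    return dist * np.sin(ang - theta)                   # D_Err for every point

def fit_cone(points, theta0=np.deg2rad(45.0)):
    """Fit (theta, A, V) to calibration points by minimizing ||D_Err||^2 (Eq. 4.4)."""
    x0 = np.concatenate(([theta0], [0.0, 0.0, 1.0], [0.0, 0.0, -50.0]))
    sol = least_squares(cone_residuals, x0, args=(points,))
    axis = sol.x[1:4] / np.linalg.norm(sol.x[1:4])
    return sol.x[0], axis, sol.x[4:7]
```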
Figure 4.7: Autocalibration environment.

4.3.2 Automatic Calibration

In this subsection, we develop an automatic calibration algorithm to simplify the calibration process, reduce sensor downtime, and ensure accurate calibration parameters before each inspection process. In this algorithm, a set of frames is collected at different orientations inside the pipe before each scanning process. The collected frames are then used to estimate the projector's intrinsic parameters and the stereo parameters between the projector and the camera. The calibration frames can be collected from inside a calibration tube with a known diameter or from a service pipe that is known to be defect-free. Theoretically, any cylindrical pipe with prior knowledge of its diameter would work, without considering associated measurement uncertainties.

The calibration environment is described in Figure 4.7, where a structured light sensor is enclosed by a cylinder with a radius r_Cyl along an arbitrary axis 𝑨_Cyl. In this environment, without loss of generality, the camera is located at the origin (𝐶 = (0, 0, 0)) of the coordinate system, and the camera is pointing along the z-axis. The main goal here is to generate a set of 3D points to serve as an input to the calibration process; therefore, direct images with structured light patterns are acquired from inside a pipe with a known diameter. The structured light pattern edges are segmented, decoded, and then converted to homogeneous coordinates to create a set of camera rays with known orientations. From Figure 4.7, we can notice the following:

• A set of camera rays (𝒘) that originate from the origin of the coordinate system (𝐶) toward the cylinder walls.
• A projected cone that originates from an arbitrary point (𝑉) behind the origin.
• An infinite bounding cylinder with arbitrary orientation (𝑨_Cyl) and known radius (r_Cyl).
• Intersection points (𝑃) between the camera rays, the cone, and the bounding cylindrical surface.

The camera rays intersect with both the projected cone from the projector module and the surface of the bounding cylinder; therefore, the intersection points belong to both the cylinder and the cone surfaces. The intersection point between the camera ray and the cone surface can be calculated by substituting the ray equation into the cone equation. An infinite cylinder 𝑫_arb with an arbitrary orientation can be described as a cylinder 𝑫 that is rotated by a rotation matrix 𝑹 and shifted by a translation vector 𝑻:

D_{arb} = R D + T, \quad R = R_x R_y R_z,  (4.5)

R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\phi_x & -\sin\phi_x \\ 0 & \sin\phi_x & \cos\phi_x \end{pmatrix}, \quad
R_y = \begin{pmatrix} \cos\phi_y & 0 & \sin\phi_y \\ 0 & 1 & 0 \\ -\sin\phi_y & 0 & \cos\phi_y \end{pmatrix}, \quad
R_z = \begin{pmatrix} \cos\phi_z & -\sin\phi_z & 0 \\ \sin\phi_z & \cos\phi_z & 0 \\ 0 & 0 & 1 \end{pmatrix}.

𝑹 is a 3 × 3 orthogonal matrix and 𝑻 is a 3 × 1 column vector. R_x, R_y, and R_z are 3 × 3 rotation matrices describing the rotation around x, y, and z, respectively, and φ_x, φ_y, φ_z are the rotation angles around x, y, and z, respectively. Therefore, 𝑫 can be described by

D = R^T (D_{arb} - T),  (4.6)

and any point on the surface of 𝑫 satisfies

\sqrt{D_x^2 + D_y^2} = r_{Cyl},  (4.7)

where D_x and D_y are the x and y components of 𝑫. Since an infinite cylindrical surface is assumed, there is no need to estimate the rotation around the z-axis or the shift along the z-direction. Therefore, the number of parameters is reduced from six to four, and the algorithm only needs to estimate φ_x, φ_y, T_x, and T_y.

To estimate the calibration parameters, we followed two approaches. The first approach is a direct approach where the algorithm minimizes the difference between the calibration pipe radius r_Cyl and the estimated radius \tilde{r}_{Cyl} obtained from the ray-cone intersection points. The optimization problem is given by

(\tilde{\theta}, \tilde{\boldsymbol{A}}, \tilde{V}, \tilde{\phi}_x, \tilde{\phi}_y, \tilde{T}_x, \tilde{T}_y) = \operatorname{argmin}\left(\|r_{Cyl} - \tilde{r}_{Cyl}\|_2^2\right).  (4.8)

The ray-cone intersection process is explained in Appendix A.2, and the value of \tilde{r}_{Cyl} is calculated by using Equation 4.7. This cost function provides a fast method for quickly calculating an initial estimate of the calibration parameters. Still, it cannot offer a unique solution because it is not aware of the error in estimating the z-coordinates and is only affected by the error in the radius of the estimated cylinder. Simulations showed that this method provides a fast estimate of the initial point if proper constraints on 𝑉 are used. Therefore, the method can accelerate the solution by calculating a point close to the minimum to serve as an input for the second approach.
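To make the direct cost of Eq. 4.8 concrete, the sketch below evaluates the radius residual for one frame: candidate pose parameters (φ_x, φ_y, T_x, T_y) map the ray-cone intersection points into the cylinder frame (Eqs. 4.5-4.7), and the deviation from the known pipe radius is returned. The ray-cone intersection itself is assumed to be available as a separate helper, and all names here are illustrative:

```python
import numpy as np

def rotation_xy(phi_x, phi_y):
    """R = Rx @ Ry for the reduced four-parameter cylinder pose (phi_z is not needed)."""
    cx, sx, cy, sy = np.cos(phi_x), np.sin(phi_x), np.cos(phi_y), np.sin(phi_y)
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    return rx @ ry

def radius_residuals(points_cone, phi_x, phi_y, t_x, t_y, r_cyl):
    """Eq. 4.8 residuals: r_Cyl minus the radius of the points in the cylinder frame.

    points_cone : (N, 3) ray-cone intersection points in the camera frame
    """
    r_mat = rotation_xy(phi_x, phi_y)
    t_vec = np.array([t_x, t_y, 0.0])
    d = (points_cone - t_vec) @ r_mat        # D = R^T (D_arb - T), Eqs. 4.5-4.6
    r_est = np.hypot(d[:, 0], d[:, 1])       # estimated radius, Eq. 4.7
    return r_cyl - r_est
```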
In the second approach, the algorithm follows the optical path Camera-Cylinder-Cone-Camera. The image points (D_im) are converted to homogeneous coordinates first, and the ray-cylinder intersection algorithm (RCyI) is used to find the intersection points (D_RC) between the camera rays and the cylinder. A detailed description of the ray-cylinder intersection process is given in Appendix A.1. The cylindrical surface rays passing through the generated 3D points are then used to calculate the intersection points (D_CC) between the cylinder and the projected cone. The cylinder surface rays (Ray_Cyl) are described by

Ray_{Cyl} = D_{RC} + s A_{Cyl},  (4.9)

where A_Cyl is a unit vector describing the direction of the cylinder main axis and s is an arbitrary scalar. A detailed description of the ray-cone intersection (RCoI) process is given in Appendix A.1. The estimated camera points (D_C) are then created by projecting D_CC from 3D space to image coordinates with a pinhole camera model (PHM). With both the input data and the estimated data in the camera image domain, the minimization problem can be described by

(\tilde{\theta}, \tilde{\boldsymbol{A}}, \tilde{V}, \tilde{\phi}_x, \tilde{\phi}_y, \tilde{T}_x, \tilde{T}_y) = \operatorname{argmin}\left(\left\| \|D_{im}\|_2 - \|D_C\|_2 \right\|_2^2\right).  (4.10)

The function minimizes the difference in the distance from the image center because it is directly related to the difference in the projection angles between the projected and estimated cones (\theta = \arctan(\|D_{im}\|_2 / f_c)). A detailed description of the automatic calibration algorithm with a single ring projection is given in Algorithm 1. The operation CR in the algorithm refers to Equation 4.9, and NoF refers to the number of calibration frames.

Algorithm 1 Automatic calibration algorithm with a single ring
  (\tilde{\theta}, \tilde{A}, \tilde{V}, \tilde{\phi}_x, \tilde{\phi}_y, \tilde{T}_x, \tilde{T}_y) = argmin(CostFunc)
  procedure CostFunc(\tilde{\theta}, \tilde{A}, \tilde{V}, \tilde{\phi}_x, \tilde{\phi}_y, \tilde{T}_x, \tilde{T}_y)
      for i = 1 to NoF do
          D_RC = RCyI(\tilde{\phi}_x(i), \tilde{\phi}_y(i), \tilde{T}_x(i), \tilde{T}_y(i), D_im)
          Ray_Cyl = CR(D_RC, \tilde{\phi}_x(i), \tilde{\phi}_y(i))
          D_CC = RCoI(Ray_Cyl, \tilde{\theta}, \tilde{A}, \tilde{V})
          D_C = PHM(D_CC)
          Err(i) = ||D_C||_2 - ||D_im||_2
      end for
      return ||Err||_2^2
  end procedure

A simulation environment was created to test the proposed autocalibration algorithm with specifications similar to the environment described in Figure 4.7. In the simulation, a set of 3D points (D) is created from the intersection of the projected cone with a cylindrical pipe at different orientations. The 3D points are then converted to the image domain by using a pinhole camera model that is given by

S \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K [r|t] D
  = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
    \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{pmatrix}
    \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}.  (4.11)

x, y, z are the real-world coordinates of the point, u and v are the image coordinates of the point, S is a scale factor, K is the camera intrinsic matrix, r is a rotation matrix, and t is a translation vector. f_x and f_y are the horizontal and vertical focal lengths, c_x and c_y are the coordinates of the image principal point in pixels, r_{i,j} are the elements of the rotation matrix, and t_i are the elements of the translation vector.
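A minimal sketch of the pinhole projection of Eq. 4.11 is given below, including the rounding to whole pixels used in the simulation to mimic a finite-resolution sensor; the focal length and principal point values are illustrative placeholders:

```python
import numpy as np

def pinhole_project(points_xyz, fx=470.0, fy=470.0, cx=832.0, cy=616.0,
                    r_mat=np.eye(3), t_vec=np.zeros(3), round_to_pixel=True):
    """Project 3D points to image coordinates with the pinhole model of Eq. 4.11."""
    k_mat = np.array([[fx, 0.0, cx],
                      [0.0, fy, cy],
                      [0.0, 0.0, 1.0]])
    cam = points_xyz @ r_mat.T + t_vec       # apply extrinsics [r|t]
    uvw = cam @ k_mat.T                      # apply the intrinsic matrix K
    uv = uvw[:, :2] / uvw[:, 2:3]            # divide by the scale factor S
    return np.rint(uv) if round_to_pixel else uv
```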
The simulated model follows a similar specification to the sensor that is used in the experimental validation; therefore, the simulated camera sensor was set to have a width of 1664 and a height of 1232 pixels. The simulated camera has a focal length of 470 pixels, which results in a diagonal field of view of 131 degrees. The pinhole model is applied to the data to calculate the image coordinates, and the calculated values are then rounded to the nearest integer pixel value to simulate the effect of the limited number of pixels in a digital camera sensor (assuming a worst-case scenario of no sub-pixel interpolation). Figure 4.8.a shows the simulation environment with a camera at the origin, a projector with the parameters of Ring 1 in Table 4.1, and 3D points from the intersection of the projected cone with a cylinder with the parameters of Trial 1 in Table 4.2. The 3D data are projected to the image plane with the pinhole model, and the resulting 2D image is shown in Figure 4.8.b. The process is repeated five times with different projection angles to produce a simulation of a multi-color multiring projector with five rings, as shown in Figure 4.8.c. The simulated structured light sensor parameters are given in Table 4.1, which represents realistic parameters that were collected from reference plane based calibration of a similar structured light sensor. The sensor is simulated with eight different orientations and positions inside the pipe, given in Table 4.2.

Figure 4.8: a) 3D points from the intersection of the projected cone with the cylinder, b) Imaging of a single projected ring, c) Imaging of multicolor multiring pattern, d) Imaging of noisy multicolor multiring pattern.

Table 4.1: Sensor parameters for the simulation environment.

#   θ_T (°)   A_x      A_y     V_x (mm)   V_y (mm)   V_z (mm)
1   37.893    -0.016   0.015   1.106      -1.211     -56.503
2   42.382    -0.016   0.014   1.283      -0.940     -56.490
3   47.473    -0.017   0.014   1.436      -1.065     -55.994
4   52.829    -0.013   0.011   0.785      0.339      -54.579
5   57.745    -0.015   0.011   0.986      0.103      -54.244

Table 4.2: Simulation parameters.

Trial #   φ_x (°)   φ_y (°)   T_x (mm)   T_y (mm)
1         11.46     11.46     12.7       12.7
2         -11.46    -11.46    -12.7      -12.7
3         11.46     -11.46    12.7       -12.7
4         -11.46    11.46     -12.7      12.7
5         0         13.75     0          15.24
6         0         -13.75    0          -15.24
7         13.75     0         15.24      0
8         -13.75    0         -15.24     0

The data from the simulation are fed to the autocalibration algorithm, and each projected ring is fitted separately. In this scheme, all the parameters of the cone and of the sensor orientation inside the pipe are optimized at the same time. The results of the calibration with eight frames are shown in Table 4.3. Comparing the estimated parameters in Table 4.3 to the simulated parameters in Table 4.1 indicates a convergence of the algorithm toward the actual sensor parameters. The values of the estimated projection angles are within 0.019 degrees (σ = 0.01 degrees) of the projected angles, and the position of the cone vertex was estimated within 0.125 mm (σ = 0.04 mm) of the actual parameters.

Table 4.3: Calibration parameters from the autocalibration algorithm.

#   θ_T (°)   A_x      A_y      V_x (mm)   V_y (mm)   V_z (mm)
1   37.891    -0.014   0.0120   1.103      -1.229     -56.554
2   42.363    -0.015   0.0110   1.297      -0.939     -56.615
3   47.459    -0.014   0.0110   1.433      -1.042     -56.064
4   52.811    -0.014   0.0120   0.787      0.3290     -54.65
5   57.748    -0.014   0.0110   0.982      0.0920     -54.236

To further improve the performance of the algorithm, data from all the rings are optimized at the same time. While optimizing all the rings together increases the dimensionality of the problem, it also adds more constraints since each projected ring has a different and unique intersection with the calibration pipe. A detailed description of the autocalibration with multiple rings is given in Algorithm 2, where NoR refers to the number of calibrated rings.
Algorithm 2 Automatic calibration algorithm with multiple rings
  (\tilde{\theta}, \tilde{A}, \tilde{V}, \tilde{\phi}_x, \tilde{\phi}_y, \tilde{T}_x, \tilde{T}_y) = argmin(CostFunc)
  procedure CostFunc(\tilde{\theta}, \tilde{A}, \tilde{V}, \tilde{\phi}_x, \tilde{\phi}_y, \tilde{T}_x, \tilde{T}_y)
      for i = 1 to NoF do
          for ii = 1 to NoR do
              D_RC = RCyI(\tilde{\phi}_x(i), \tilde{\phi}_y(i), \tilde{T}_x(i), \tilde{T}_y(i), D_im)
              Ray_Cyl = CR(D_RC, \tilde{\phi}_x(i), \tilde{\phi}_y(i))
              D_CC = RCoI(Ray_Cyl, \tilde{\theta}(ii), \tilde{A}(ii), \tilde{V}(ii))
              D_C = PHM(D_CC)
              Err(i, ii) = ||D_C||_2 - ||D_im||_2
          end for
      end for
      return ||Err||_2^2
  end procedure

Figure 4.9: a) Calibration error with no projection noise, b) Calibration error with noisy projection.

A comparison of the performance of the single ring and multiple ring approaches is shown in Figure 4.9.a, which shows the RMSE value of the calibration error against the number of calibration frames. The comparison shows that the multiple ring approach requires fewer frames to converge toward the actual calibration parameters.

To test the robustness of the algorithm, the calibration process was repeated in a noisy environment. The intensity across a light stripe projected on a Lambertian surface follows a Gaussian model [79]; therefore, additive white Gaussian noise with zero mean and a 0.325 degree standard deviation was added to the projection angle of each projected cone. A simulated image captured with the added noise is shown in Figure 4.8.d, where we notice the noisy nature of the extracted stripes. The results of the calibration process with 16 noisy calibration frames are shown in Table 4.4. The table indicates a convergence of the algorithm toward the actual sensor parameters in Table 4.1. The values of the estimated projection angles are within 0.123 degrees (σ = 0.0288 degrees) of the projected angles, and the position of the cone vertex was estimated within 0.47 mm (σ = 0.0390 mm) of the actual parameters, with an error RMS value of 4.15 pixels. A comparison of the performance of the single ring and multiple ring approaches is shown in Figure 4.9.b. The multiple ring approach continues to have faster convergence when compared to the single ring approach due to the increased number of constraints. The results also indicate a slower convergence for both algorithms when compared to the noiseless case, which indicates the need for a larger number of calibration frames when the noise in the system is increased.

Table 4.4: Calibration parameters from the autocalibration algorithm with noisy projection.

#   θ_T (°)    A_x       A_y      V_x (mm)   V_y (mm)   V_z (mm)
1   37.8370    -0.0150   0.0120   1.1740     -1.2810    -56.9460
2   42.3040    -0.0150   0.0120   1.3720     -1.0220    -56.9500
3   47.3690    -0.0150   0.0120   1.5430     -1.1700    -56.4660
4   52.7080    -0.0150   0.0120   0.9120     0.2240     -55.0190
5   57.6220    -0.0150   0.0130   1.1300     -0.1020    -54.6150

4.3.3 Correction of Artifacts

The calibration models in both RPBC and the autocalibration approach assume that the projected pattern is represented by a smooth cone resulting from a circle or an ellipse in the printed pattern. However, there are some cases where the projected pattern does not completely follow the conic model because there can be artifacts from the printing process, as shown in Figure 4.10.a. Therefore, in this section, a second calibration layer is developed to account for the artifacts in the projected pattern.
This step depends on the first autocalibration layer (Algorithm 2) to estimate the main cone parameters and the orientation of the sensor inside the calibration cylinder. φ_x, φ_y, T_x, and T_y for each calibration frame are estimated during the calculation of the conic parameters (Algorithm 2); therefore, a new set of 3D points (P_ray) is created with the ray-cylinder intersection process. The new 3D set is then used to calculate a specific angle for each projected ray on the projected ring. Therefore, the parameters can be estimated by minimizing the error between the intersection of the camera ray with the estimated projection cone (P_est) and the generated points from the autocalibration process (D_RC).

Figure 4.10: a) Experimental projected pattern with highlighted artifacts, b) Simulated image of a multicolor multiring pattern with artifacts.

The new layer assumes that each projected ray belongs to an independent cone, and θ_φ and V_φ are calculated for each angle in the projected ring, i.e., each projected cone is divided into 720 projection rays that cover 360 degrees. During the optimization process, all the points are assumed to have the same main axis 𝑨, and only θ_φ and V_φ are optimized. Since all the points on the projected ring are assumed to have the same main cone axis, the calculation of V can be simplified to calculating only the magnitude of the shift s_V along the cone main axis. Therefore, the new cone vertex V_φ is given by

V_\phi = V + s_V A.  (4.12)

This formulation reduces the number of optimized parameters from four to two and provides a more stable solution than estimating all the parameters of the cone. Therefore, the minimization can be described by

(\tilde{\theta}_\phi, \tilde{s}_V) = \operatorname{argmin}\left(\|P_{est} - D_{RC}\|_2^2\right).  (4.13)

A detailed description of the fine-tuning algorithm is given in Algorithm 3.

Algorithm 3 Fine-tuning algorithm
  for i = 1 to 720 do
      (\tilde{\theta}_\phi(i), \tilde{s}_V(i)) = argmin(CostFunc)
  end for
  procedure CostFunc(\tilde{\theta}_\phi(i), \tilde{s}_V(i))
      \tilde{V}_\phi = V + \tilde{s}_V(i) A
      D_CC = RCoI(D_im, \tilde{\theta}_\phi(i), A, \tilde{V}_\phi)
      Err = ||D_RC - D_CC||_2
      return ||Err||_2^2
  end procedure

Table 4.5: Error in estimating the calibration pipe radius in millimeters.

Method              Ring 1   Ring 2   Ring 3   Ring 4   Ring 5
AutoCal. multiple   2.7606   1.9341   1.4421   1.1320   0.9408
Fine-tuning         0.7776   0.5198   0.3823   0.2752   0.2222

To test the performance of the proposed approach, a pattern with random artifacts was simulated. The artifacts were created by adopting the noise pattern from Section 4.3.2 (NP1) after applying a moving average to smooth the noise pattern and scaling it to control the artifact size. The new noise pattern (NP2) is given by

NP2(\phi) = \frac{s_n}{N} \sum_{i=\phi}^{N+\phi-1} NP1(i),  (4.14)

where N is the length of the filter and s_n is a scaling factor. N was set to 10 and s_n was set to 3 to create a pattern with artifacts similar to the pattern in Figure 4.10.a. A sample picture from the simulated images is shown in Figure 4.10.b. The RMSE value of the error in estimating the radius for all the rings was 1.768 mm with the parameters from the autocalibration process. The large error value is related to the fact that the calibration model does not account for the artifacts in the projected ring. The error value was then reduced to 0.4792 mm after applying the fine-tuning algorithm.
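A brief sketch of the artifact pattern in Eq. 4.14, implemented as a scaled moving average of the Gaussian angular noise from the robustness test, is given below; the array shapes and the wrap-around handling are illustrative assumptions:

```python
import numpy as np

def artifact_pattern(np1, filter_len=10, scale=3.0):
    """NP2(phi) = (s_n / N) * sum of NP1 over a length-N window (Eq. 4.14).

    np1 : per-angle noise samples (e.g., 720 values of Gaussian angular noise)
    """
    padded = np.concatenate([np1, np1[:filter_len - 1]])   # wrap around the ring
    window = np.ones(filter_len) / filter_len
    return scale * np.convolve(padded, window, mode='valid')

rng = np.random.default_rng(0)
np1 = rng.normal(0.0, np.deg2rad(0.325), 720)   # angular noise, sigma = 0.325 deg
np2 = artifact_pattern(np1)                      # smoothed, scaled artifact profile
```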
With this final stage, it is worth noting that not all the rings have the same error characteristics. The largest ring (Ring 5) provides better spatial resolution than the smaller rings (Ring 1, for example). These error characteristics are related to the fact that the smaller rings are projected farther away from the sensor when compared to Ring 5 and are captured within a smaller area of the imaging sensor, i.e., a smaller number of pixels is dedicated to the smaller rings. Table 4.5 presents a detailed description of the error for each ring before the artifact correction stage (AutoCal. multiple) and after the artifact correction stage (Fine-tuning).

4.4 Experimental Validation

In this section, the performance of the autocalibration algorithm is tested under different scenarios with experimental data. RPBC calibration is performed first to provide a benchmark. Next, calibration is performed with the proposed autocalibration algorithm using the single and multiple ring approaches. Finally, the fine-tuning process is applied to reduce the effect of the artifacts.

4.4.1 Conventional Calibration

The RPBC calibration process was performed by collecting 21 calibration pairs at different orientations while keeping a calibration board in the field of view of the camera. The calibration board is used to provide a reference to estimate the sensor position in 3D space during the calibration process. An ArUco [80] pattern was used in this experiment to provide the orientation even with a partial or obstructed view of the board pattern. The calibration pairs are used to create the required 3D points for the calibration process by calculating the ray-plane intersection between the camera rays and the calibration board. For each orientation, the data of each projected ring are separated and combined with the data from the other orientations to create a unique 3D point cloud for each projected ring.

The parameters of each projected cone are estimated separately because each projected cone experiences a different amount of refraction through the glass tube, as explained in Figure 4.11. In the 2D schematic, the refraction causes the position of the cone vertex to shift in the negative z-direction, and the shift increases with the decrease of the projection angle. In 3D coordinates, the refraction also affects the x and y coordinates of the cone vertex; therefore, each projected ring is assumed to have a unique, separate center of projection.

Figure 4.11: Effect of refraction through the connecting tube.

The results of the sensor calibration for five rings with the conventional calibration process are shown in Table 4.6. The table shows the parameters of five rings with projection angles that range from 37.8 to 57.7 degrees. The camera is assumed to be at the origin, and the projector is placed approximately 60 mm behind the camera. The table also shows that the values of θ_T increase with the increase of the ring number, which is consistent with the projection pattern design, where ring number 1 refers to the smallest ring in the pattern with the smallest projection angle. The slight shift in the z-component of the vertex (V_z) is related to the refraction experienced by the projected cone, which increases with the decrease of the projection angle. The results in Table 4.6 show an inconsistent change with abrupt increments and decrements in the absolute value of V_z, which is assumed to be decreasing with the increase of the projection angle.
One of the factors that affect the calibration accuracy is the existence of outliers; therefore, random sample consensus (RANSAC) was applied. RANSAC is an iterative method that estimates the model parameters in the presence of outliers by separating them from the inliers with repeated random sub-sampling [81]. The results after applying RANSAC are shown in Table 4.7. The results show a continuous reduction in the absolute value of V_z that is associated with the increase of the projection angle, which is consistent with the effect of refraction.

Table 4.6: Calibration parameters from the RPBC calibration procedure.

#   θ_T (°)   A_x      A_y      V_x (mm)   V_y (mm)   V_z (mm)
1   37.839    -0.045   -0.015   -0.666     1.490      -60.090
2   42.348    -0.047   -0.016   -0.474     1.727      -60.268
3   47.478    -0.047   -0.018   -0.735     2.047      -59.630
4   52.942    -0.049   -0.023   -1.025     2.876      -58.612
5   57.761    -0.051   -0.021   -0.939     2.837      -58.803

Table 4.7: Calibration parameters from the RPBC calibration procedure after applying RANSAC.

#   θ_T (°)    A_x       A_y       V_x (mm)   V_y (mm)   V_z (mm)
1   37.7610    -0.0450   -0.0160   -0.6950    1.6850     -60.3940
2   42.331     -0.0480   -0.0160   -0.4440    1.7530     -60.3240
3   47.462     -0.0480   -0.0170   -0.6180    1.8840     -59.6770
4   52.685     -0.0520   -0.0220   -0.6100    2.6950     -59.2630
5   57.674     -0.0520   -0.0200   -0.7570    2.6730     -58.9690

4.4.2 Autocalibration with Multiple Rings Projection

For the autocalibration algorithm, the experimental data were collected inside a cylindrical calibration pipe with a 6-inch (152.4 mm) diameter. The data collection was performed by placing the sensor at different orientations inside the pipe to provide variations in the tilt and position inside the pipe in both the x and y directions. The data were acquired over a period of 52.9 seconds at a rate of 15 frames per second, which resulted in 809 frames. The data were then downsampled, and only 40 frames were used during the calibration process by selecting a single frame from every 20 acquired frames. A sample set of five calibration images is shown in Figure 4.12. The images were acquired with the sensor tilted up and down, rotated to the right and left, and positioned in the middle, respectively. The rest of the collected images were combinations of these five cases with different degrees of tilt and rotation. The acquired images are segmented to extract the edges of the projected rings and then fed to the autocalibration algorithm.

Figure 4.12: Captured frames from inside a 6-inch PVC pipe with the sensor: a) Tilted down, b) Tilted up, c) Rotated to the right, d) Rotated to the left, e) In the middle of the pipe pointing in the forward direction.

4.4.2.1 Autocalibration with a Single-Ring Projection

The evaluation process starts by testing the performance of the algorithm with a single ring projection, which produces an environment similar to single ring laser scanners. The calibration process is performed separately for each projected ring, as explained in Algorithm 1, and the corresponding calibration results are shown in Table 4.8. The results show that the algorithm converged to slightly different values when compared to the RPBC results in Table 4.7. It can also be observed that the values of V_z are not decreasing with the increase of the projection angle, which is contrary to the expected effect of the refraction of light through the glass tube. The effect of these irregularities will be discussed in the numerical performance evaluation section.
Table 4.8: Calibration parameters of autocalibration with a single ring.

#   θ_T (°)    A_x       A_y       V_x (mm)   V_y (mm)   V_z (mm)
1   39.6480    -0.0630   -0.0010   1.7520     -0.1390    -56.2920
2   44.1930    -0.0590   0.0050    0.2900     -1.9040    -58.0350
3   49.0710    -0.0640   -0.0010   0.9790     -0.5760    -58.2900
4   53.8150    -0.0730   -0.0210   2.1150     3.3480     -59.5100
5   59.0970    -0.0660   -0.0250   0.4840     3.9320     -58.6190

4.4.2.2 Autocalibration with Multiring Projection

This subsection evaluates the performance of the algorithm in calibrating a multiring structured light sensor, where all the rings are calibrated at the same time to provide more geometric constraints. Data from all the rings are provided together to the autocalibration algorithm, and the calibration results are shown in Table 4.9. The results indicate a convergence of the algorithm toward the accurate sensor parameters, where the absolute value of V_z decreases with the increase of the projection angle due to the effect of refraction. There are small differences in the values of the estimated parameters when compared to the RPBC calibration method, and their effect will be discussed in the numerical performance evaluation section.

Table 4.9: Calibration parameters of autocalibration with multiple rings.

#   θ_T (°)    A_x       A_y       V_x (mm)   V_y (mm)   V_z (mm)
1   39.3758    -0.0603   -0.0037   0.3038     0.0257     -58.0744
2   44.2221    -0.0612   -0.0045   0.3210     0.2605     -58.0049
3   49.2467    -0.0628   -0.0053   0.3810     0.4358     -57.5808
4   54.4102    -0.0653   -0.0094   0.3000     0.9246     -57.4966
5   59.5816    -0.0680   -0.0091   0.6183     1.0643     -57.0901

4.4.3 Correction of Artifacts

This subsection experimentally validates the effect of using the fine-tuning process on the experimental data. The generated data from the multiple ring autocalibration process are provided as input. Each projected ring is divided into 720 projection angles, and each angle is calibrated separately. A graphical comparison showing the reconstruction of a single frame before and after applying the fine-tuning process is shown in Figure 4.13. The figure indicates a reduction in the artifacts in the projected rings when using the fine-tuned parameters.

Figure 4.13: Reconstruction of a single calibration frame with parameters from a) Multiring autocalibration, b) Fine-tuning.

4.4.4 Performance Evaluation

To numerically quantify the accuracy of the proposed method, each calibration frame is reconstructed with the calibration parameters, and then cylindrical fitting is used to estimate the pipe rotation (R_est) and translation (T_est). With a known pipe translation and rotation, each reconstructed point can be corrected as follows:

D_c = R_{est}^T (D_o - T_{est}),  (4.15)

where D_o represents the original reconstructed data, and D_c represents the corrected data centered around the origin with the main axis along the z-axis. A top view of the 3D point cloud of five reconstructed rings from a single frame after performing orientation correction is shown in Figure 4.14.a. A similar top view after combining all 40 calibration frames is shown in Figure 4.14.b.

Figure 4.14: Reconstructed rings after orientation correction, a) Single frame reconstruction, b) 40 frames combined reconstruction.
The performance is then evaluated by calculating the error in estimating the radius of the calibration pipe as follows:

r_{est} = \sqrt{D_{cx}^2 + D_{cy}^2},  (4.16)

RMSE = \sqrt{\frac{\sum_{i=1}^{N} (r_{est_i} - r_{Cyl})^2}{N}},  (4.17)

where D_cx and D_cy are the x and y components of D_c, and r_Cyl is the radius of the calibration pipe. A comparison of the error from RPBC, autocalibration with multiple rings (AutoCal. multiple), autocalibration with a single ring (AutoCal. single), and the fine-tuning stage is shown in Figure 4.15. The figure shows a better radius estimation error for the autocalibration algorithm (single and multiple) when compared to the RPBC method. This can be related to multiple error sources in the data collection process for RPBC, including but not limited to errors in the measurement of the pattern dimensions, deformation of the calibration board, and possible movement of the sensor or the calibration board when switching between the white light and the structured light projector. The figure also shows that the multiple ring approach provides better performance than the single ring approach. The increase in performance can be attributed to the increased number of constraints, which provides a better estimation of the sensor orientation inside the pipe.

Figure 4.15: Error in estimating the radius of the calibration pipe in millimeters.

The fine-tuning process also resulted in a further reduction in the RMS value of the error and improved the sensor performance. The error from the fine-tuning stage indicates that the projected rings have an RMS error that ranges from 0.344 to 1.12 mm, depending on the radius of the projected ring in the projected pattern. One anomaly can be noticed in Ring 5, which has a slightly higher error than its neighboring smaller ring (Ring 4). This anomaly can be related to fabrication errors where the slit width between colored stripes is wider for Ring 5 (the black slit between the two red stripes). The larger slit width results in increased ambiguity in calculating the center of the slit when compared to the other rings with narrower slits.

4.5 Conclusion

In this chapter, an innovative SL sensor with sequential RGB-D acquisition for the inspection of small-diameter pipelines is presented. The sensor provides sub-millimeter 3D profiling of the inspected surface in addition to RGB images for the detection of color changes on the surface of the inspected pipe. In addition to the sensor design, the chapter mainly addresses the projector calibration problem by introducing an automatic calibration algorithm to calculate the intrinsic parameters of the projection module and the stereo parameters between the camera and the projector. The cylindrical nature of the inspected pipe is exploited to provide a set of geometric constraints that are used to calculate the sensor parameters. The algorithm was validated by using simulations and experimental data from the structured light sensor, and the results indicated the ability to converge to the accurate sensor parameters at different noise levels. The results also show that the algorithm can be used to calibrate a sensor with a single ring projection, which means that the algorithm can be used to calibrate laser scanners with single ring projections. The chapter also presents an artifact correction approach to account for defects in the projected pattern that can be caused by the pattern fabrication process.
CHAPTER 5
STEREOSCOPIC ENDOSCOPIC SL SENSING FOR 360 DEGREE INSPECTION

5.1 Introduction

Most endoscopic laser and structured light sensors follow the same principle: a conic projection module assembled sequentially with a camera. This configuration provides a simple triangulation system when the triangulation is reduced from 3D to 2D by exploiting the cylindrical symmetry of the system and assuming that the camera and the projector share the same optical axis. More versatile triangulation approaches can always be used by intersecting the camera rays with the projected cone to account for any manufacturing anomalies. On the other hand, this configuration suffers from multiple drawbacks: 1) the shadowing effect from the camera or projector wires or holder, which can result in an incomplete reconstruction of the inspected wall surface [82, 83, 31]; 2) the increased sensor length due to the sequential assembly, which reduces the system's ability to maneuver through bent cylindrical geometries (elbow joints, for example) [84, 85, 86]; 3) attenuation and refraction of the projected pattern when it passes through the glass tube [37]; and 4) reduced sensor rigidity due to the use of glass tubes to connect the camera to the projector [65]. To reduce the effect of wires, [87, 84] place the projection module at the front of the inspection system to reduce the number of connections and then use small-gauge wires to power the laser module. While this approach reduces the effect of wires, it does not eliminate it. Part of the forward field of view is also lost because it is blocked by the projector, which makes it difficult to use the camera for robotic navigation. The laser facing the camera can also cause distortion if it is not properly blocked from the camera. Alternatively, the wires can be coated on the glass tube, where transparent conductive traces (indium tin oxide, for example) are deposited on the surface of the glass tube via sputtering. While this solution reduces the shadowing effect, the design is complicated because sputtering is needed to coat the connections on a cylindrical glass surface. In addition to the slightly increased light absorption through the glass [88], this system continues to suffer from the other drawbacks of the sequential SL configuration. Another solution that compensates for these drawbacks is a side-by-side configuration, where the camera and the projector are placed beside each other, as shown in Figure 5.1.b. The disadvantage of this approach is the instability of the solution when the camera ray is tangent to the projected cone or intersects the cone at only a single point, where the problem becomes ill-posed. The error in these systems depends on the inspection angle, and the solution is not attainable when the camera ray is tangent to the projected cone because of the unavoidable noise in any digital acquisition system. Therefore, side-by-side systems provide damage detection capability but do not provide geometric 3D profiling [89, 90, 30]; they do not perform 3D triangulation and are limited to monitoring changes in the shape of the projected ring to detect the existence of defects. This chapter presents a novel solution to the instability problem in side-by-side systems by adding a second camera oriented perpendicular to the first camera's axis.
This system configuration provides additional geometric constraints to stabilize the solution process and reduces the solution's ambiguity when one of the cameras has a ray tangential to the projected cone; the orientation of the second camera guarantees that, at any time, at least one camera ray is not tangential to the cone. We also present a reconstruction algorithm that combines the data from both cameras to reconstruct the profile of the inspected surface. The rest of the chapter is organized as follows: Section I presents the sensor design and the derivation of the reconstruction algorithm, Section II presents the calibration process, and the conclusion is presented in Section III.

5.2 SL Sensor Design

Endoscopic structured light and laser sensors assume a conic projection of the SL or laser rings that illuminates the inspected surface and is imaged by the sensor camera. The triangulation in the sequential and side-by-side sensor configurations is shown in Figure 5.1.a and Figure 5.1.b.

Figure 5.1: a) Schematic of the triangulation in an endoscopic SL sensor with sequential configuration, b) Schematic of the triangulation in an endoscopic SL sensor with side-by-side configuration. P and C are the optical centers of the projector and camera, respectively.

The inspected surface dimensions are found by calculating the intersection points between the camera rays and the projected cones. Every detected ring point in the camera image is associated with a ray \mathbf{w} that intersects the projected cone. For the general case, the position vector \mathbf{P}_x of the intersection point P_x is described by

\mathbf{P}_x = \mathbf{C} + s\mathbf{w}.  (5.1)

C is the camera optical center, and \mathbf{C} is its position vector. Normal and bold fonts are used to represent a point and its position vector, respectively, throughout the chapter. The projected cone is described by

\frac{(\mathbf{P}_x - \mathbf{P}) \cdot \mathbf{A}}{|\mathbf{P}_x - \mathbf{P}|} = \cos\theta,  (5.2)

where \mathbf{A} is the unit vector along the cone axis and \theta is the projection half-angle. By combining Eq. 5.1 and Eq. 5.2 and simplifying, the parameters required to solve for s are

a = (\mathbf{w} \cdot \mathbf{A})^2 - \cos^2\theta,  (5.3)
b = 2\left[(\mathbf{w} \cdot \mathbf{A})(\mathbf{PC} \cdot \mathbf{A}) - (\mathbf{w} \cdot \mathbf{PC})\cos^2\theta\right],  (5.4)
c = (\mathbf{PC} \cdot \mathbf{A})^2 - (\mathbf{PC} \cdot \mathbf{PC})\cos^2\theta,  (5.5)

where \mathbf{PC} = \mathbf{C} - \mathbf{P} is the vector from the cone vertex to the camera center. The value of s can then be calculated as follows:

s = \frac{-b \mp \sqrt{b^2 - 4ac}}{2a}.  (5.6)

The calculated value of s is substituted in Eq. 5.1 to obtain the position of the intersection point. The intersection can be classified according to the value of the discriminant

\delta = b^2 - 4ac.  (5.7)

• If \delta < 0, the ray has no intersection with the cone.
• If \delta = 0, the ray intersects the cone at only a single point.
• If \delta > 0, the ray intersects the cone at two points.

A lower value of \delta indicates a lower possibility for the camera ray to intersect the cone.

Figure 5.2: The intersection of the camera ray with the projected cone, a) Concentric camera and projector, b) Side-by-side camera and projector.

Figure 5.2 explains the intersection of the camera rays with the projected cones. C and P are the optical centers of the camera and projector, respectively. For the sequential sensor, where the camera is enclosed by the projected cone, the camera ray intersects the conic surface twice, as shown in Figure 5.2.a. One intersection point is in front of the camera (P_2), and the other is behind the camera (P_1). The only valid intersection is the point in front of the camera (P_2), since the camera is looking only in the forward direction.
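As a concrete illustration of the ray–cone intersection above (Eqs. 5.1–5.7), the following Python sketch solves the quadratic for s and classifies the intersection by the discriminant δ. The variable names mirror the equations; the sample geometry at the bottom is hypothetical and not taken from the fabricated sensor.

```python
import numpy as np

def intersect_ray_cone(C, w, P, A, theta):
    """Intersect the ray X(s) = C + s*w with a cone (vertex P, unit axis A,
    half-angle theta). Returns (delta, list of intersection points with s > 0)."""
    w = w / np.linalg.norm(w)
    A = A / np.linalg.norm(A)
    PC = C - P                               # vector from cone vertex to camera center
    cos2 = np.cos(theta) ** 2
    a = np.dot(w, A) ** 2 - cos2                                      # Eq. 5.3
    b = 2 * (np.dot(w, A) * np.dot(PC, A) - np.dot(w, PC) * cos2)     # Eq. 5.4
    c = np.dot(PC, A) ** 2 - np.dot(PC, PC) * cos2                    # Eq. 5.5
    delta = b ** 2 - 4 * a * c                                        # Eq. 5.7
    if delta < 0:
        return delta, []                     # no intersection with the cone
    roots = [(-b - np.sqrt(delta)) / (2 * a), (-b + np.sqrt(delta)) / (2 * a)]
    points = [C + s * w for s in roots if s > 0]   # keep roots in front of the camera
    return delta, points

# Hypothetical side-by-side geometry: camera offset 15 mm from the projector vertex.
C = np.array([15.0, 0.0, 0.0])
P = np.array([0.0, 0.0, 0.0])
A = np.array([0.0, 0.0, 1.0])
delta, pts = intersect_ray_cone(C, np.array([0.3, 0.2, 1.0]), P, A, np.deg2rad(40))
print(delta, pts)
```

As the text notes, a near-zero δ marks the unstable tangential case, which is why points with small δ are later discarded before merging the two cameras' reconstructions.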
For the side-by-side configuration, a similar procedure can be followed to find the intersection points when the intersecting ray is not tangential to the cone. The main difference from the sequential sensor is that both camera-ray intersections are in front of the camera, as shown in Figure 5.2.b. The only valid solution is the point farther from the camera (P_2), because P_1 is a virtual point that cannot be seen by the camera.

Figure 5.3: The intersection of the camera ray with the projected cone, a) Camera ray in the same plane as the line connecting the camera and the projector and intersecting at two points, b) Camera ray tangent to the projected cone and intersecting it at a single point.

Aside from the general solution, there are two special cases:

• Case 1: The camera ray is in the same plane as the vector connecting the camera and the projector, as shown in Figure 5.3.a. This case is similar to the triangulation of the sequential sensors shown in Figure 5.1.a. In this case the triangulation is simplified from 3D to 2D, and the intersection can be calculated with simple trigonometric formulations, which for each angle \phi are

\tan\theta_C = \rho / Z,   \tan\theta_P = \rho / (d + Z),  (5.8)
Z = d \tan(\theta_P) / (\tan(\theta_C) - \tan(\theta_P)),   \rho = Z \tan(\theta_C),  (5.9)

where \theta_P is the half projection angle (\theta_P = \theta/2), \theta_C is the angle of the camera ray, and d is the distance between the projector and the camera. This case is valid at only two points, which correspond to the two maxima of \delta; therefore, it is difficult to generalize these formulations to the other points on the cone.

• Case 2: The camera ray is tangential to the cone and intersects it at a single point, as shown in Figure 5.3.b. Theoretically, this case is only valid when \delta is equal to zero, which corresponds to two lines on the projected cone. In practice, noise causes deviations in the direction of the camera ray and creates ambiguity in differentiating between the intersection and no-intersection cases in the vicinity of these points. The reconstruction problem therefore becomes ill-posed, where any slight change in the direction of the camera ray can lead to a significant shift in the position of the intersection point. To account for this ambiguity, the intersection threshold is increased from zero to \Delta, a scalar whose value depends on the noise level. The result is a partial reconstruction of the projected cone surface, where only the parts of the surface with \delta values larger than \Delta are reconstructable.

In this chapter, we use an additional camera to create a plane that is perpendicular to the original camera-projector plane. A CAD model of the proposed sensor is shown in Figure 5.4.a. With the proposed stereo configuration, the sensor guarantees that at least one camera ray is not tangential to the projected cone, which enables full reconstruction of the cone surface.

Figure 5.4: a) CAD model of the proposed sensor, b) Schematic of the simulated sensor.

A simulation environment was created to validate the proposed approach with a single projector and two cameras in the orientation shown in Figure 5.4.b. C1 and C2 represent the first and second cameras, and P represents the projection module. The positions and orientations of the projector and the cameras are given in Table 5.1.
The projected cone is intersected with a plane at z = 3 inches to create a 3D circle with a radius of 1.5 inches, as shown in Figure 5.5.a. The 3D circle is then projected onto the imaging planes of the two cameras, and the generated images are shown in Figure 5.5.b and Figure 5.5.c. In the backward model (the reconstruction algorithm), the intersection between each camera ray and the projected cone is calculated to project the points back to their positions in 3D space. The direct intersections of the camera rays from the two cameras with the cone surface are shown in Figure 5.6.a and Figure 5.6.b.

Table 5.1: Positions and orientations of the projector and cameras in the simulation environment.
      Position       Orientation
C1    (0.5, 0, 0)    (0, 0, 1)
C2    (0, 0.5, 0)    (0, 0, 1)
P     (0, 0, 0)      (0, 0, 1)

Figure 5.5: Simulation results, a) 3D circle from the intersection of the projected cone with a plane at z = 3, b) Image of the 3D circle from the first camera (C1), c) Image of the 3D circle from the second camera (C2).

Figure 5.6: a) Reconstruction from CAM1 data, b) Reconstruction from CAM2 data, c) Merged reconstruction from the two cameras, d) Reconstruction from noisy CAM1 data, e) Reconstruction from noisy CAM2 data, f) Merged reconstruction from the noisy data, g) Delta of reconstruction from CAM1, h) Delta of reconstruction from CAM2, i) Merged reconstruction from the noisy data with delta = 0.04.

The figures show errors in the reconstruction at the edges where the camera rays are tangent to the cone, as expected. To compensate for this, data with small values of δ are removed from the point cloud, and the data from both cameras are then merged, as shown in Figure 5.6.c. To test the effect of error in locating the edge of the projected ring, random noise was added to the radius of the data in the camera images. The reconstruction with noisy data is shown in Figure 5.6.d and Figure 5.6.e. It can be noticed that the range of non-reconstructable angles has increased due to the effect of noise. The surface of the projected circle remains reconstructable by merging the results, as shown in Figure 5.6.f. To explain the merging criterion, the delta values of the reconstructions from Camera 1 and Camera 2 are shown in Figure 5.6.g and Figure 5.6.h, respectively. The volatile regions where the reconstruction fails are associated with low values of δ. As mentioned in the ray–cone intersection discussion, δ equal to zero represents the case where a tangent ray intersects the cone at a single point. Therefore, a threshold was set to exclude all the volatile regions. The merged result with Δ = 0.04 is shown in Figure 5.6.i.

To validate the proposed approach experimentally, a demonstration prototype was fabricated. Front and side views of the fabricated sensor are shown in Figure 5.7.a and 5.7.b. The cameras use fisheye lenses to provide a maximum field of view (170 degrees), while the projector uses a slightly narrower field of view (120 degrees) to guarantee that the projected rings are within the field of view of the cameras. The sensor side length (H) is 33.2 mm, which results in a diameter (D) of 46.95 mm. The proposed scheme results in a sensor with 2.6 times the diameter and 0.46 times the length of a similar sequential sensor that uses the same diameters for the cameras and the projector [65]. Although the new sensor has a larger diameter, the shorter length allows for better maneuverability in elbow joints.
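The merging step described above can be sketched in a few lines of Python: points whose discriminant δ falls below the threshold are dropped from each camera's point cloud before the clouds are concatenated. The function name is illustrative; the threshold here is expressed as a fraction of the maximum δ (as done later in the calibration), although an absolute cutoff such as Δ = 0.04 works the same way.

```python
import numpy as np

def merge_reconstructions(points1, delta1, points2, delta2, rel_threshold=0.04):
    """Merge two single-camera reconstructions, dropping unstable points.

    points* : (N, 3) reconstructed 3D points from each camera.
    delta*  : (N,) discriminant values (Eq. 5.7) associated with each point.
    rel_threshold : fraction of the maximum delta used as the cutoff.
    """
    keep1 = points1[delta1 > rel_threshold * delta1.max()]   # stable points, camera 1
    keep2 = points2[delta2 > rel_threshold * delta2.max()]   # stable points, camera 2
    return np.vstack((keep1, keep2))                         # combined 360-degree cloud
```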
The design was further modified to exploit the nature of the camera sensor and guarantee that the projected ring is always within the field of view of the camera; therefore, the horizontal line of each camera was oriented so that it passes through the center of projection. A picture of the sensor with the rotated cameras is shown in Figure 5.7.c.

Figure 5.7: Sensor fabrication, a) Front view, b) Side view, c) Front view of the sensor with the rotated cameras.

A sample stereo pair from the sensor inside a 4-inch pipe is shown in Figure 5.8.a. The figure shows stereo images with structured light patterns in the field of view of the two cameras and no shadowing from the camera connection wires. The images show that the Camera 2 image is rotated by approximately -90 degrees to provide the required reconstruction condition. Another stereo pair from the sensor inside a 6-inch PVC pipe is shown in Figure 5.8.b. The figure shows that the SL pattern is within the field of view of the cameras but is shifted more toward the center of the field of view due to the increased pipe size.

Figure 5.8: Raw stereo sensor images from, a) 3-inch PVC pipe, b) 6-inch PVC pipe.

5.3 Sensor Calibration

The calibration parameters are needed for a successful triangulation of the projected rings in 3D space. The task is divided into three sub-tasks: camera intrinsic parameters, camera stereo parameters, and projector intrinsic and extrinsic parameters.

• Camera calibration: camera calibration is a well-studied problem that has been covered extensively in the literature for calculating the internal parameters of the camera sensor [91]. The calibration mainly involves calculating the camera intrinsic matrix and the radial and tangential distortion parameters. For the proposed sensor, we follow the fisheye camera calibration model proposed by Scaramuzza et al. [78] to compensate for the strong radial distortion of the fisheye lenses. The internal parameters can be calculated by imaging a set of calibration points on a flat plane to create a set of 3D points. With known image-point and world-point coordinates, the parameters are calculated according to the Scaramuzza model.

• Projector calibration: projector calibration mainly involves the calculation of the projected cone parameters. Each projected ring is assumed to be an acute cone, as explained in Section 5.2. The parameters include the projection angle, the main axis, and the cone vertex. The camera is used to calculate the projector parameters because the projector cannot see the external world; therefore, the orientation of the camera with respect to the projector is calculated at the same time as the projector's internal parameters. To calibrate the projector, we follow a procedure similar to the method in [92]. Each camera-projector pair is treated as a separate sequential sensor with a single camera and projector, and only one camera is used to calibrate the projector at a time. The sensor is inserted inside a cylindrical pipe with a known diameter to create a constrained environment for calibration. The collected data is then used to inversely calculate the projector parameters and the orientation of each frame. Δ was set to 30% of the maximum value of δ to ensure a minimum number of outliers in the calibration process.

• Stereo parameters: in this step a transformation T_{C2}^{C1} is calculated to relate the two cameras in 3D space.
The transformation consists of a rotation matrix that describes the camera rotation and a translation vector that compensates for the translation between the two cameras. Camera 1 is used as the center of the measurement system; therefore, the transformation converts Camera 2 data to the Camera 1 coordinate system. The transformation is described as follows:

D_{C21} = T_{C2}^{C1} D_{C2},  (5.10)

T_{C2}^{C1} = [ R_11  R_12  R_13  T_1
                R_21  R_22  R_23  T_2
                R_31  R_32  R_33  T_3
                0     0     0     1 ],  (5.11)

where D_{C2} is Camera 2 data in the Camera 2 coordinate system, T_{C2}^{C1} is the transformation matrix that relates Camera 2 to the Camera 1 coordinate system, and D_{C21} is Camera 2 data in the Camera 1 coordinate system. R_ij are the components of the rotation matrix, and T_i are the elements of the translation vector. With two calibrated cameras, the perspective-n-point (PnP) model can be used to estimate the positions of the two cameras in 3D space. A calibration board with a known surface pattern is placed in the shared field of view of the two cameras to estimate the positions of the cameras relative to the board. The PnP model produces two transformations (T_{C1}^O, T_{C2}^O) that relate the cameras to the world coordinate system (the calibration board coordinate system). Therefore, the transformation between the two cameras is calculated as

T_{C2}^{C1} = T_O^{C1} T_{C2}^O,  (5.12)

where T_O^{C1} is the inverse transformation of T_{C1}^O. For the experimental calibration, each camera is calibrated separately with the projector to simplify the calibration process. Both calibrations show similar values for the projection angles, which range from 56 to 99 degrees. The calibration of the projector with Camera 1 and Camera 2 is shown in Table 5.2 and Table 5.3, and the orientations of the cameras in the 3D world are shown in Table 5.4. Similar to the design parameters, the calibration results show that the second camera is rotated by 88 degrees and that the cameras are placed at distances of approximately 15 mm from the projector.

Table 5.2: Projector calibration results from Camera 1.
#   V_x (mm)   V_y (mm)   V_z (mm)   θ (°)    A_x      A_y
1   -16.7285   -0.1566    -1.7120    56.458   0.0265   0.0021
2   -16.8924   -0.1934    -1.9813    63.551   0.0307   0.0026
3   -17.3053   -0.2878    -1.3781    70.996   0.0359   0.0044
4   -16.7792   -0.0816    -2.9095    77.069   0.0342   0.0021
5   -17.3869   -0.1896    -2.0114    84.677   0.0406   0.0040
6   -17.5744   -0.1553    -1.7845    92.011   0.0437   0.0039
7   -17.5291   -0.0003    -2.1345    99.069   0.0447   0.0024

Table 5.3: Projector calibration results from Camera 2.
#   V_x (mm)   V_y (mm)   V_z (mm)   θ (°)    A_x       A_y
1   14.7812    0.7304     -2.1246    56.606   -0.0089   -0.0138
2   14.8069    0.7717     -2.3079    63.735   -0.0085   -0.0152
3   14.7247    1.0491     -1.5728    71.288   -0.0061   -0.0184
4   15.1417    0.5250     -3.0219    77.538   -0.0119   -0.0153
5   15.0601    1.0224     -2.1750    85.096   -0.0103   -0.0206
6   15.0846    1.2275     -1.8572    92.655   -0.0089   -0.0240
7   15.2217    1.2706     -2.1507    99.896   -0.0091   -0.0271

Table 5.4: Camera 1 and Camera 2 orientations in the 3D world.
#   O_cx (mm)   O_cy (mm)   O_cz (mm)   φ_x (°)   φ_y (°)   φ_z (°)
1   0           0           0           0         0         0
2   -16.2253    -15.2014    1.4238      0.0752    0.8771    88.1706

A single-frame reconstruction from the sensor is shown in Figure 5.9. The data was acquired from the inspection of a 3-inch PVC pipe, with δ values kept above 40% of the maximum value of δ. Figure 5.9.a shows the reconstruction from Camera 1, Figure 5.9.b shows the reconstructed data from Camera 2, and Figure 5.9.c shows the combined reconstruction from the two cameras.
The results show a cylindrical reconstruction from both cameras, which is expected given the nature of the inspected surface. The combined results show a 360-degree reconstruction of the pipe surface with small misalignments at the edges of the combined frames. This misalignment is mainly related to errors in the estimation of the stereo parameters. Despite the slight misalignment, full coverage of the pipe surface is still achieved, and the misalignment can be corrected during the registration process. To measure the accuracy of the sensor, the reconstructed frame was fitted to a cylindrical surface to estimate the sensor alignment inside the pipe. A top view of the aligned and centered frame is shown in Figure 5.9.d. An inverse transformation is then applied to center the reconstruction data around the origin and align it along the z-direction. With aligned data, the radius of each point is easily calculated as the distance from the point to the origin of the xy-plane. The root mean square error (RMSE) of estimating the pipe radius is then calculated as follows:

RMSE = \sqrt{\frac{\sum_{i=1}^{N} (r_{est_i} - r_{Cal})^2}{N}},  (5.13)

where r_{est_i} is the estimated radius of each point and r_{Cal} is the radius of the 3-inch pipe (38.1 mm). The root mean square error of estimating the radius from all the rings is 0.314 mm, and the individual error for each projected ring is shown in Figure 5.9.e. Each ring has different error characteristics because the error depends on multiple factors, such as the distance from the sensor, the accuracy of finding the edge, and camera distortion at the edges of the frame.

Figure 5.9: Reconstruction of a single frame after calibration, a) Reconstruction of the CAM1 data, b) Reconstruction of CAM2 data, c) Combined reconstruction from the two cameras, d) Combined data after alignment correction, e) RMSE plot for the seven individual rings.

5.4 Conclusion

This chapter presented a new SL sensor that utilizes a stereoscopic configuration to address the drawbacks of current SL and laser endoscopic sensors. The new sensor addresses the shadowing effect from the camera wires and the refraction from the connecting glass, and it improves the sensor rigidity. The sensor addresses the instability of the side-by-side configuration by introducing a geometric constraint from a second camera oriented perpendicular to the first camera's main axis. The chapter introduces a reconstruction algorithm to merge the data from the two cameras and a calibration procedure to estimate the intrinsic parameters of the sensor. The proposed sensor and the reconstruction were validated with simulations and experimental work. The results show improved sensor rigidity, reduced sensor length, and 360-degree coverage with 3D profiling capability.

CHAPTER 6
MOVEMENT ENHANCED SL SENSING WITH MOTION INDUCED PHASE SHIFTING

6.1 Introduction

Movement-enhanced phase measurement profilometry (MPMP) exploits the sensor movement in the vicinity of the inspected object to create a phase shift on the surface of the inspected object that is used to extract its 3D profile. In this scheme, the relative movement of the object in front of the inspection sensor creates the phase shift required for the 3D reconstruction. The system requires only a static pattern, which allows a small projector size because a slide projector can easily be deployed.
It also does not require the inspected object to be static during the inspection process, which makes it suitable for pipe inline inspection, where the sensor is in continuous movement along the pipe. The idea of MPMP was proposed by Yoneyama et al. in 2003, where a static pattern illuminates an object on a conveyor belt and multiple line sensors monitor the light intensity changes to extract the object shape; the object movement thereby substitutes for the phase-shifting device in calculating the phase distribution along the recording line [93]. Wu et al. proposed an MPMP-based algorithm that uses pixel matching to estimate the position of the inspected object and correct it to match the MPMP requirement [93]. Deng et al. proposed the use of telecentric lenses to reduce the error from perspective geometry when inspecting PCBs on fabrication lines [94]. Lu et al. proposed the use of markers to estimate object translation and rotation, with regularization then used to correct the shape of the inspected object [95]. Deng et al. also proposed the use of a regularization algorithm with perspective lenses to relax the constraints imposed by the large size of telecentric lenses [96]. Yuan et al. proposed an MPMP for a rectilinearly moving object based on image correction [97]. Chen et al. proposed an algorithm to perform MPMP for an object moving along a straight line [98]; they used markers to track the object's movement and improve the registration process. Peng et al. proposed the use of a pattern with an orthogonal two-frequency grating to achieve high resolution and improve the phase unwrapping process [99].

In this chapter, we present an MPMP-based endoscopic inspection sensor. The sensor projects a pure sinusoidal pattern to facilitate the projector fabrication, where defocusing is used to generate the sinusoidal pattern. The structured light sensor is synchronized with an LED array to perform sequential 2D-3D acquisition. The main tasks can be summarized as follows:

• Design an MPMP sensor to suit the cylindrical environment inside the inspected pipe
• Miniaturize the SL system to be inserted inside an 8-inch pipe
• Exploit a stereo setup to reduce the effect of perspective geometry on the 3D reconstruction without the need for regularization

Figure 6.1: System diagram.

6.2 Movement-Based Digital Fringe Projection

A phase-based system consists of a camera and a projector that projects a sinusoidal fringe on a scanned object. The 3D measurements of the object are obtained by comparing the phase on the scanned object surface with the phase on a predefined reference plane. Figure 6.1 explains the general system setup. The figure shows a projector (P) and a camera (C) that are separated by a distance B and placed at a distance L from a reference plane. S is a 3D point on the inspected surface, and Z is the distance from S to the reference plane. Under the assumption that Z is much smaller than L, the relation can be written as

Z ≈ (L/B) d = (L/B) K Δφ,  (6.1)

where K is a system constant that relates the fringe displacement d to the phase difference Δφ on the reference plane, and Δφ is the difference between the phase on the reference plane and the phase on the scanned surface. Therefore, accurate and precise phase retrieval for each scanned point is the key to successful 3D scanning.
The intensity of each pixel in the acquired image is given by

I = I' + I'' cos(φ(x, h)),  (6.2)

where I is the intensity of the camera pixel, I' is the ambient light intensity plus the DC component of the projected pattern, I'' represents the modulated signal intensity, and φ is the phase on the scanned object [55]. There are two main methods for φ retrieval [45]. The first method projects a single pattern and applies a Fourier transform to the captured image to separate the signal components. A bandpass filter is then applied to isolate the modulated signal, which is converted back by an inverse Fourier transform [100]. The phase is then evaluated by extracting the imaginary component after taking the logarithm of the inverted data. This method is fast and simple and requires only a single frame for reconstruction, which means real-time acquisition of the data is achievable. Its drawbacks appear in the reconstruction of sharp edges and holes, which cause sharp changes in the carrier frequency. Another issue is the effect of ambient light, which makes it difficult to separate the modulated carrier. The second method projects a sequence of patterns and reconstructs the phase of every pixel separately. The system projects a sequence of N phase-shifted patterns with a phase difference of θ between every two consecutive frames, where θ in most algorithms equals 2π/N or 2π/(N + 1) to provide an averaging effect. From those N phase-shifted patterns, the phase of the imaged points can be estimated. The phase is then subtracted from the phase of a known reference plane to calculate the depth of the scanned object, in a similar way to the Fourier-transform-based method.

In a multi-shot PMP system, the projected pattern is shifted while the scanning platform is fixed. The pattern is shifted digitally to create a precise shift in the phase of the pattern projected on the surface of the inspected object. The intensity of the projected pattern on the scanned object is given by

I_n = I' + I'' cos(φ + 2nπ/N),   n = 0, 1, 2, ..., N − 1,  (6.3)

where I_n is the intensity of the camera pixel for the shifted fringe, I' is the ambient light intensity, and I'' is the modulation signal intensity. One popular scheme projects four frames with a phase shift of 2π/4 between them, represented as follows [101]:

I_1 = I' + I'' cos(φ + 0)    = I' + I'' cos(φ),
I_2 = I' + I'' cos(φ + π/2)  = I' − I'' sin(φ),
I_3 = I' + I'' cos(φ + π)    = I' − I'' cos(φ),
I_4 = I' + I'' cos(φ + 3π/2) = I' + I'' sin(φ).  (6.4)

By assuming constant ambient light during the scanning process, the phase φ measured at each imaged point is given by

φ(x, y) = arctan((I_4 − I_2) / (I_1 − I_3)).  (6.5)

The arctan function produces a wrapped phase that ranges between −π and π (φ = Φ mod 2π). The absolute phase is then calculated by adding multiples of 2π to the wrapped phase in a process called phase unwrapping [102].

In MPMP, the scanning system moves along the pipe's main axis while projecting a static fringe pattern on the pipe's inner wall. This movement shifts the projected pattern by a distance dx, resulting in a phase shift Δφ that depends on the height of the scanned object. In this system, each frame is given by

I_n = I' + I'' cos(φ + θ(dx, h)),  (6.6)

where θ(dx, h) is the phase shift created by the movement of the scanner over a distance dx.
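A minimal Python sketch of the four-step phase retrieval above (Eqs. 6.4–6.5), together with the DC image used later in the chapter, is shown below. The frames are assumed to be registered grayscale arrays of identical size; the synthetic test at the bottom is purely illustrative.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase, DC image, and modulation image from four pi/2-shifted frames.

    I1..I4 : 2D numpy arrays of the registered camera frames.
    Returns (wrapped phase in [-pi, pi], DC image, modulation image).
    """
    # Eq. 6.5: wrapped phase (atan2 keeps the correct quadrant).
    phi = np.arctan2(I4 - I2, I1 - I3)
    # DC component: average of the four frames (cf. Eq. 6.17).
    dc = (I1 + I2 + I3 + I4) / 4.0
    # Modulation (AC) strength, up to a constant factor (cf. Eq. 6.18);
    # useful for masking low-contrast pixels.
    ac = 0.5 * np.sqrt((I4 - I2) ** 2 + (I1 - I3) ** 2)
    return phi, dc, ac

# Hypothetical synthetic test: a linearly increasing phase observed in four frames.
x = np.linspace(0, 8 * np.pi, 256)
phi_true = np.tile(x, (64, 1))
frames = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi, dc, ac = four_step_phase(*frames)
```

The wrapped result still needs the phase-unwrapping step described above before it can be converted to height through Eq. 6.1.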
The height-dependent phase shift results in an oscillatory pattern on the surface of the scanned object that degrades the quality of the reconstruction.

A simulation environment was prepared to provide rapid testing capabilities under different conditions. To simulate the experimental setup, a 3D model of the Stanford bunny is moved in front of the camera and the projector. Three frames are used for the reconstruction of the scene, with −3π/2, zero, and 3π/2 phase shifts, as shown in Figure 6.2.a to Figure 6.2.c.

Figure 6.2: Simulated 3D scanning. a) I1, b) I2, c) I3, d) Phase unwrapping, e) 3D reconstruction.

The phase is calculated by shifting the captured images. I_1 is shifted by eighty-six pixels backward, I_3 is shifted eighty-six pixels forward, and I_2 is cropped to the same size as I_1 and I_3. The phase calculation is performed using Equation 6.5, and the results are shown in Figure 6.2.d. The final 3D rendering after phase unwrapping and applying the external and internal parameters is shown in Figure 6.2.e.

6.3 Oscillation Reduction by Optimization

To simplify the notation, we analyze a geometry with a projector and a camera that are separated by a distance B, both looking toward a flat plane at a distance L. An object with height Z is placed on the plane. In this system, each frame is given by

I_n = I' + I'' cos(φ + θ(s_n, h)),  (6.7)

where θ(s_n, h) is the phase shift created by the movement of the scanner over a distance s_n. The equation can be rewritten with φ and θ separated as

I_n = I' + I'' cos(φ) cos(θ(s_n, h)) − I'' sin(φ) sin(θ(s_n, h)).  (6.8)

In matrix form, this set of equations is written as

M = [ 1   cos(θ(h, s_1))   −sin(θ(h, s_1))
      1   cos(θ(h, s_2))   −sin(θ(h, s_2))
      1   cos(θ(h, s_3))   −sin(θ(h, s_3))
      1   cos(θ(h, s_4))   −sin(θ(h, s_4)) ],  (6.9)

I = [I_1, I_2, I_3, I_4]^T,   V = [I', I'' cos(φ), I'' sin(φ)]^T,  (6.10)

I = M V.  (6.11)

More generally, the phase can be extracted by solving

min ||M V − I||^2,  (6.12)

which in closed form can be written as

V = (M^T M)^{−1} M^T I,  (6.13)

and the required phase is given by

φ = tan^{−1}(V(3)/V(2)).  (6.14)

After performing phase unwrapping to calculate the absolute phase, the phase change is calculated by subtracting the phase of the reference plane. From the phase difference, the height is directly calculated as follows:

Z = (L/B) d = (L/B) K Δφ,  (6.15)

where K is a system constant that relates the fringe displacement to Δφ on the reference plane. In this system, the only known quantities are the pixel intensities I_n, the displacement of each frame, and the system parameters. The main unknowns are the phase shift and the phase on the surface. The matrix is well conditioned, but the solution is unique only within a phase shift of less than one period; therefore, surface continuity is assumed in order to unwrap the phase correctly. Another restriction is that the points must be fitted sequentially, one by one, to guarantee a correct phase estimation, because the estimation of a point is not valid if the previous points fail to be reconstructed. Solving the system of linear equations with a constant phase shift, calculated from the height of the calibration plane and the movement of the system relative to the object, results in a reconstruction with the oscillatory pattern on the surface, as mentioned previously.
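Before the iterative refinement is described, the direct least-squares solve of Eqs. 6.9–6.14 can be sketched in Python as follows; this direct solve is what produces the height estimate that seeds the optimization discussed next. The pixel intensities and assumed phase shifts in the example are illustrative only.

```python
import numpy as np

def direct_phase_solve(intensities, thetas):
    """Direct least-squares phase extraction (Eqs. 6.9-6.14).

    intensities : (N,) measured pixel intensities I_n.
    thetas      : (N,) assumed phase shifts theta(s_n, h) for each frame.
    Returns (I_prime, I_doubleprime, wrapped phase phi).
    """
    # Eq. 6.9: one row [1, cos(theta_n), -sin(theta_n)] per frame.
    M = np.column_stack((np.ones_like(thetas), np.cos(thetas), -np.sin(thetas)))
    V, *_ = np.linalg.lstsq(M, intensities, rcond=None)   # Eq. 6.13 in closed form
    I_p = V[0]                                             # I'
    I_pp = np.hypot(V[1], V[2])                            # I''
    phi = np.arctan2(V[2], V[1])                           # Eq. 6.14
    return I_p, I_pp, phi

# Hypothetical pixel observed in four frames with quarter-period shifts.
theta = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
I = 120 + 40 * np.cos(1.1 + theta)
print(direct_phase_solve(I, theta))   # recovers I' = 120, I'' = 40, phi = 1.1
```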
This estimated height is therefore used as the initial value for our optimization problem. From this height, the phase shift θ is calculated by θ(s, h) = K s/h, and the task reduces to solving min ||θ − K s/h||. The problem is solved iteratively to minimize the error between the calculated and estimated phase shifts and thereby reconstruct the exact object height. We chose to simulate a geometry with two cameras separated by a distance of 0.1, both looking toward a flat plane at a distance of 1.5. Two objects with heights of 0.1 and 0.01 are placed on the plane. Four phase-shifted images are taken by the system; their intensities are shown in Figure 6.3. The height profile of the scanned objects is shown in Figure 6.4.a. A direct reconstruction of this height profile by solving the system of linear equations is shown in Figure 6.4.b. When comparing the actual and reconstructed profiles, we can see a fluctuating pattern on the surface of the reconstructed objects. This pattern is a result of the nonuniform phase shift, which depends on the heights of the objects. This initial height estimate is passed to the iterative optimization algorithm to estimate the correct object height. For this noise-free data, the oscillatory pattern was corrected and an exact reconstruction was achieved, as shown in Figure 6.4.c. To evaluate the robustness of the method, the reconstruction was evaluated with different noise levels added to the input data. Figure 6.5 shows the performance at signal-to-noise ratios of 50, 10, and 5.

Figure 6.3: Four scanning images with π/4 phase shift.

Figure 6.4: a) Simulated height profile, b) Reconstruction with the direct formulation, c) Optimized solution.

Figure 6.5: Different reconstruction results with signal-to-noise ratio equal to: a) 50, b) 10, c) 5.

6.4 Correction by Using Stereo

We opted to use an active stereo approach to avoid these oscillations. The depth information is extracted by triangulating the disparity in the positions of the observed objects in the two camera frames. Active stereo vision also eliminates the need for projector calibration by incorporating an additional stereo camera. This type of system uses the projector to provide texture on the scanned object surface, while the matching process depends only on the disparity between the two cameras; i.e., there is no need to know the projector's intrinsic and extrinsic parameters. The system only requires a stereo camera calibration to calculate the relation between the two stereo cameras and their intrinsic parameters, which is a relatively easy process when compared to projector calibration. This framework also eliminates the requirement for linear phase projection and a uniform scanned surface color. The stereo cameras are used to acquire an initial rough estimate of the imaged surface, and the phase is then used to refine the initial 3D structure of the inspected object. In this framework, we assume a unique wrapped phase within 2π.

6.5 Simulation Study

A simulation environment was created to validate the algorithm. The platform consists of two stereo cameras and a projector that projects a sinusoidal pattern on a specially designed 3D object. CAD software was used to create a flat surface with material loss defects to serve as a test bench for the algorithm. The defects have cylindrical shapes with diameters that decrease with increasing depth.
The structured light system is moved in front of the scanned object, and four frames are acquired at distances that are multiples of λ/4, where λ is the wavelength of the projected sinusoid on the reference plane. Image sequences from the left and right cameras are shown in Figure 6.6.a and Figure 6.6.b. The acquired images are registered by cropping a multiple of twenty-six pixels from each image (assuming a constant speed of 26 pixels for the reference plane). A DC image without the effect of the sinusoidal pattern can be acquired by averaging the four registered images (I_DC = (I_1 + I_2 + I_3 + I_4)/4). The DC images of the acquired left and right sequences are shown in Figure 6.6.c and Figure 6.6.d. The DC images serve as the input for stereo comparison, where stereo disparity maps are acquired using a block matching algorithm [21]. The initial disparity map from block matching is shown in Figure 6.6.e. The stereo algorithm can only match the edges of the artificial grooves, due to their unique features, while the flat surfaces are not reconstructable. The flat surfaces were filled using the average value of the matched points. The initial disparity map serves as a seed for refining the final disparity map.

Figure 6.6: a) Image sequence from the left camera, b) Image sequence from the right camera, c) Left DC image, d) Right DC image, e) Disparity map with block matching, f) Phase matching between the right and left cameras, g) Refined disparity map, h) Refined disparity map after interpolation (3D view).

The phase from each camera is extracted according to Equation 6.5. The arctan function produces a wrapped phase that ranges from −π to +π. Using the geometric constraints, we assume that the phase is unique within the search window. In this scheme, for each point in the right camera we look for a matching phase value in the left camera (within the search window) after adding the values from the stereo matching. Figure 6.6.f shows the matching of a point in the right camera to the phase in the left camera after adding the values from the stereo matching. Here we notice that the point (Ref. point) lies approximately within twenty pixels of the matching phase in the other camera. In this step, we find the point with the least difference from the matched point and correct the original map by adding the difference to the original estimated value. A 2D representation of the refined disparity map is shown in Figure 6.6.g. The refinement process fills the gaps in the disparity map and corrects the mismatched points. The disparity map generated by this process has pixel-level resolution. To further improve the resolution, interpolation is used: the reference image remains intact, while the second image is interpolated to provide more accurate points for matching. The results after interpolation can be seen in Figure 6.6.h.

6.6 Effect of Nonlinear Phase Projection

One of the problems faced during the experimental trials was the nonuniform phase distribution caused by the barrel distortion of the projection lens (wide projection is needed to cover a large percentage of the two cameras' field of view). This type of distortion results in a non-uniform phase shift across the acquired frame; therefore, the reconstructed phase from the acquired image does not increase linearly. The nonlinear behavior of the calculated wrapped phase is reflected as a periodic pattern on the reconstructed surface.
To replicate this effect, we re-simulated the object with the nonlinear pattern shown in Figure 6.7.a. A sample image from the simulated sequence is shown in Figure 6.7.b. The DC component of the frames from the right camera is shown in Figure 6.7.c. We notice that the DC image was recovered, but not completely, due to residues of the projected pattern. Similar to the uniform-phase case, the DC images are used to calculate the stereo disparity maps, and the results are shown in Figure 6.8.a. A comparison between the phase in the cases of uniform and non-uniform phase projection is shown in Figure 6.8.b. The final reconstruction with the nonuniform distribution can be seen in Figure 6.8.c and Figure 6.8.d. The 3D reconstruction shows that the result was not affected by the nonuniform nature of the projected pattern. This is because the stereo reconstruction does not depend on the relation between the pixels within a single frame but on comparing the pixels from two different cameras that capture the same scene.

Figure 6.7: a) Nonlinear projected pattern, b) Sample image with nonlinear pattern projection, c) The average frame from the right camera.

Figure 6.8: 3D reconstruction with nonlinear projection: a) Disparity map with block matching, b) Effect of nonuniform phase distribution, c) 2D representation of the interpolated refined surface, d) 3D representation of the interpolated refined surface.

6.7 Features Extracted From the Sensors

The structured light sensor can produce three components in addition to the disparity map:

• DC component (I_0): the intensity of light reflected from the DC component of the projected fringes in addition to ambient light. The sensor works in a dark environment; therefore, the ambient light effect is neglected. Each projected fringe pattern is given by

I_i = I_0 + I_m sin(φ + θ).  (6.16)

With four frames and constant ambient lighting, the DC image is given by

DC_Image = I_0 = (I_1 + I_2 + I_3 + I_4)/4.  (6.17)

• AC component (I_m): the intensity of light reflected from the sample due to the AC component of the projected sinusoidal fringes, given by

AC_Image = Γ I_m = (1/√8) √((I_3 − I_1)^2 + (I_4 − I_2)^2).  (6.18)

• Modulation ratio: the ratio between the AC and DC values, given by

γ_Image = AC_Image / DC_Image.  (6.19)

It is worth noting that the intensity of the images captured by the camera is equal to Γ I_i, where Γ is a constant that accounts for the surface reflectivity. Γ was dropped here to simplify the notation.

6.8 SL Sensor Design

6.8.1 Rectangular Geometry Sensor

The sensor is a direct implementation of the simulated structured light sensor. A picture of the sensor showing its main components is presented in Figure 6.9.a. The left and right cameras are labeled 1 and 2, respectively, and the projector is labeled 3. The structured light system components are connected by a 3D-printed structure to produce a device with two stereo cameras that share the same baseline and are supported by a structured light illumination source. A slide projector is used to reduce the system complexity. The slide consists of vertical black and white stripes that are converted to a sinusoidal pattern by defocusing the projection lens. The projection lens, in this case, works as a low-pass filter that eliminates most of the high-order components of the projected square-wave pattern. The sensor was used to inspect a corroded sample with a rectangular slit defect.
The defect has a maximum depth of 1.5 mm and a length of 15 mm. The inspection results are shown in Figure 6.9.b to Figure 6.9.e. The DC and AC images clearly show the defect because they are direct 2D pictures of the inspected defect under different illumination. The ratio image shows no new information, which is expected given the similarity between the AC and DC images. The disparity image shows that the 3D profile of the inspected defect was reconstructed successfully; the darker parts of the disparity map represent object points that are farther away from the inspection sensor, which is what we see for the internal parts of the defect.

Figure 6.9: a) Rectangular geometry sensor and projection pattern, b) DC, c) AC, d) Ratio, and e) Disparity map.

6.8.2 Cylindrical Geometry Sensor

Three main schematics were considered for the cylindrical system design, and they are shown in Figure 6.10.a to Figure 6.10.c.

Figure 6.10: Proposed cylindrical systems: a) Model A, b) Model B, c) Model C.

6.8.2.1 Model A

The design consists of a camera and a projector pointing in the same direction. The projector is chosen to have a relatively narrow field of view (76 degrees in our design) to project the image in front of the camera, while the camera is chosen to have a much wider field of view (170 degrees in our design). The rectangular structured light setup can be modified to scan the cylindrical geometry inside the pipe by placing a light module in series with a camera pointing in the same direction, as shown in Figure 6.11.a. The triangulation in this geometry at a specific angle θ is explained in Figure 6.11.b. Here, c is the camera, p is the projector, the 3D point is the intersection point, f is the focal length, d is the distance between the projector and the camera, and r is the position of the point on the image plane. For a general camera model, f_p/Z = r_p/X and f_c/(Z − d) = r_c/X. By combining the two equations, the coordinates of the point are extracted as

Z = d / (1 − (f_c r_p)/(f_p r_c))   and   X = r_p Z / f_p.  (6.20)

Figure 6.11: a) Schematic of model A inside the pipe, b) Triangulation of the system in cylindrical coordinates.

Figure 6.12: a) Projection of a uniform pattern inside a pipe, b) Proposed nonlinear phase pattern.

Projecting a pattern with a linearly changing phase results in a nonlinearly distributed phase on the pipe's internal walls, as shown in Figure 6.12.a. The nonlinear distribution arises from the nonlinear relation between the distance from the camera and the change in the camera coordinates according to the camera equation. The nonlinear distribution of the phase along z causes a different phase change for each projected point when the system moves inside the pipe; therefore, the phase-change relationship derived for the rectangular domain is no longer valid. This problem is mitigated by projecting a pattern with a nonlinearly distributed phase, as shown in Figure 6.12.b. A simulated reconstruction of a bump caused by an external force is shown in Figure 6.13. A static nonlinear pattern is projected on the pipe wall, and the scanner system is then moved through the pipe while the camera continues to image the pipe wall. The defect on the pipe wall was reconstructed successfully, but with some artifacts.

Figure 6.13: Simulated 3D reconstruction of a bump defect on the pipe internal surface.
Similar to the rectangular geometry case, the artifacts are related to the non-constant phase shift arising from the nonlinear relationship between the phase distribution on the pipe wall and the pipe diameter.

6.8.2.2 Model B

This design uses two stereo cameras facing the pipe wall, accompanied by a projector aiming in the same direction. It achieves a 360-degree view by using multiple rectangular stereo systems to cover the different directions. This topology is a direct implementation of the rectangular MPMP sensor for pipe geometry. Its advantages are that it reduces the total length of the sensor, eliminates the need for glass to connect the projector, and eliminates the shadows from the projector wires. An experimental prototype of this module is shown in Figure 6.14.b. The module is fabricated to have a wide field of view and a small size so it can be inserted inside an 8-inch pipe. The cameras have a 175-degree diagonal field of view (140 degrees horizontal, 105 degrees vertical). The module has a length of 3 inches and a height of 1.5 inches. The sensor was tested by inspecting the PVC pipe shown in Figure 6.14.a. Two defects with different orientations were engraved into the pipe wall to simulate material loss. A sample image is shown in Figure 6.14.c, and a 2D representation of the disparity map is shown in Figure 6.14.d. 2D images of the defects are shown for comparison in Figure 6.14.e and Figure 6.14.g. The results show that the pipe profile and the defects were reconstructed successfully. A close comparison with the 2D images shows that the sensor was able to detect the submillimeter scratches near the defect in the middle of the pipe. There are still some artifacts at the edges of the frame due to the severe distortion of the projected image. From the reconstruction results, the downsides of this topology can be summarized as follows:

• Strong distortion of the projected pattern due to the use of a wide-angle projection lens. This results in a nonuniform phase distribution across the field of view, and the edges tend to have a lower resolution because the pattern frequency is lower at the edges of the frame.

• Complexity of the projection system, because multiple projection modules are needed to cover a 360-degree field of view.

• Distortion of the projection pattern from the use of multiple projection modules at the shared edges between every two projection modules. One solution for this distortion issue is to use synchronized illumination so that no two projectors are on at the same time.

Figure 6.14: a) 6-inch PVC pipe with internal defects, b) Miniaturized sensor, c) Raw image from the sensor, d) 2D representation of the reconstructed disparity map, e) Photographic image of the first defect, f) Zoomed section of the disparity map showing the 2nd defect, g) Photographic image of the second defect.

Figure 6.15: Implementation of model C with a) Direct implementation, b) Backward-looking projector with a spherical mirror.

6.8.2.3 Model C

In this design, multiple cameras cover different parts of the field of view while a single projector is used to illuminate the scene. This design offers the following advantages:

• The complexity of the design is reduced by using only a single light projector. Reducing the resolution of the projection system is not critical, since lens defocusing is used in the system as a low-pass filter to produce the needed sinusoidal fringes.
• Eliminates the need for very wide-angle projection lenses, which allows better control over lens distortion.

• Eliminates the distortion caused by using multiple projectors at the edges of the frame.

A direct implementation of the model in this design is explained in Figure 6.15.a. The drawbacks of this model are the wire connection needed to power the slide projector and the limited depth of field of the projection lens. The problem of the connections can be solved by coating the power connections on the surface of the transparent glass tube using plasma sputtering. Another solution is based on coupling a projector with a mirror to create a backward-looking projector, as shown in Figure 6.15.b. This type of projector offers the following advantages:

• Eliminates the sputtering process needed to power the projector.

• Reduces the total size of the projector, since the projector hardware can be embedded beneath the stereo cameras.

Figure 6.16: a) Initial prototype of model C, b) Inspection results of a PVC pipe using the initial prototype of model C.

The main drawback of this system is that it requires high precision during the fabrication process to guarantee correct alignment between the main projector and the reflective mirror. The initial prototype in Figure 6.16.a is a direct implementation of model C similar to the schematic in Figure 6.15.a. The sensor was used to inspect the same pipe that was used to test model B, shown in Figure 6.14.a. The inspection results are shown in Figure 6.16.b and show that both wall-loss defects on the pipe wall were detected and reconstructed. We also notice one artifact, which was related to errors during the decoding process.

6.9 Conclusion

In this chapter, we presented a structured light sensor for the inspection of the pipe's internal surface based on movement-enhanced PMP. The sensor exploits its own movement to enhance the quality of the 3D reconstruction. Active stereo vision was used to enhance the sensor performance by eliminating the need for a regularization algorithm. A reconstruction algorithm was developed and validated using a simulation environment. A rectangular geometry sensor was fabricated and validated by scanning a flat corroded sample. The results show that the defect was detected successfully and that the 3D profile was reconstructed. A cylindrical geometry sensor was also designed and fabricated. The results show that the sensor can detect wall-loss defects on plastic pipes with high resolution in the middle of the frame. The results also showed some artifacts at the edges of the reconstructed frame, which are mainly related to errors in undistorting the fisheye image near the frame edges.

CHAPTER 7
THERMOACOUSTIC IMAGING: A SIMULATION STUDY

7.1 Introduction

Matter has the ability to support the propagation of different types of waves, many of which can be used to provide specific information and be exploited in imaging technologies for a wide range of applications, from biomedicine and structural health monitoring to surveillance. In all imaging technologies, researchers have striven to improve overall contrast and resolution with different measurement techniques and processing algorithms. Unfortunately, single-wave imaging technologies are inherently limited by either the wavelength or the absorption depth of the radiated waves in the medium.
Ultrasound imaging is a highly mature technology, in which medical diagnostic frequencies are typically 1-20 MHz [103]. However, for certain materials, the acoustic reflection and absorption coefficients do not exhibit significant variation, causing reduced contrast. For instance, in biomedical applications, poor contrast in ultrasound images is often encountered between soft tissue types (e.g., muscle vs. fat), due to the similar acoustic impedances (or densities) of these two materials, which yield similar reflection coefficients. A primary advantage of microwave imaging is that contrast is based upon the permittivity and conductivity of the material being imaged. An active area of research for microwave imaging is early breast cancer detection. It is worth noting that the resolution of microwave imaging is not necessarily subject to the diffraction limit (the diffraction limit, based on the wavelength, is the smallest angular separation of two sources that can be distinguished), depending on the measurement and processing technique used. However, traditional microwave imaging resolution is governed by diffraction when working in the far field. Thus, for most microwave frequencies used in biomedical applications, which are below 12 GHz, the smallest object that can be discerned is on the order of several centimeters or larger. A novel way to overcome the limits of the aforementioned single-wave imaging is to synthesize different imaging modalities in what is known as multi-wave or hybrid imaging, where the underlying concept is to combine the advantage of one type of wave (e.g., one that yields high spatial resolution) with that of another (e.g., one that yields high contrast) [104]. In multi-wave imaging, the interrogating signal can be physically different from the observed signal. For example, in thermoacoustic imaging, the pulsed microwaves are the interrogating signal, and the observed signals are ultrasound waves. The thermoacoustic effect we investigate uses pulsed microwaves, which are absorbed by the medium. When the medium absorbs a burst of microwave energy, it quickly heats and cools. The effect of the sample heating and cooling is known as thermoelastic expansion and can be observed through the compression waves that are generated. The thermoacoustic effect has been known for a long time: it was first described by Alexander Graham Bell in 1880, who demonstrated the existence of the photoacoustic effect by using a pulsed light source to create acoustic waves [105]. However, it was not until recent years that thermoacoustic imaging started to gain attention in the research community. The general scheme of thermoacoustic imaging is shown in Figure 7.1. The concept of thermoacoustic imaging was first proposed by Bowen [106], and proof-of-concept measurements were subsequently shown [107]. A short high-energy radiation pulse is sent to the imaged sample to generate acoustic waves; these waves are then collected by an array of acoustic transducers to form the final image of the object. One of the first publications showing significant imaging capability was the work described by Kruger et al. [108], who reported successful imaging of a kidney using a 25 kW peak power pulsed microwave source, a microwave frequency of 434 MHz, and a transducer center frequency of 1 MHz. The reported data were acquired with an array of transducers using a liquid coupling medium.
Other significant works include investigations of thermoacoustic imaging as an early breast cancer detection method comparing radiograph, ultrasound, and thermoacoustic imaging on four excised breast samples [109]. The challenges of thermoacoustic image reconstruction and processing of data with a Time Reversal Mirror imaging technique were discussed by Chen et al. [110]. Absorption coefficients have been calculated and reduction of reconstruction errors by using a formula based on the least-squares solution have been reported [111]. The use of EM waves in the range of 100 to 200 MHz to improve the penetration depth of TAI systems for human tissues is 105 suggested by Eckhart et al. [112]. And the use of carbon nanotubes as a contrast agent to enhance the contrast of TAI for breast cancer detection is noted in ref. [113]. Figure 7.1: A schematic explaining the thermoacoustic imaging process Most of the research is currently focused on the use of thermoacoustic imaging for medical applications and especially for breast cancer detection in the case of microwave-induced thermoa- coustic imaging[114]. TAI is attractive as an imaging solution for breast cancer due to its high spatial resolution and good contrast in biological tissues. The malignant tissues tend to have higher water content than other healthy tissues; therefore they experience higher electromagnetic losses and therefore generate stronger sound waves[115]. Medical imaging is one of the potential applica- tions for TAI, but it is currently being studied to be a candidate for other nondestructive evaluation processes like detection of explosives embedded in biological tissues [116]. For this application, explosive devices are detected due to their small conductivity (0.001 S/m) when they are compared to biological tissues[116]. This low conductivity of explosives creates the required contrast in the TAI images and makes the explosive appears as dark objects inside the imaged target. In this chapter, we present the theory of the TAI aided by a hybrid finite element 3D-2D electromagnetic-acoustic model to simulate the multi-wave generation and investigate the imaging capabilities of the TAI system. 106 7.2 Excitation Sources for Thermoacoustic Imaging Different sources are used to excite the heat gradient inside the imaged targets. These different sources provide a variety of heat contrast and penetration depths due to the different frequency bands and the amount of energy. 7.2.1 Photoacoustic In this technique, an infrared light source is used as the excitation source, and the heat is generated from the light absorption by the tissues. The light source is either a highly focused laser beam that illuminates one point at a time or a strong light source that illuminates a whole organ in one shot. The current maximum penetration depth for this technique is 8.2 cm as was demonstrated in breast chicken tissues [117] by illuminating the sample from a single side. The penetration depth can be doubled by using two light sources to illuminate the sample from both sides. 7.2.2 Microwave Induced Thermoacoustic To enhance the penetration depth of TAI in biological tissues, microwave excitation is used instead of optical excitation. In this scheme, the target is excited by sending EM waves in the range of UHF frequencies to heat the target. The heat is generated from electromagnetic waves absorption in the material. 
The contrast, in this case, is based on the dielectric properties of the material, and the resulting image represents a conductivity map of the imaged target. Due to the high power requirements and the necessity of a uniform field distribution in the near-field region, either horn antennas or open-ended waveguides are used in most cases to couple the EM signals to the imaged target, although some experiments employ helical antennas.

7.2.3 Magnetically Mediated Thermoacoustic

To extend the capabilities of the thermoacoustic imaging method to materials with higher conductivity values than biological tissues, lower frequencies are employed to excite the imaged target. Reducing the frequency requires larger antennas to transmit the power to the imaged target. This results in the power spreading over a large area and the loss of the ability to focus the EM power in a small area to achieve higher E-field values. To avoid this, coils are used to create an alternating magnetic field that is associated with an alternating electric field in the range of MHz frequencies [118]. This method provides a better ability to image conductive materials, as was demonstrated in [118] by imaging a tapered metal strip.

7.3 Thermoacoustic Theory

Thermoacoustic wave generation is an electromechanical phenomenon that results from the thermoelastic expansion of a medium due to pulsed electromagnetic illumination. This thermal expansion results in a series of mechanical (acoustic) waves that are governed by the following wave equation [119, p. 785]

\nabla^2 p(\vec{r},t) - \frac{1}{c^2}\frac{\partial^2 p(\vec{r},t)}{\partial t^2} = -\frac{\beta}{k c^2}\frac{\partial^2 T(\vec{r},t)}{\partial t^2}    (7.1)

where c, β, and k are the sound speed, volume expansion coefficient, and isothermal compressibility of the medium, respectively, p(r, t) is the instantaneous pressure at coordinate r, and T(r, t) is the temperature rise of the imaged object at time t and coordinate r. The left side of the equation represents the acoustic wave equation, while the right side represents the thermoacoustic source. The thermoacoustic source depends on the second derivative of temperature in time, the volume expansion coefficient, and the heat capacity. The heat capacity and volume expansion coefficient are constants that depend on the medium properties. When the thermal confinement condition is met, the source term satisfies

\rho C_V \frac{\partial T(\vec{r},t)}{\partial t} = H(\vec{r},t)    (7.2)

[119, p. 785], where ρ and C_V are the material density and specific heat capacity of the medium, and H(r, t) is the heating function that represents the amount of dissipated energy per unit volume and time delivered by the excitation pulse. In general, for a thermoacoustic system to work efficiently, two conditions should be met: thermal confinement and stress confinement [6]. The thermal confinement condition assumes that the excitation pulse is short enough that the acoustic wave is excited before any significant heat conduction occurs. This time is set by the thermal conduction of the dissipated power, which can be approximated by τ_th = L_p²/(4 D_T), where L_p is the characteristic dimension and D_T is the thermal diffusivity of the object being imaged. The pulse width should be smaller than τ_th for any object to be imaged efficiently. On the other hand, stress confinement is related to the time needed for the stress to transit the heated region and can be approximated by τ_s = L_p / c. For this condition to be met, the pulse width should be shorter than τ_s.
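As a quick sanity check on these two conditions, the confinement times can be evaluated numerically. The short Python sketch below uses assumed, tissue-like values for the characteristic dimension, thermal diffusivity, and sound speed; none of these numbers come from the experiments in this chapter.

# Illustrative check of the thermal and stress confinement conditions
# (a sketch; the material values below are assumed for demonstration only).
L_p = 1e-3        # characteristic dimension of the imaged feature, m (assumed)
D_T = 1.4e-7      # thermal diffusivity, m^2/s (typical soft-tissue value, assumed)
c   = 1500.0      # speed of sound, m/s (assumed)

tau_th = L_p**2 / (4.0 * D_T)   # thermal confinement time, tau_th = L_p^2 / (4 D_T)
tau_s  = L_p / c                # stress confinement time,  tau_s  = L_p / c

print(f"tau_th = {tau_th:.3e} s, tau_s = {tau_s:.3e} s")
# The excitation pulse width must be shorter than min(tau_th, tau_s); for these
# values the stress confinement time (~0.67 us) is the binding constraint, which
# is consistent with the sub-microsecond pulses used later in the chapter.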
The heating function can be written as the product of two separate terms

H(\vec{r},t) = A(\vec{r})\, I(t)    (7.3)

where A(r) is the absorbed energy per unit volume, and I(t) is the temporal envelope of the excitation pulse. For microwave-induced thermoacoustic imaging in biological tissues, the magnetic losses are negligible. Therefore, the total losses are approximated as the sum of conductivity and dielectric losses

A(\vec{r}) = \sigma(\vec{r})\,|E(\vec{r})|^2 + \omega \varepsilon''\,|E(\vec{r})|^2    (7.4)

where σ, ε'', and E(r) are the material electrical conductivity, the imaginary part of the permittivity, and the root mean square value of the applied electric field inside the target. By combining these relations (substituting Equation (7.2) into Equation (7.1), so that the second time derivative of temperature becomes the first time derivative of the heating function), the thermoacoustic equation is rewritten as

\nabla^2 p(\vec{r},t) - \frac{1}{c^2}\frac{\partial^2 p(\vec{r},t)}{\partial t^2} = -\frac{\beta}{C_p}\frac{\partial H(\vec{r},t)}{\partial t}    (7.5)

where C_p = ρ c² k C_V is the specific heat capacity of the medium under constant pressure.

7.4 Excitation Methods

As mentioned earlier, thermoacoustic imaging requires a pulse that is short enough to initiate the thermoelastic expansion and diminish before causing any significant heat conduction. In the conventional method, an incoherent short pulse is sent individually to the imaged target, the acoustic signals are recorded, and the image is formed directly. To enhance the detectability of the signals, coded continuous excitation has been suggested. In this scheme, the excitation signal is coded with a predefined pattern that can be matched during the reconstruction process. One of the coding methods is to use stepped frequency coding or linear frequency modulation of the microwave signal. The high-frequency carrier signal (GHz range) is frequency modulated with a low-frequency (MHz range) modulating signal. Matched filters are then used to reconstruct and detect these transmitted patterns [105]. Coherent excitation enhances the quality of the reconstruction by providing a better signal-to-noise ratio, but it also increases the exposure time to microwave radiation.

7.5 Coexistence With Other Physical Phenomena

The generation mechanism of the thermoacoustic signal can lead to the generation of extra acoustic signals from other physical interactions. One example is the generation of magnetoacoustic waves, at the same time as magnetically mediated thermoacoustic imaging in water, in the presence of a static magnetic field. In addition to the thermal effect, the static magnetic field excites another force inside the material due to its interaction with the created conduction currents. In the presence of the static magnetic field, a time-varying Lorentz force is created inside the material. The excited Lorentz force F(r, t) is given by [118]

F_i(\vec{r},t) = \sigma E_j(\vec{r},t) \times B_{0k}(\vec{r})    (7.6)

where E(r, t) represents the electric field applied by the coil and B_0(r) represents the external static magnetic field; i, j, k are the three directions of the coordinate system. This alternating Lorentz force creates another acoustic wave source inside the imaged sample with a frequency equal to twice the frequency of the excitation RF signal. The magnetoacoustic effect generates both longitudinal and shear waves. The resulting longitudinal acoustic waves are parallel to the Lorentz force direction, and they are described by the following equation

\nabla^2 p_{mi}(\vec{r},t) - \frac{1}{c^2}\frac{\partial^2 p_{mi}(\vec{r},t)}{\partial t^2} = \nabla \cdot \left(\sigma E_j(\vec{r},t) \times B_{0k}(\vec{r})\right)    (7.7)

where p_mi is the magnitude of the longitudinal waves in the i-th direction.
The generated shear waves are perpendicular to the direction of the Lorentz force and are given by

\nabla^2 p_{si}(\vec{r},t) - \frac{1}{c^2}\frac{\partial^2 p_{si}(\vec{r},t)}{\partial t^2} = \nabla \cdot \left(\sigma E_j(\vec{r},t) \times B_{0i}(\vec{r})\right)    (7.8)

where p_si is the magnitude of the shear waves in the i-th direction. In this scenario, two acoustic waves with two distinct frequencies are generated at the same time. Thermoacoustic waves give conductivity contrast, while magnetoacoustic waves provide information about material elasticity. In microwave-induced thermoacoustic imaging, this effect is negligible because the frequency of the exciting signal is in the GHz range, which results in very strong attenuation of the exciting signals.

7.6 Numerical Modeling

A finite element numerical model was created with COMSOL Multiphysics to optimize the experimental setup through parametric studies and to develop reconstruction algorithms, namely time reversal. The model is divided into electromagnetic and acoustic modules that are coupled sequentially. The electromagnetic module simulates the electromagnetic wave propagation according to

\nabla \times \mu_r^{-1}\left(\nabla \times E(\vec{r})\right) - k_0^2\left(\varepsilon_r - \frac{j\sigma}{\omega\varepsilon_0}\right) E(\vec{r}) = 0    (7.9)

where μ_r is the relative magnetic permeability, ε_r is the relative electric permittivity, ω is the wave angular frequency, and k_0 is the propagation constant. The electromagnetic absorption density H(r, t) inside the sample is calculated according to Equation (7.3), where it mainly depends on two parameters: the intensity of the electric field (E) and the electrical conductivity (σ). I(t) represents the temporal profile of the electromagnetic energy, which is injected into the model as a Gaussian function. The ultrasound excitation Q_t is defined by

Q_t = \frac{\rho\beta}{C_p}\frac{\partial H(\vec{r},t)}{\partial t}    (7.10)

where the heat change is differentiated with respect to time. This ultrasound source is then inserted as a monopole source on the right-hand side of the acoustic wave equation,

\frac{1}{\rho c^2}\frac{\partial^2 p}{\partial t^2} + \nabla \cdot \left(-\frac{1}{\rho}\nabla p\right) = Q_t.    (7.11)

After the excitation of the acoustic source, the thermoacoustic wave propagation inside the domain is simulated, and the final data are recorded for evaluation or reconstruction.
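The temporal side of this electromagnetic-acoustic coupling can be sketched in a few lines of Python. The snippet below builds a Gaussian envelope I(t), forms the heating function of Equation (7.3) at a single point, and differentiates it in time to obtain the monopole source of Equation (7.10). The sampling rate, pulse parameters, and material constants are assumed for illustration only and are not the values used in the COMSOL model.

import numpy as np

# Sketch of the temporal coupling between the EM and acoustic modules: a Gaussian
# envelope I(t) (Eq. (7.3)) is differentiated in time to form the monopole source
# of Eqs. (7.10)-(7.11). All numerical values are assumed for illustration only.
fs = 200e6                                   # sampling rate, Hz (assumed)
t = np.arange(0.0, 4e-6, 1.0 / fs)           # time axis, s
tau, t0 = 0.5e-6, 1.0e-6                     # Gaussian width and center, s (assumed)
I_t = np.exp(-0.5 * ((t - t0) / tau) ** 2)   # temporal envelope I(t)

A_r = 0.4 * 1.0 ** 2                         # sigma*|E|^2 at one point, Eq. (7.4) (assumed)
H_t = A_r * I_t                              # heating function H(r, t) = A(r) I(t)

rho, beta, C_p = 1000.0, 4e-4, 4000.0        # density, expansion coeff., heat capacity (assumed)
Q_t = (rho * beta / C_p) * np.gradient(H_t, 1.0 / fs)   # source term as written in Eq. (7.10)
# Q_t is what is injected as the monopole source on the right-hand side of Eq. (7.11).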
7.6.1 Model Geometry

We simulate a WR340 (S-band) waveguide with cross-section dimensions of 86.36 mm x 43.18 mm and excite it with a 2.45 GHz electromagnetic signal. The waveguide is modeled as an air-filled rectangular enclosure with perfect electric conductor boundaries. The waveguide is excited with a wave port that couples a single transverse electric mode (TE10) and is connected to an acrylic tank that is filled with safflower oil for acoustic coupling. The tank has a wall thickness of 3 mm. A schematic of the simulated geometry is shown in Figure 7.2.

Figure 7.2: Simulation geometry.

Electromagnetic wave propagation is simulated inside the waveguide and the safflower oil tank, and the simulation domain is terminated by absorbing boundaries at the top and sides of the acrylic tank. A perfect electric conductor boundary is assumed at the bottom of the tank because the base of the tank in the experimental setup is made of aluminum, which has a very high conductivity of 10.79 x 10^6 S/m [120] at microwave frequencies. The acoustic wave excitation and propagation simulation is performed in a smaller region of the geometry around the target due to the slow speed of sound. The simulated acoustic domain is terminated with an absorbing boundary condition.

7.6.2 Model Validation

A circular target with biological tissue properties is employed to validate the model by comparison to experimental work [1] and simulation results [115]. The model simulates the generation of thermoacoustic signals by a circular target with a radius of 6 mm; this target is made of ethylene glycol with different concentrations of air micro-bubbles. The different concentrations of air micro-bubbles introduce different electrical and mechanical properties inside the imaged object. Table 7.1 provides details of the change in material properties with bubble concentration and shows that both the electrical conductivity and the permittivity of the imaged target decrease with increasing air micro-bubble concentration, while the sound speed increases. The imaged target is immersed in safflower oil, and a virtual acoustic transducer is aligned in a position normal to the imaged sample surface. The electromagnetic source has a center frequency of 2.45 GHz, and the wave is modulated with a Gaussian pulse. Figure 7.3 shows the results of simulating these five different materials and compares them with experimental results that were previously published [1]. The results show that the amplitude of the signal decreases with increasing air micro-bubble concentration (decreasing conductivity), which is related to the decrease of the electromagnetic losses inside the material. The model also shows that the signal wavelength decreases with increasing air micro-bubble concentration, which is related to the increase of the sound speed.

Figure 7.3: Thermoacoustic simulation of different micro-bubble concentrations suspended in ethylene glycol (dashed lines) compared to experimental results (solid lines) from reference [1].

Table 7.1: Change of permittivity, conductivity, and sound speed with the change of micro-bubble concentration [1]

Concentration   σ (S/m)   ε_r     Sound speed (m/s)
0%              2.32      14.03   1660.7
20%             1.33       9.66   1729.8
30%             1.11       8.34   1805.6
35%             0.95       7.38   1922.6
40%             0.84       6.86   2049.4

7.7 Image Reconstruction

Different algorithms have been proposed for image reconstruction from thermoacoustic signals, and their performance varies depending on the target material properties and the acquisition geometry [110]-[121]. In our experiment, image reconstruction is performed by using time reversal due to its simplicity and robustness [122]. Time reversal is based on the reciprocity of the wave equation, assuming that the source of the signal can be uniquely reconstructed by inverting time and re-transmitting the signal into the same domain. According to Huygens' principle, for any initial source that has bounded support, there is a finite time for the wave to leave the domain [122]. Therefore, a solution p vanishes inside the domain after a finite amount of time T. After the time T, a zero initial value can be imposed on the domain, and the wave can be retransmitted into the domain in a reversed time sequence to reconstruct the source. The re-transmission of the wave into the medium with the same received pressure and boundary conditions results in the reconstruction of the initial source [123]. The reconstruction process is implemented by using two different software packages to ensure the accuracy of the model: results from a hybrid COMSOL model were compared to results generated using the Matlab k-Wave toolbox [124]. This can be mathematically explained by solving the acoustic wave equation.
By using Green's functions and by assuming that the source pulse is very short and can be approximated as a Dirac delta function, the wave equation can be solved as below [123].

p(\vec{r}) = \frac{\beta}{C_p} \iiint_{|\vec{r}-\vec{r}'|} G(\vec{r},\vec{r}')\, \frac{d}{dt} H(\vec{r}',t')\, d\vec{r}',    (7.12)

H(\vec{r}',t') = I_0\, A(\vec{r}')\, \eta(t')    (7.13)

\eta(t') = \delta(t')    (7.14)

where G, A, η, and I_0 are the medium Green's function, the microwave loss distribution, the temporal profile of the microwave pulse, and the amplitude of the microwave pulse, respectively. By exploiting the reciprocity of Green's functions, the solution can be written as a convolution between the Green's function of the medium and the wave source,

p(\vec{r}) = G(\vec{r},\vec{r}') \otimes p_a(\vec{r})    (7.15)

p(\vec{r}) = \frac{\beta I_0}{C_p} \iiint_{|\vec{r}-\vec{r}'|} A(\vec{r}')\, d\vec{r}'    (7.16)

where p_a is the induced pressure from A that is enclosed in the sphere |r − r'|. When the sphere size goes to zero, p_a can be given by

p_a(\vec{r}) = \iiint_{|\vec{r}-\vec{r}'|} p_\delta(\vec{r}')\, d\vec{r}'    (7.17)

p_\delta(\vec{r}) = \frac{\beta I_0}{C_p} A(\vec{r})\Big|_{\vec{r}-\vec{r}'}    (7.18)

The goal of the reconstruction is to retrieve the function A(r) from the received pressure p(r). The re-transmission of the wave into the medium with the same received pressure and boundary conditions results in the reconstruction of the initial source. In other words, the detected pressure is re-convolved with the medium,

p_T(\vec{r}) = G(\vec{r},\vec{r}') \otimes p_\delta(\vec{r}) \otimes G(\vec{r}',\vec{r})    (7.19)

where G(r, r') ⊗ G(r', r) is the self-correlation function of the medium, and the re-transmitted wave field reconstructs the original source.
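To make the role of the detection aperture concrete before discussing transducer placement, the following Python sketch performs a deliberately simplified delay-and-sum back-projection of ideal point-source arrivals recorded by a line array. It is not the k-Wave time-reversal implementation used in this work, and its geometry and sampling values are assumed.

import numpy as np

# A simplified delay-and-sum back-projection (NOT the time-reversal code used in
# this chapter) to illustrate limited-view reconstruction artifacts. Values assumed.
c, fs = 1500.0, 20e6                       # sound speed (m/s) and sampling rate (Hz)
sensors = np.stack([np.linspace(-0.02, 0.02, 32), np.full(32, 0.03)], axis=1)  # top line array
src, n_t = np.array([0.0, 0.0]), 2048      # synthetic point source at the origin

signals = np.zeros((len(sensors), n_t))
for i, s in enumerate(sensors):            # ideal delta arrivals, one per sensor
    signals[i, int(round(np.linalg.norm(s - src) / c * fs))] = 1.0

xs, ys = np.linspace(-0.02, 0.02, 120), np.linspace(-0.01, 0.02, 90)
X, Y = np.meshgrid(xs, ys)
image = np.zeros_like(X)
for i, s in enumerate(sensors):            # sum each A-scan at the expected delay per pixel
    idx = np.clip(np.round(np.hypot(s[0] - X, s[1] - Y) / c * fs).astype(int), 0, n_t - 1)
    image += signals[i, idx]
# The source is recovered at the intersection of the back-projected arcs, while the
# arcs themselves remain as limited-view streaks; enlarging the aperture (for
# example with an additional side array) reduces these artifacts, which is the
# effect studied with the L-shaped array below.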
Optimal reconstruction of thermoacoustic tomography images requires that the target be surrounded by 360 degrees of transducers [125, p. 834]. This is an expensive approach and difficult to implement in practice. To reconstruct a point completely inside a homogeneous medium, any line that passes through that point should intersect the plane of the transducer at least once. The case becomes more complicated for inhomogeneous media because the waves are deflected by the medium inhomogeneity. To show this effect, a simulation with different transducer positions is performed. The first simulation uses an array of transducers that are positioned on the top of the simulated geometry (line array). The second simulation adds another array of transducers on the left side of the simulation domain (L-shaped).

Figure 7.4: Effect of transducer orientation on image reconstruction, a) The simulated geometry, b) Reconstruction with transducers aligned linearly along the top, c) Reconstruction with transducers aligned in an L shape along the top and left boundaries, d) A comparison of the amplitude along the dotted line in b and c.

Figure 7.4 shows the reconstruction results with the positions of the different transducers. The simulated geometry is shown in (a), while (b) and (c) show the reconstruction results with the line and L-shaped arrays, and (d) shows a comparison of the reconstruction along the dotted line in both cases against the original simulated object (negative pressure is neglected). The results show that the edges that are normal to the transducers' plane are reconstructed clearly, while the other edges are blurred. When the L-shaped array is used in (c), the edges of the rectangular object are reconstructed precisely, while the circle still exhibits some blurred edges because the array elements are not normal to those edges. The size of the transducer is required to be as small as possible because it controls the resolution of the reconstructed image. Most reconstruction algorithms assume point-like omnidirectional transducers, but real transducers have a finite size, and the recorded signal represents an integration of the orthogonal components of the incident waves over the transducer surface; it is given by

P_d = \int \vec{S} \cdot d\vec{A}    (7.20)

where S and dA are the incident signal on the transducer and the surface unit area, i.e., the transducer signal represents the sum of all the signals that cross a specific area. This limits the resolution of the acquired images and makes it dependent on the transducer size and properties. The effect can be reduced either by using special transducers that have a thin and long structure or by deconvolving the shape and size of the transducer from the reconstructed image.

7.8 Case Studies and Discussion

A variety of experimental parameters can affect thermoacoustic signal intensity and characteristics. In this section, different scenarios are studied to show the effect of the excitation signal and imaged object properties on the thermoacoustic signal characteristics.

7.8.1 Simulation Study of Image Reconstruction With Time Reversal

Three objects with different shapes are placed in safflower oil to study the reconstruction with time reversal, as shown in Figure 7.5a. The samples are a circle with a radius of 1 mm, a square with a side length of 2 mm, and a rectangle with dimensions of 1.5 mm by 3 mm. The forward model employs the same setup that was explained previously. The model is excited with a pulse that has a carrier frequency of 2.45 GHz and a Gaussian shape with a pulse width of 40 nanoseconds. All the samples are set to have the same electrical conductivity of 0.4 S/m, a relative permittivity of 9, and a sound speed of 1537 m/s. The acoustic transducers are represented by an array of point-like transducers that are placed at the top of the simulation domain. Figure 7.5b shows a uniform electromagnetic loss distribution inside the samples, which indicates that the sample regions can be reconstructed well. The same forward acoustic simulation model is recreated by using the k-Wave toolbox, and the signals are retransmitted in a reverse time sequence. Time reversal results are shown in Figure 7.5c, and a thresholded version is shown in Figure 7.5d. The positions and sizes of the objects are reconstructed successfully. The edges that are normal to the transducer plane are reconstructed well, while the horizontal edges are blurred due to the orientation of the detection plane.

Figure 7.5: Reconstruction with time reversal, a) Model geometry, b) EM losses distribution, c) Time reversal reconstruction, d) Image after neglecting negative values.

7.8.2 Effect of Material Conductivity

In this study, the change of the physical properties of the material in Table 7.1 is neglected, and only the conductivity is changed. The results are shown in Figure 7.6a, where increasing the conductivity increases the amplitude of the generated thermoacoustic signals. This behavior is a result of the increased electromagnetic wave losses according to Equation (7.4). However, increasing the conductivity above a certain threshold reduces the skin depth of the imaged object, which will be discussed later. The study is extended to 2D to show the effect of these results on the final image. Two circular objects with diameters of 1 mm and an electrical conductivity of 0.75 S/m are embedded in another biological tissue that has lower conductivity (0.4 S/m).
The results of the electromagnetic loss distribution in Figure 7.6b show a high contrast between the objects and the medium they are embedded in due to the electrical conductivity differences. Figure 7.6c shows the reconstructed results, while Figure 7.6d shows a comparison between the EM losses and the reconstructed profiles of the imaged objects. Electromagnetic absorption thus determines the image contrast and overall signal strength.

Figure 7.6: The effect of conductivity on TAI images, a) 1-D TAI signal variation with change of conductivity, b) Normalized EM losses distribution, c) Time reversal reconstruction, d) A 1D comparison between the EM losses and reconstructed TAI images at x = 0.

Since the conductivity plays an important role in controlling the magnitude of the generated thermoacoustic signals, the optimal conductivity is investigated using the numerical model and simple plane-wave analytical expressions. A sample with dimensions of 30 mm by 120 mm is simulated, and a parametric sweep is performed over a wide range of electrical conductivities to calculate the amount of EM loss inside the imaged sample. The loss calculation is performed by taking a line integral from the surface of the sample to 10 mm below the sample surface. The results of the study are shown in Figure 7.7. The figure shows the EM power losses for ε_r equal to one and three. The dashed lines are the results from the COMSOL model solution. The solid lines represent calculation results from a direct analytical solution assuming plane wave propagation and normal incidence. In this case, the losses are calculated directly by evaluating the integral below for different values of σ and ε_r:

W_{loss} = \int_0^{0.01} \sigma E^2 T^2 e^{-2\alpha z}\, dz    (7.21)

where T and α are the transmission and attenuation coefficients, given by

\alpha = \omega \sqrt{\frac{\mu\varepsilon}{2}\left(\sqrt{1+\left(\frac{\sigma}{\omega\varepsilon}\right)^2}-1\right)}    (7.22)

T = \frac{2\eta_2}{\eta_2+\eta_1}, \qquad \text{where} \qquad \eta = \sqrt{\frac{\mu}{\varepsilon}}    (7.23)

The integration represents the total dissipated power assuming a normally incident wave and neglecting any reflections inside the system. The results show that the EM power losses inside the target increase as the conductivity increases up to a certain level and then start to drop for higher conductivity. At first, the increase can be explained according to Equation (7.4), where the loss is proportional to the electrical conductivity. After that, the skin depth of the sample starts to decrease until the EM wave is no longer able to penetrate through the sample. From the results, it can be inferred that the conductivity plays an important role in increasing the intensity of the thermoacoustic signal, but a high conductivity reduces the skin depth of the EM waves and results in thermoacoustic waves that are only generated from the surface of the imaged sample. The simulation agrees well with the analytical solution when ε_r = 1, but the peak deviates when ε_r = 3. As mentioned earlier, this can be related to the assumptions of normal incidence and plane wave propagation, and to the effect of reflections in the system that are not considered in the analytical formulation.

Figure 7.7: Electromagnetic losses inside the imaged sample over a wide range of σ, a) Simulation (dashed lines), b) Analytical solution (solid lines).
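The analytical estimate of Equations (7.21)-(7.23) can be swept numerically as sketched below. The incident field amplitude and the properties of medium 1 are assumed, and the sample impedance is evaluated with the complex permittivity ε_r ε_0 − jσ/ω; treating ε in Equation (7.23) as the complex permittivity is an assumption made here so that the transmission coefficient falls off for highly conductive samples, as described above.

import numpy as np

# Numerical sweep of the plane-wave loss estimate in Eqs. (7.21)-(7.23). Sketch only:
# incident RMS field set to 1 V/m, medium 1 treated as air, and eta_2 evaluated with
# the complex permittivity (an assumed interpretation of Eq. (7.23)).
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
w = 2 * np.pi * 2.45e9
E0 = 1.0
eta1 = np.sqrt(mu0 / eps0)

def w_loss(sigma, eps_r, depth=0.01, n=4000):
    eps = eps_r * eps0
    alpha = w * np.sqrt(mu0 * eps / 2 * (np.sqrt(1 + (sigma / (w * eps)) ** 2) - 1))  # Eq. (7.22)
    eta2 = np.sqrt(mu0 / (eps - 1j * sigma / w))
    T = abs(2 * eta2 / (eta2 + eta1))                                                 # Eq. (7.23)
    z = np.linspace(0, depth, n)
    return np.sum(sigma * E0 ** 2 * T ** 2 * np.exp(-2 * alpha * z)) * (z[1] - z[0])  # Eq. (7.21)

sigmas = np.logspace(-2, 3, 60)
loss_er1 = np.array([w_loss(s, 1.0) for s in sigmas])
loss_er3 = np.array([w_loss(s, 3.0) for s in sigmas])
# Both curves rise with sigma, peak around a few S/m, and then fall as the skin depth
# shrinks below the 10 mm integration depth, matching the trend plotted in Figure 7.7.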
7.8.3 Effect of EM Pulse Width

In this study, microwave pulses with different pulse widths are transmitted while the simulation environment is kept unchanged. The Gaussian pulse width is changed from 0.3 μs to 0.9 μs. The results are shown in Figure 7.8a, where it can be seen that increasing the pulse width directly increases the wavelength of the generated thermoacoustic signal. This means that decreasing the microwave pulse width directly enhances the resolution of the thermoacoustic images. The simulation is then repeated to reconstruct a complete sample with a longer EM pulse whose standard deviation is 4 microseconds. The results are shown in Figure 7.8b. Comparing these results with the reconstruction results in Figure 7.5d, it can be seen that the spatial resolution of the reconstructed images decreases as the EM pulse width increases, due to the increase of the wavelength of the acoustic signal.

Figure 7.8: The effect of EM pulse width on the reconstructed image, a) Model geometry, b) Image reconstruction with pulse width = 4 µs.

7.8.4 Open-Ended Waveguide vs. Horn Antenna

A TAI system requires an antenna that can handle a high amount of power with high efficiency because the system needs to transfer high-power pulses (larger than 3 kW) to the imaged target. The antenna should also have a relatively narrow near-field beam to concentrate the transmitted power inside the imaged object. Two antennas were chosen for the experiment. The first is a standard 20 dB horn antenna. The second is the WR340 open-ended waveguide. A numerical model was created for the antennas to investigate the field distribution and intensity in the vicinity of the antennas. In both cases, air is used as the propagation medium, and the simulation domain is terminated with a perfectly matched layer. The models are simulated at a frequency of 2.45 GHz with a variable mesh whose maximum element size equals 10 mm. The electric field intensity distribution at a distance of 3 mm from the antenna is shown in Figure 7.9.

Figure 7.9: Electric field distribution near the antenna, a) Horn antenna, b) Open ended waveguide.

The results show that the open-ended waveguide has a higher field intensity in the near-field region, with a maximum value of 800 V/m vs. 160 V/m for the horn antenna. This is due to the distribution of the field over a larger area in the case of the horn antenna. On the other hand, the horn antenna shows a more homogeneous distribution of the electric field than the open-ended waveguide. The simulation also shows that the reflection coefficient of the open-ended waveguide is higher than that of the horn antenna: under the same testing conditions, S11 = -12 dB for the open-ended waveguide vs. -22 dB for the horn antenna.

7.9 Conclusion

Thermoacoustic imaging is a hybrid methodology that combines the advantages of microwave imaging and ultrasonography. This chapter presented the principles of thermoacoustic imaging aided by a hybrid finite element 3D-2D electromagnetic-acoustic model to simulate the multi-wave generation and investigate the imaging capabilities of the TAI system. The acquired thermoacoustic images represent the EM loss distribution inside the imaged target. The signal is maximized for conductivities near 1 S/m, after which the skin depth of the electromagnetic wave decreases to an undesirable length due to the high conductivity of the medium. In the same manner, increasing the electromagnetic frequency generates a stronger acoustic signal due to greater absorption in the medium but also decreases the skin depth of the electromagnetic wave into the medium. The EM pulse width has a direct effect on the acoustic frequency (i.e.
decreasing or shortening the pulse width increases the acoustic frequency and results in higher-resolution images). The time-reversal reconstruction algorithm was used for the reconstruction of the TAI due to its robustness and different case studies were implemented for different target shapes and transducer orientations. 123 CHAPTER 8 THERMOACOUSTIC IMAGING WITH CODED PULSE EXCITATION 8.1 Introduction The imaging process in TAI systems requires high power sources to generate acoustic signals with an adequate signal-to-noise ratio. Typical power sources used in pulsed TAI systems have peak power that ranges from multiple kilowatts [126] to tens of megawatts [127]. On the receiver side, averaging is usually used to reduce the noise power when adopting low peak power sources, but it leads to an increase in the acquisition time; moreover, it is difficult to use averaging when imaging moving objects. Longer RF pulses can also be used to increase the signal-to-noise ratio at the cost of decreasing the spatial resolution of the imaging system. Therefore, a large number of research work was devoted to designing RF sources with high peak power and very short pulses to have a system with high SNR and high spatial resolution [127]. Another approach to improve the SNR is to improve the RF coupling and concentrate the RF power in a small area. Near field, RF excitation is used widely in TAI systems to improve the power coupling of the electromagnetic energy to the imaged target [128]. Antenna arrays were also suggested to concentrate the electromagnetic power in a small area inside the imaged target [129]. Another approach is to use coded excitation to improve the signal SNR by using a correlation receiver. The use of frequency modulated continuous excitation sources with matched filters on the receiver side highly improves the SNR of TAI systems and allows the usage of low peak power sources [105]. The drawback of this approach is that it requires linear high-power amplifiers; therefore, it is not compatible with the pulsed TAI systems. In this Chapter, we propose the use of non-coherent pulse compression (NPC) to improve the SNR of pulsed TAI systems. In this approach, a coded pulse is used to excite the imaged sample and then the received pressure signal is correlated with a template that is related to the power profile of the excitation pulse. The proposed approach does not require the use of linear amplifiers; therefore, it can be easily applied to pulsed TAI systems without major modification. The approach 124 only requires control over the timing of the pulse excitation. The proposed method highly enhances the received signal SNR when compared with the pulsed TAI system with the same peak power and number of averages. A pulsed TAI imaging system was built to test the performance of the proposed approach. The results show that the proposed approach can be used to reduce the acquisition time and reduce the microwave source peak power while still having adequate SNR. The rest of the Chapter is organized as follows: Section 8.2 gives a brief description of the governing equation of TAI signal generations and describes the methods used to apply the noncoherent pulse compression procedure, Section 8.3 explains the approach used to reconstruct the TAI images, Section 8.4 describes the experimental system used to validate the proposed approach, Section 8.5 discusses the experimental results from the imaging phantoms under different excitation scenarios, and Section 8.6 concludes the paper. 
8.2 Theory of NPC Enhanced TAI

8.2.1 Fundamentals of Thermoacoustic Imaging

Thermoacoustic wave generation is an electromechanical phenomenon that results from the thermoelastic expansion of a medium due to an electromagnetic illumination. The thermal expansion results in a series of mechanical (acoustic) waves that are governed by the wave equation:

\nabla^2 p(\vec{r},t) - \frac{1}{c^2(\vec{r})}\frac{\partial^2 p(\vec{r},t)}{\partial t^2} = -\frac{\beta(\vec{r})}{k(\vec{r})\, c^2(\vec{r})}\frac{\partial^2 T(\vec{r},t)}{\partial t^2}    (8.1)

where c(r), β(r), and k(r) are the sound speed, volume expansion coefficient, and isothermal compressibility of the medium, respectively. To obtain this model, we have assumed that the medium density varies slowly in space. p(r, t) is the instantaneous pressure at position r and time t, and T(r, t) is the instantaneous temperature of the imaged object at time t and position r. The left-hand side of the equation represents the acoustic wave equation, while the right-hand side represents the thermoacoustic source. The instantaneous thermoacoustic source depends on the second derivative of temperature in time, while the volume expansion coefficient and the heat capacity are constants that depend on the medium properties. When the thermal confinement condition is met, the source part is equal to

\rho(\vec{r})\, C_V(\vec{r})\, \frac{\partial T(\vec{r},t)}{\partial t} = H(\vec{r},t),    (8.2)

where ρ(r) and C_V(r) are the material density and specific heat capacity of the medium, respectively, and H(r, t) is a heating function that represents the amount of dissipated energy per unit volume and unit time induced by the excitation pulse. By combining these relations, the thermoacoustic equation is rewritten as

\nabla^2 p(\vec{r},t) - \frac{1}{c^2(\vec{r})}\frac{\partial^2 p(\vec{r},t)}{\partial t^2} = -\frac{\beta(\vec{r})}{C_p(\vec{r})}\frac{\partial H(\vec{r},t)}{\partial t}    (8.3)

where C_p(r) = ρ(r) c²(r) k(r) C_V(r) is the specific heat capacity of the medium under constant pressure. The heating function can be written as a product of two separate terms,

H(\vec{r},t) = A(\vec{r})\, I(t)    (8.4)

where A(r) is the absorbed energy per unit volume and time, and I(t) is the temporal envelope of the power of the excitation pulse. Magnetic losses are negligible in nonmagnetic materials; therefore, the total loss is approximated by the sum of conductivity and dielectric loss,

A(\vec{r}) = \sigma(\vec{r})\, |E(\vec{r})|^2 + \omega \varepsilon''(\vec{r})\, |E(\vec{r})|^2,    (8.5)

where σ(r) is the material electrical conductivity, ε''(r) is the imaginary part of the permittivity, and E(r) is the root mean square value of the applied electric field inside the target. The two terms in Equation (8.5) describe the conductivity losses and dielectric losses, respectively, and they dominate the thermal energy generation in the tissue. Under the stress confinement condition, we can express the initial pressure p_0 generated at position r as [130, 105]

p_0(\vec{r}) = \frac{\beta c^2 J(\vec{r})}{C_p},    (8.6)

J(\vec{r}) = \int_0^{\tau} H(\vec{r},t)\, dt,    (8.7)

where J(r) is the total energy absorbed per unit volume at position r during microwave excitation with pulse width τ. This expression indicates that in order to increase the pressure, we can either increase the peak microwave power or increase the pulse width τ. To obtain the solution of Equation (8.1), we can use the Green's function for the wave equation. The time-dependent Green's function satisfies

\left(\nabla^2 - \frac{1}{c^2(\vec{r})}\frac{\partial^2}{\partial t^2}\right) G(\vec{r},t;\vec{r}',t') = -\delta(\vec{r}-\vec{r}')\, \delta(t-t').    (8.8)
Then the pressure wave p(r, t) can be obtained by

p(\vec{r},t) = \int\!\!\int \frac{\beta}{C_p}\, G(\vec{r},t;\vec{r}',t')\, \frac{\partial H(\vec{r}',t')}{\partial t'}\, d\vec{r}'\, dt'.    (8.9)

If c is constant and r ∈ R³, the Green's function is given by

G(\vec{r},t;\vec{r}',t') = \frac{1}{4\pi c\, |\vec{r}-\vec{r}'|}\, \delta\left(|\vec{r}-\vec{r}'| - c|t-t'|\right).    (8.10)

Therefore, the formula for the pressure wave can be obtained as

p(\vec{r},t) = \int \frac{1}{4\pi c\, |\vec{r}-\vec{r}'|}\, \frac{\beta}{C_p}\, \frac{\partial H(\vec{r}',t')}{\partial t'}\bigg|_{t'=t-|\vec{r}-\vec{r}'|/c}\, d\vec{r}'.    (8.11)

8.2.2 Noncoherent Pulse Compression Technique

For signal reconstruction at the receiver end, pulsed TAI systems use averaging to improve the SNR when using low peak power at the microwave source end. Adding two signals increases the signal power by a factor of two while the noise power increases by a factor of √2; therefore, the overall SNR increases by a factor of √2. With pulse compression, when sending a waveform with twice the energy of the original signal, the signal energy increases by a factor of two while the noise power remains constant; consequently, this leads to an increase in SNR by a factor of 2. Therefore, better overall system efficiency is achieved with a faster acquisition speed. Pulse compression is a widely used concept in radar systems, where the pulse energy is spread over a long time span to avoid the need for high peak power sources, and pulse encoding is used to retain fine range resolution [131]. The simplest example of pulse compression is to increase the pulse length of the excitation signal and lower its amplitude. The drawback of such an approach is that the system resolution is reduced due to the increase in the width of the excitation pulse. Correlation of such a pulse results in a triangular function with a scaling factor that equals the length of the rectangular excitation pulse. Therefore, pulse coding is used to improve the correlation properties of the received signal.

Figure 8.1: a) Zero stuffed MPSL25 sequence; b) Mismatched filter for zero stuffed MPSL25; c) Correlation results of MPSL25 with the mismatched filter.

The goal is to create a pulse sequence with a sharp peak at the location of maximum correlation and minimum side lobes elsewhere. One popular choice is the Barker codes due to their low correlation side lobes, which are inversely proportional to the length of the coding sequence (peak to sidelobe ratio (PSL) = N for a sequence with a length of N). The Barker13 sequence is the longest known Barker code, and it is given by

S_B = [+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1]    (8.12)

A single pulse that is coded with Barker13 is shown in Figure 8.2.a. With matched filtering, the received signal is correlated with the same profile as the excitation signal. The result of correlating this signal with a matched filter is shown in Figure 8.2.b. The correlation result shows a PSL of 13 (equal to the length of the coding sequence), which is expected for this bipolar sequence. For a pulsed TAI system, uni-polar codes are needed because the pulsed radio-frequency (RF) source can only alternate between the two states 0 and 1. The EM losses and the generated pressure also depend only on the temporal power profile of the excitation pulse.
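The peak-to-sidelobe figures quoted around Figure 8.2 can be checked with a few lines of Python; the snippet below correlates the bipolar and uni-polar Barker13 codes with their matched filters.

import numpy as np

# Sketch reproducing the peak-to-sidelobe (PSL) argument around Figure 8.2:
# matched filtering of bipolar vs. uni-polar Barker-13 codes (no TAI physics here).
barker13 = np.array([+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1])

def psl(code, template):
    corr = np.correlate(code, template, mode="full")
    sidelobes = np.delete(np.abs(corr), corr.argmax())
    return corr.max() / sidelobes.max()

print(psl(barker13, barker13))            # 13.0 for the bipolar matched filter
unipolar = (barker13 + 1) // 2            # the pulsed RF source can only emit 0 or 1
print(psl(unipolar, unipolar))            # 1.8, the degradation discussed below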
Using the uni-polar version of Barker13 shown in Figure 8.2.c leads to a degradation in the correlation performance and lowers its peak to sidelobe ratio, as shown in Figure 8.2.d. The figure shows that the PSL ratio decreases from 13 to 1.8, which can distort the detected signal, especially in the case of neighboring defects. This low peak to sidelobe ratio is directly related to the unipolar nature of the excitation signal. Two methods can be used to address this degradation effect: mismatched filters and complementary Manchester coded sequences [132].

Figure 8.2: a) A single pulse coded with the bipolar Barker13 sequence. b) The results of correlating a signal coded with bipolar Barker13 with its matched filter. c) A single pulse coded with the uni-polar Barker13 sequence. d) The results of correlating a signal coded with uni-polar Barker13 with its matched filter.

8.2.2.1 Mismatched Filter

In this approach, the uni-polar version of the excitation code is sent without any changes. On the receiver side, the signal is correlated with a weighted function that is designed to minimize the amplitude of the side lobes while maintaining a large peak at zero. In our experimental setup, we opted to use minimum peak sidelobe (MPSL) codes as the pulse coding sequence because they offer longer sequences than Barker codes (the longest known Barker code has a length of 13). An MPSL25 sequence is given by

S_{MPSL25} = [1001001010100000011100111].    (8.13)

A more practical version of this code can be achieved by using zero stuffing to ensure a constant pulse width across the sequence. This approach ensures that the resolution is governed by the pulse width that is used in the sequence. The zero stuffed MPSL25 is given by

S_{MPSL25z} = [10000010000010001000100000000000001010100000101010].    (8.14)

A graphical representation of the zero stuffed MPSL25 is shown in Figure 8.1.a. A mismatched filter with a length of 150 (three times the length of the excitation sequence) is shown in Figure 8.1.b. The cross-correlation of the MPSL25 code with the mismatched filter template is shown in Figure 8.1.c. The results show highly suppressed side lobes when compared to the matched filtering results (PSL = 56.4).
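The construction of the zero-stuffed code of Equation (8.14) and one possible mismatched-filter design are sketched below in Python. The least-squares filter shown here is a generic design choice and is not claimed to be the exact 150-tap filter of Figure 8.1.b.

import numpy as np

# Sketch: zero-stuffed MPSL25 code of Eq. (8.14) plus a generic least-squares
# mismatched filter (one common design, assumed here for illustration).
mpsl25 = np.array([int(b) for b in "1001001010100000011100111"])
code = np.zeros(2 * mpsl25.size)
code[::2] = mpsl25                       # zero stuffing: every bit is followed by a 0

L, N = 150, code.size                    # filter three times as long as the code, as in the text
A = np.zeros((N + L - 1, L))
for j in range(L):                       # convolution matrix: column j = code delayed by j samples
    A[j:j + N, j] = code
d = np.zeros(N + L - 1)
d[(N + L - 1) // 2] = 1.0                # desired response: a single peak, zero sidelobes
w_mis, *_ = np.linalg.lstsq(A, d, rcond=None)

out = np.convolve(code, w_mis)           # compressed output of the uni-polar code
# The sidelobes of `out` are reduced relative to matched filtering of the same
# uni-polar code, qualitatively reproducing the behavior shown in Figure 8.1.c.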
8.2.2.2 Complementary Manchester Coded Pairs

Complementary pairs have unique correlation characteristics where the summation of the autocorrelation functions results in zero side lobes [133]. This property is associated with using a bipolar signal of -1 and 1; therefore, using a unipolar sequence degrades the correlation performance. One solution to this problem is to use Manchester coding to convert the code to a unipolar sequence, where every zero is converted to 01 and every one is converted to 10 [134]. A unipolar version of a complementary sequence pair with a length of 26 is given by

S_{C1} = [00011000101101010110010000],    (8.15)
S_{C2} = [00001001101000001011100111].    (8.16)

The Manchester coded sequences are given by

S_{C1M} = [0101011010010101100110100110011001101001011001010101],    (8.17)
S_{C2M} = [0101010110010110100110010101010110011010100101101010].    (8.18)

Graphical representations of S_{C1M} and S_{C2M} are shown in Figure 8.3.a and Figure 8.3.b, respectively. On the receiver side, each received signal is correlated with its bipolar version (S_{AMB}, S_{BMB}), and the final correlation R is then calculated by summing the two correlations together:

S_{AMB} = 2 S_{AM} - 1, \qquad S_{BMB} = 2 S_{BM} - 1    (8.19)

R = \rho_P(S_{AM}, S_{AMB}) + \rho_P(S_{BM}, S_{BMB})    (8.20)

where ρ_P refers to the Pearson correlation between two signals. A graphical representation of the summation of the correlation functions is shown in Figure 8.3.c. The figure indicates high suppression of the side lobes (PSL = 52). The figure also shows the creation of two large negative peaks in the vicinity of the main correlation peak, which can lead to distortion in the final reconstructed TAI image.

Figure 8.3: a) First waveform of the Manchester coded complementary sequence; b) Second waveform of the Manchester coded complementary sequence; c) Summation of the correlation function.

8.3 Image Reconstruction

Various approaches have been proposed for image reconstruction from thermoacoustic signals [135, 136, 110, 137, 111]. The performance of these algorithms varies depending on the sound speed properties and the observation surface geometry [138]. The time-reversal method is the least restrictive. Due to its simplicity and robustness, a time reversal-based method is used to reconstruct the image in this implementation. Time reversal is based on the reciprocity of the wave equation, which means that we can re-transmit the measured signal on a closed surface in time-reversed chronology to uniquely reconstruct its source. According to Huygens' principle, if the sound speed is constant and the spatial dimension is odd, the wave leaves a bounded domain in a finite time for any initial source with bounded support; in other words, there is a time T when the pressure wavefields become zero inside the observation domain for any t > T. Therefore, we can set the pressure wave to be zero at time T, and we then use the measured data on the observation surface to uniquely reconstruct the initial source in the reversed time series; in this case, the initial source can be reconstructed exactly. When the sound speed is not constant or the spatial dimension is even, the pressure wave does not vanish at any time in the observation domain; the local decaying results given in [138] indicate that, in this case, the time-reversal method can still give a good approximation. To improve the TAI image quality, an advanced method based on the Neumann-series approach has been further developed [139, 140].

Figure 8.4: Effect of interpolation of the measurement data. a) The simulated geometry with 50 transducers evenly distributed on a circle. b) The reconstruction without interpolation of the measurement data. c) The reconstruction with interpolation of the measurement data. d) The detailed comparison at the line shown in a).

To obtain an optimal reconstruction, the time-reversal method requires full-aperture information. However, this is not applicable in practice. In the experiment, we only have a limited number of transducers distributed on the observation surface. To obtain a better reconstruction from limited data, we need to interpolate the measurement data onto a continuous measurement surface and then reconstruct the image using the time-reversal method.
To show this, a simulation with a limited number of transducers evenly distributed on the measurement surface is performed. The reconstruction process is implemented by using the Matlab k-Wave toolbox [124]. Figure 8.4 shows the reconstruction results of the two methods. Figure 8.4.a shows the simulated geometry with the positions of the transducers indicated. Figure 8.4.b and Figure 8.4.c show the reconstructions from the time-reversal method without and with interpolation of the measurement data, respectively. Figure 8.4.d shows a comparison of the reconstructions along the line shown in Figure 8.4.a. The results show that the edges of the original image are blurred when the image is reconstructed without interpolation of the data. Reconstruction with interpolation of the data sharpens the edges and improves the contrast significantly, as shown in the figures, since the data set is enriched by interpolating the measurement data onto the measurement surface.

8.4 Measurement Setup

Figure 8.5: a) Schematic of the experimental setup; b) Photographic picture of the experimental setup.

A thermoacoustic imaging system in its simplest form mainly consists of an RF excitation source and an ultrasound sensor that collects the generated ultrasound waves. In the experiment, we built a computed tomography system to produce 2D sectional images of the imaged samples. The thermoacoustic wave was excited by an Epsco PG5KB pulsed microwave source with 5 kW maximum excitation and an operating frequency of 2.45 GHz. The pulse width can be varied from 0.3 to 30 μs. An open-ended waveguide was used to couple the RF signal to the imaged target because of its higher power density in the near field when compared to a horn antenna [126]. The waveguide is connected to a glass tank that is filled with mineral oil. Mineral oil acts as a coupling medium for the ultrasound waves from the imaged target to the ultrasonic transducer. Mineral oil also has a low dielectric constant and low electromagnetic losses in the frequency range of the excitation signal. These factors are necessary for reducing the reflections from the excitation antenna (open-ended waveguide) and increasing the contrast of the imaged object with respect to the imaging medium. The ultrasound signals were collected by using an Olympus V306 transducer with a center frequency of 2.25 MHz to capture the high-frequency components from the 0.5 μs excitation pulses. The ultrasound transducer was connected to an Olympus pulse receiver with a low noise amplifier with 60 dB of gain. The amplifier is connected to a DSOX4052A oscilloscope as the data acquisition device. The oscilloscope also has a pulse generator that is responsible for providing the excitation waveform for the RF source and the required clock for the data acquisition process. During the scanning process, the imaged sample is attached to a rotating arm that is controlled by a stepper motor and placed on top of the open-ended waveguide. During the acquisition process, a pulsing trigger is sent to start the RF excitation and the ultrasound acquisition process simultaneously.
After finishing each acquisition, the motor is rotated to collect data from multiple angles around the imaged sample and provide 360-degree coverage. A diagram illustrating the system's main components is shown in Figure 8.5.a, and a picture of the experimental setup is shown in Figure 8.5.b.

Figure 8.6: Experimental system validation. a) Picture of the scanned sample; b) Cross sectional CT image of the scanned sample; c) Reconstructed TAI image without interpolation; d) Reconstructed TAI image with interpolation.

To validate the experimental setup, we created a test sample with known internal defects. The sample consists of a rectangular cheese body with two agar inclusions, as shown in Figure 8.6.a. This sample was chosen because it has little material density variation, while significant contrast is provided by the difference in EM losses. A CT image of the sample is shown in Figure 8.6.b, and it shows a homogeneous material density across the imaged sample. For the TAI image, the test sample was excited with a 0.5 μs microwave pulse with 4.5 kW peak power, and the averaging was set to 1024 on the acquisition side. The data were collected at 50 angles around the imaged target by rotating the target in front of the ultrasound transducer in steps of 7.2 degrees. The data were then fed to a time-reversal algorithm to reconstruct the structure of the imaged target [124]. The reconstructed TAI image without interpolation of the measurement data is shown in Figure 8.6.c. Figure 8.6.d shows the reconstruction with interpolation of the measurement data. Since the agar inclusions have higher water content than the cheese, they experience a higher amount of EM losses and generate stronger TAI signals. As a result, the TAI image shows higher pixel values for the regions where the agar inclusions are located due to the higher amount of EM absorption, as mentioned earlier.

8.5 Results and Discussion

8.5.1 NPC on 1-D Signal

To test the performance of the proposed approaches, a cheese sample with a single agar core was used to create a test sample with known signal characteristics. A picture of the imaged sample is shown in Figure 8.7.a. The black arrow indicates the direction of the data collection. At the beginning, the sample is imaged with the single pulse approach to obtain a reference of the shape of the expected TAI signal. The source power was set to 3 kW, and the averaging on the oscilloscope was set to 2048 to reduce the noise in the acquired signal. A 1-D signal acquired from the side of the sample is shown in Figure 8.7.b. Region 1 represents the RF coupling to the ultrasound (US) transducer. Region 2 represents the effect of the RF pumping on the US amplifier. Region 3 indicates the front wall of the sample. Region 4 represents the agar inclusion. Region 5 represents the signal from the second wall of the sample. Finally, region 6 represents the pressure signal from the reflections inside the imaged sample. We notice that there are no continuous intensity values in the received signal due to the limited bandwidth of the transducer; such a band-limited transducer acts as a bandpass filter on the received signal. The experiment was repeated with the MPSL25 and complementary code sequences. The raw signal acquired with the MPSL25 excitation is shown in Figure 8.8.a. The resulting signal after applying the mismatched filter is shown in Figure 8.8.b.
The figure shows that the signal retains the same features that exist in the original signal with a lower amount of noise. The raw signals acquired with the Manchester coded complementary sequences are shown in Figure 8.9.a and Figure 8.9.b, respectively. The summation of the signals after applying the matched filtering is shown in Figure 8.9.c. The figure shows that the main regions of the signal can be identified, but with a large amount of distortion.

Figure 8.7: 1-D TAI signal with single pulse excitation, a) Cheese sample with agar core, b) 1D signal acquired at 3 kW excitation and 2048 averages, c) 1D signal acquired at 3 kW excitation and 2 averages.

Figure 8.8: 1-D TAI signal with MPSL25 excitation and a mismatched filter, a) Raw signal acquired with 2048 averages, b) Filtered signal with 2048 averages, c) Filtered signal with 2 averages.

Figure 8.9: 1-D TAI signal with Manchester coded complementary pairs excitation, a) First raw signal with 2048 averages, b) Second raw signal with 2048 averages, c) Summation of the correlation function with 2048 averages, d) Summation of the correlation function with 2 averages.

To test the performance against noise, we reduced the averaging from 2048 to 2. The direct acquisition results show that it is difficult to identify any of the main regions in the single pulse signal, as shown in Figure 8.7.c. The mismatched filtering output shows improved SNR performance (11 dB), and the regions can be identified, as shown in Figure 8.8.c. With the complementary Manchester coding, we can see a slight improvement over the single pulse approach, but the performance is lower than that of the mismatched filter approach, as shown in Figure 8.9.d.

8.5.2 NPC on 2-D Images

In this section, we show the effect of using the proposed approach on the reconstructed images. The experimental system described in Section 8.4 is used to acquire 2D images of test samples under different configurations. The Manchester coded complementary pairs approach requires a longer time than the direct transmission approach because it requires the use of two excitation signals. This approach also requires a slightly more complex source because it requires the RF source to transmit pulses with different pulse widths. On the other hand, the mismatched filter approach uses a fixed pulse width; therefore, it only requires control of the pulse start time. The Manchester coded complementary pairs approach also introduces a larger amount of distortion; therefore, we only used the mismatched filter approach for the 2D comparison. The test sample is the same sample that was used for the 1D comparison in Section 8.5.1 and is shown in Figure 8.7.a. Similar to the 1-D comparison, the sample is imaged with the single pulse approach to obtain a reference of the shape of the expected TAI image. The images were collected with 3 kW peak power and 256 averages. The 2D TAI image of the imaged sample with single pulse excitation is shown in Figure 8.10.a. The image acquired with the MPSL25 coded excitation is shown in Figure 8.10.b. A 1-D comparison of the reconstructed images along a horizontal line that passes through the center of the agar core is shown in Figure 8.10.c. The imaging results show a successful reconstruction of the edges of the imaged sample, and the agar core can be identified in the middle of the reconstructed frames.
Peak signal to noise ratio (PSNR) is used to provide a quantitative measure of the reconstruction image quality when comparing two images. Given two images I and I' with dimensions of m × n, the PSNR is calculated as follows:

\[ \mathrm{PSNR} = 10 \cdot \log_{10}\left(\frac{M^2}{\mathrm{MSE}}\right) \tag{8.21} \]

\[ \mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[I(i,j) - I'(i,j)\right]^2 \tag{8.22} \]

where M is the maximum possible value of a pixel in the image, I is the reference image, and I' is the noisy image. The reference image was created by repeating the imaging with a single pulse and 2048 averages to ensure low noise levels, as shown in Figure 8.10.d. Table 8.1 shows the experiment parameters and a comparison of the PSNR from the single and MPSL25 excitation. The results show that the MPSL25 image has a higher PSNR value than the single pulse image with an improvement of 10.555 dB, which agrees with the observations from the direct comparison of the images.

Figure 8.10: 2-D TAI images of the inspected object: a) Reconstruction with single pulse excitation, b) Reconstruction with MPSL25 excitation, c) 1-D comparison of the reconstructed images along a horizontal line that passes through the center of the agar core, d) Reference image.

Table 8.1: Quantitative comparison of reconstruction with single and MPSL25 excitation
Method  | # averages | Power (kW) | PSNR (dB)
Single  | 256        | 3          | 31.58
MPSL25  | 256        | 3          | 42.1350

8.5.2.1 Reduction of the acquisition time

The proposed approach can be used to increase the acquisition speed by reducing the system dependence on signal averaging. To demonstrate this feature, an experiment was performed with the same aforementioned system configuration, but the averaging was reduced from 256 to 2. The 2-D reconstruction results with the single pulse excitation are shown in Figure 8.11.a. The results show that it is difficult to identify the boundaries of the imaged sample, and we can only identify the agar core in the middle of the image. The 2-D reconstruction results for the MPSL25 excitation are shown in Figure 8.11.b. The reconstruction results show that the image still retains all the information about the imaged object. A 1-D comparison of the reconstructed images along a horizontal line that passes through the center of the agar core is shown in Figure 8.11.c. Table 8.2 shows the experiment parameters and a comparison of the PSNR from the single and MPSL25 excitation. The results show that the image with MPSL25 excitation continues to have a higher PSNR value than the single pulse excitation with a difference of 16.88 dB.

Figure 8.11: Comparison of reconstruction with reduced acquisition time with 0.5 µs at 3 kW and 2 averages: a) Single pulse, b) MPSL25, c) 1-D comparison of the reconstructed image at the center of the defect.

Table 8.2: Quantitative comparison of reconstruction with reduced acquisition time
Method  | # averages | Power (kW) | PSNR (dB)
Single  | 2          | 3          | 12.5797
MPSL25  | 2          | 3          | 29.4600

8.5.2.2 Reduction of the system peak power

With a constant excitation pulse width and fixed acquisition speed, we can reduce the system peak power to enable the usage of low peak power RF sources. To demonstrate this feature, the system power was reduced from 3 kW to 1.5 kW and the averaging was kept at 256. The 2-D reconstruction results for the single pulse and MPSL25 excitation are shown in Figure 8.12.a and Figure 8.12.b, respectively.
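For reference, the PSNR values quoted in Tables 8.1 through 8.3 follow Equations 8.21 and 8.22 directly; a minimal sketch, in which the array names are placeholders for the reference and reconstructed frames:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float) -> float:
    """PSNR in dB per Equations 8.21-8.22: 10*log10(M^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(max_value ** 2 / mse)

# Example usage (placeholder arrays):
# psnr(reference_image, reconstructed_image, max_value=float(reference_image.max()))
```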
A 1-D comparison of the reconstructed images along a horizontal line that passes through the center of the defect is shown in Figure 8.12.c, and the quantitative comparison is shown in Table 8.3. The results show that the image reconstruction with MPSL25 excitation continues to have better performance than the single pulse excitation, and the PSNR value is improved by 11.32 dB.

Figure 8.12: Comparison of reconstruction with reduced peak power excitation with 0.5 µs pulse width at 1.5 kW: a) Single pulse, b) MPSL25, c) 1-D comparison of the reconstructed image at the center of the defect.

Table 8.3: Quantitative comparison of reconstruction with reduced excitation power
Method  | # averages | Power (kW) | PSNR (dB)
Single  | 256        | 1.5        | 13.7996
MPSL25  | 256        | 1.5        | 25.1262

8.5.2.3 Increasing the system spatial resolution while maintaining high SNR

Spatial resolution in TAI systems is highly dependent on the pulse width of the excitation signal. The generated TAI signal represents an integration over a sphere with a volume that is proportional to the width of the excitation pulse. Therefore, reducing the pulse width increases the system spatial resolution but at the same time reduces the amount of energy injected into the system, which results in a reduced SNR. Here, we demonstrate that with the coded excitation, we can increase the imaging spatial resolution without sacrificing the system SNR in low-power pulsed TAI systems. To validate that experimentally, a new sample with a cylindrical agar body and two metallic inclusions was created. Figure 8.13.a shows a schematic of the structure of the imaged target. The agar sample has a diameter of 27.3 mm, and the inclusions are metal wires with a diameter of 0.6 mm. In the beginning, the sample was imaged with a high peak power excitation of 3 kW and a 0.5 µs pulse to serve as a reference for the low-power results, and the imaging results can be seen in Figure 8.13.b.

Figure 8.13: a) Structure of the agar sample, b) TAI image with 0.5 µs single pulse and 3 kW excitation.

Figure 8.14: Comparison of reconstruction with different pulse widths: a) 2 µs single pulse with 2 kW excitation, b) 1 µs single pulse with 2 kW excitation, c) 0.5 µs single pulse with 2 kW excitation, d) 0.5 µs MPSL25 coded pulses with 2 kW excitation, e) 1-D comparison of the reconstructed images along a line that passes through both of the wires.

In this image, we can identify both the wire inclusions, and we can identify the boundary of the agar sample. The power was then reduced to 2 kW to simulate a low-power RF system. We started the demonstration by performing imaging with a 2 µs pulse, and the imaging results are shown in Figure 8.14.a. We notice that we can only identify a single wire in the image, but with an enlarged footprint when compared to the reconstruction in Figure 8.13.b. We then reduced the pulse width to 1 µs, and the imaging results can be seen in Figure 8.14.b. Reducing the pulse width improves the system's spatial resolution, and both of the wires can be identified but with an increased amount of noise. Further reducing the pulse width to 0.5 µs, as shown in Figure 8.14.c, leads to a deterioration in the image SNR, where it is difficult to identify the edges of the agar sample due to a large amount of noise.
Finally, the coded excitation with MPSL25 was used with a 0.5 µs pulse width, and the results are shown in Figure 8.14.d. The results show that the proposed approach can improve the system's spatial resolution while maintaining adequate SNR. A 1-D comparison along a line that passes through the two inclusions is shown in Figure 8.14.e. The figure includes a comparison between the image in Figure 8.13.b as a reference (Ref.) and the reconstruction results in Figure 8.14.b, Figure 8.14.c, and Figure 8.14.d.

8.6 Conclusion

In this chapter, we proposed to use noncoherent pulse compression to improve the signal-to-noise ratio in pulsed TAI systems. The proposed approach can be applied to current pulsed TAI systems without the need for linear radio frequency amplifiers or major system modifications. Experimental results showed that the proposed method significantly enhances the received signal SNR when compared with a pulsed TAI system operating at the same peak power and number of averages. Mismatched filters and complementary Manchester coded sequences were evaluated in this chapter, and the results showed that mismatched filters provided better performance, shorter acquisition time, and less complex RF source requirements than complementary Manchester coded sequences. 1-D experimental results with mismatched filters showed that the system SNR was improved by 11 dB when using a code sequence with a length of 25 elements. The received signal retained features similar to the original signal but with a higher SNR. To validate the imaging capability of the proposed approach, we built a cylindrical imaging system. 2-D experimental results showed that the proposed approach could be used to reduce the acquisition time of the TAI signal, employ a microwave source with low peak power, or improve the spatial resolution of low peak power systems while maintaining an adequate level of SNR.

CHAPTER 9
INSPECTION OF BUTT FUSION JOINTS IN POLYETHYLENE PIPELINES

9.1 Introduction

Polyethylene pipe has been used extensively for water, sewer, and natural gas transmission for over five decades [141]. Pipe joints are used frequently to create long pipe sections, with butt fusion being the most commonly used method. Concerns arise from the fact that there are currently no reliable NDE methods to evaluate the integrity of those joints [141]. The current weld inspection procedures and guidelines depend mainly on visual inspection. Although visual inspection has proven to be a successful method for the inspection of PE butt-fusion welds, several limitations remain. There are cases where the joints pass all the visual inspection guidelines and still fail due to low levels of contamination or low tie molecule density. The project seeks to identify, evaluate, develop, and implement multiple nondestructive evaluation (NDE) techniques that can aid in the detection and characterization of defects and damage in reference butt-fusion specimens. The project targets the detection of weak joints that are affected by low tie molecule density across the joint interface. The motivation for this work is that evidence of field failures due to these causes is being seen across the gas and water distribution networks. There is little evidence that current NDE methods can reliably detect low bonding strength across the joint interface.
Results in this field and reported research efforts typically focus on the detection of artificially produced volumetric defects to identify poor fusions; however, a recent study has indicated that there is no evidence suggesting these volumetric defects influence the strength of the joints [23]. To improve the reliability of defect measurements and improve existing NDE inspection techniques, this chapter evaluates emerging NDE methods for fast and cost-effective defect detection.

9.2 State-of-the-Art Butt-Fusion Inspection Methods

Different NDE methods have been proposed to evaluate the integrity of butt-fusion welds. Ultrasonic testing (UT) based techniques have been widely proposed for this task due to their sensitivity to changes in material mechanical properties. UT-based techniques use the difference in the speed of sound as a function of modulus and density, and the reflections of acoustic waves at the interfaces between different materials. Time-of-flight diffraction, phased arrays, and chord ultrasound are some of the reported methods targeting the evaluation of butt-fusion weld integrity. For time-of-flight diffraction, waves are sent through the joint and the time of flight of the received waves is recorded. The waves are sent at an angle and received at the same angle from the other side of the bead; therefore, wedges are needed to transmit the wave at the specified angle, as shown in Figure 9.1.a [2]. This method enables volumetric inspection without raster scanning, but a material with low attenuation and slow sound speed is needed to design efficient wedges [141]. For phased arrays, UT waves are sent from one side of the joint and the beam is steered electronically, as shown in Figure 9.1.b (angled wedges are also required). The defects are detected by recording the reflected waves from the defect boundary. In the case of chord UT, the waves are transmitted on an axial plane of the joint that is positioned on a chord of a cross-section of the pipe, as shown in Figure 9.1.c. Currently published research shows good performance for UT-based techniques in detecting volumetric defects in the welded region [141, 142]. The main challenges for UT inspection of polyethylene can be summarized as follows: low acoustic velocity (2.26-2.31 km/s) and high attenuation (3.2-4.3 dB/cm at 2.25 MHz). PE material also exhibits a small amount of anisotropy when tested in different directions [143]. Another factor that needs to be considered is that the acoustic velocity is dependent on temperature; therefore, welds must be cooled to ambient temperature before performing the inspection process [143]. Other reported studies suggest the use of electromagnetic-based methods [5]. In this case, the welds are tested by sending electromagnetic waves through a probe and measuring the amount of reflected electromagnetic signal, as shown in Figure 9.1.d [144]. This type of probe is highly sensitive to changes in the dielectric properties of the material and can penetrate easily through the pipe wall since polyethylene has very small attenuation of electromagnetic waves. The reported results indicate that electromagnetic (EM)-based methods are highly sensitive to defects in the heat affected zone (HAZ), but they suffer from a high number of false positives [145].

Figure 9.1: Ultrasound-based inspection methods, a) Time of flight diffraction [2], b) Phased array UT [3], c) Chord UT [4], d) EM-based scanner [5].
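As a rough illustration of why reflection-based EM probes can respond to the fusion region, the normal-incidence reflection coefficient at an air-dielectric interface depends only on the relative permittivity of the dielectric. The sketch below uses assumed, illustrative permittivity values; they are not measurements of polyethylene or of the HAZ reported in the cited studies.

```python
import numpy as np

def reflection_coefficient(eps_r: float) -> float:
    """Normal-incidence reflection coefficient for a wave in air hitting a lossless dielectric."""
    return (1.0 - np.sqrt(eps_r)) / (1.0 + np.sqrt(eps_r))

# Assumed illustrative permittivities: bulk PE vs. a slightly altered heat affected zone.
for eps_r in (2.30, 2.35):
    print(f"eps_r = {eps_r:.2f} -> Gamma = {reflection_coefficient(eps_r):+.4f}")
```

The small difference between the two reflection coefficients hints at why such probes demand careful liftoff control and why false positives are a concern.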
9.3 Structure of Materials in the Heat Affected Zone

Butt fusion involves the simultaneous heating of the ends of the two pipes to be joined until a molten state is attained on each contact surface. The two surfaces are then brought together under controlled pressure for a specific cooling time, and a homogeneous fusion joint is formed. The quality of a joint is affected by many factors that include, but are not limited to, the welding temperature, the melting point of the plastic, contamination, and welding pressure. Due to the large thermal gradient, there is a possibility of row nucleation at the joined surfaces as different parts of the weld cool at different rates in the HAZ. The heat distribution in the HAZ is shown in Figure 9.2.a, and an approximate description of the joint region structure is shown in Figure 9.2.b [146, 147]. Joints created under optimum conditions will only show zones 2, 3, 4, and 5, while the appearance of zone 1 is an indication that the weld is unsatisfactory. Electrically, trap sites for electrons or ions exist on the boundary between the crystalline and amorphous parts of polyethylene [6]. Mechanical drawing or isothermal crystallization of the bulk from its melt provides increased trap sites due to a change of morphology in the crystalline lamellae. Therefore, the HAZ will have different dielectric properties from the rest of the pipe, and electromagnetic-based methods can be a good candidate for the detection of weak joints.

Figure 9.2: a) Temperature distribution during the final stage of welding, b) Expected microstructures after completion of the final stage of welding [6].

Figure 9.3: Dogbones and their corresponding locations.

9.4 Reference Butt Fusion Specimens

A group of samples with varying joint quality was fabricated by GTI to serve as a reference during the sensor development. To obtain ground-truth data for the butt-fusion strength, dogbone specimens were punched at eight locations along the circumferential direction at the fusion interface. High-Speed Low-Temperature Tensile (HSLTT) testing was conducted to evaluate the strength of the eight dogbones, which is used to determine the strength of the fusion joint. In the following sections, those eight dogbones are denoted using their radial angles relative to position one as 0, 45, 90, 135, 180, 225, 270, and 315. Similarly, dogbone samples in between are represented as 22, 67, 112, 157, 202, 247, 292, and 337. The extracted dogbones and their corresponding locations are shown in Figure 9.3. The fabricated samples are divided into four groups. The first group represents pipes with perfect joints with a testing success rate of 100% (8/8 good dogbones). The second group represents pipes that have bad joints with a testing success rate of 0% (0/8 good dogbones). The third and fourth groups represent pipes that have moderate joint quality with a testing success rate that ranges from 62.5% to 25% (5/8 to 2/8 good dogbones). The samples provided to MSU were replicas of these samples that had been created under the same conditions to ensure the same joint quality. A picture of the provided test samples and a table of their specifications are shown in Figure 9.4.a and Figure 9.4.b. In addition to the pipe sections, 36 dogbone samples (14 good fused dogbones, 14 bad fused dogbones, and 8 non-fused dogbones as a reference) were provided by GTI to serve as a reference during the sensor development.

Figure 9.4: a) Test samples, b) Test sample specifications.
The dogbone samples also provide two-sided access to the tested sample.

Figure 9.5: Perkin Elmer Quantum GX Micro CT Imaging System [7].

9.5 Micro CT Scanning

X-ray computed tomography is an imaging method that uses very high-frequency electromagnetic radiation with a wavelength on the order of one angstrom and photon energies on the order of thousands of electron volts [148]. Due to this very small wavelength, Micro CT can create very high-resolution 3D inspections of scanned objects. The X-ray photons are sent through the scanned material, and the final image is formed by measuring the amount of attenuation of those photons. The x-ray attenuation coefficient is a function of the material's properties. Two of the most important properties that affect the attenuation coefficient are the material density and the atomic number Z. Therefore, the Micro CT scanning results are expected to provide an idea about the material density distribution across the fused joint region. The amount of attenuation suffered by the x-ray radiation is governed by the Beer-Lambert law. Therefore, for a uniform material with thickness l and attenuation coefficient µ, the intensity of the outgoing beam I_out is related to the intensity of the incoming beam I_in by the following relation:

\[ I_{out} = I_{in}\, e^{-\mu l} \tag{9.1} \]

This simplified relation is only valid when the material is homogeneous and the x-ray photons in the beam have the same energy. During the sample evaluation, a commercial micro-CT system, the Perkin Elmer Quantum GX Micro CT Imaging System, was used to scan the test samples. A picture of the system is shown in Figure 9.5. The system produces a dense 3D point cloud with a size of 512 × 512 × 512 points for each scanned sample; therefore, the spatial resolution varies according to the size of the scanned object. A calibration sample with known defects was tested in the CT system to evaluate the system's performance in detecting flaws in polyethylene. The sample is an injection molded test vessel (cap) that was provided by GTI. A picture of the sample is shown in Figure 9.6.a. Two slices from the Micro CT scan are shown in Figure 9.6.b. The marked regions indicate some of the flaw locations in the scanned sample. Since the initial testing results were promising, we proceeded with scanning the rest of the samples with the fusion joints.

Figure 9.6: Scanning of the reference plastic cap, a) Picture of the tested sample, b) Two Micro CT slices with marked defects.

9.5.1 Pipe Sections

For the pipe sections, the samples were scanned with a spatial resolution of 0.144 mm, a kVp of 90, and 240 seconds of scanning time. Since the system provides a 3D point cloud, the data are converted into 2D to simplify the visualization and data analysis. The conversion is performed by calculating a 4 mm average of the material density in the radial direction within the pipe wall (the pipe wall thickness is 5.6 mm). This transformation calculates the average density of the particles within the pipe wall only and does not include the beads. The process starts by choosing a slice in the vicinity of the heat affected zone, as shown in Figure 9.7.a. A polar transformation is used to convert the CT image into cylindrical coordinates, as shown in Figure 9.7.b. A global threshold is calculated by using the image histogram, and a mask representing the pipe wall is created, as shown in Figure 9.7.c. The created mask is then eroded in order to preserve only the internal 4 mm inside the pipe, as shown in Figure 9.7.d.
The mask is multiplied by the original CT image in order to extract the desired part of the data from the image, as shown in Figure 9.7.e. The average value of the CT image is calculated along the ρ axis to get a 1D line from each CT slice. The 1D lines are stitched together to form a 2D average image, as shown in Figure 9.7.f. This process guarantees that the algorithm results represent the data inside the pipe and automatically deals with the oval shape of the pipe.

Figure 9.7: Conversion to the polar domain.

By careful examination of the images, we noticed a systematic error in all the slices where the intensity of the pixels increases as we go from the bottom to the top, as shown in Figure 9.8.a. A line plot showing the intensity change along the y-direction is shown in Figure 9.8.b. By examining the rest of the samples, this change in the signal was found to be a systematic error in the CT machine that is not related to a change in the material density of the scanned object. To compensate for this effect, the intensity of the image is averaged along the z-direction to form a 1D line representing the average value along the φ direction, which is then subtracted from each row in the image. The CT average images for sample 56AB before and after error compensation are shown in Figure 9.9.a and Figure 9.9.b, respectively.

Figure 9.8: Irregularities from the CT machine, a) Single CT slice, b) 1-D plot at x = 38 mm.

Figure 9.9: Irregularities correction with curve fitting, a) Original CT average data, b) Corrected CT data.

The cleaned data for the first and second batches and for the third batch are shown in Figure 9.10.a and Figure 9.10.b, respectively. In the scanning results, the HAZ can be clearly identified (at z = 20 mm) because it has lower values than the rest of the pipe section. The scanning results also show clear boundaries that separate the HAZ from its neighboring regions.

Figure 9.10: a) CT data of the first and second batches, b) CT data of the third batch.

9.5.2 Dogbones

The dogbone samples were scanned under the same configuration as the pipe samples but with a longer scanning time of 840 seconds in order to decrease the amount of noise in the CT images. The scanning results of the non-fused dogbones are shown in Figure 9.11. The scanning results of the good dogbones are shown in Figure 9.12.a and Figure 9.12.b. The scanning results of the bad dogbones are shown in Figure 9.13.a and Figure 9.13.b. As in the case of the pipe sections, we can notice that the heat affected zone can be identified because it has a lower value than the rest of the pipe (in the vicinity of z = 38 mm).

Figure 9.11: Micro CT scanning of non-fused samples for angles: 0, 45, 90, 135, 180, 225, 270, 315 from top to bottom.

Figure 9.12: a) Micro CT scanning of good samples for angles: 0, 45, 90, 135, 180, 225, 270, 315 from top to bottom, b) Micro CT scanning of good samples for angles: 67, 112, 157, 247, 292, 337 from top to bottom.

Figure 9.13: a) Micro CT scanning of bad samples for angles: 0, 45, 90, 135, 180, 225, 270, 315 from top to bottom, b) Micro CT scanning of bad samples for angles: 67, 112, 157, 247, 292, 337 from top to bottom.

9.6 Microwave Frequency Scanning

Microwave signals are electromagnetic signals with frequencies between 300 MHz and 300 GHz, corresponding to vacuum wavelengths between 1 m and 1 mm, respectively [8]. Electromagnetic waves at these frequencies can penetrate deeply inside dielectric materials due to their low EM losses and small dielectric constant [8].
When a wave travels toward a boundary, it can be fully reflected back into the same medium, pass through the medium without any reflection, or be partially reflected at the boundary. The reflection and transmission are influenced by the wave impedance of the material, its geometry, its size, and the parameters of its constituents. In the transmission case, as the signal travels into a material, the signal maintains the same frequency, but the wave velocity and wavelength change. The variation of the velocity and wavelength implies that a certain phase change must take place. Additionally, the material absorbs some of the energy in the signal, which causes the amplitude to change. Each material is characterized by a unique, complex dielectric constant εr relative to free space or a vacuum. The real part of the dielectric constant is a measure of the material's ability to store the incident electric energy. The imaginary part of the dielectric constant represents the loss factor and indicates the material's ability to absorb the incident electric energy. Commonly, there are three regions around a microwave antenna or sensor, as explained in Figure 9.14.a:

• Near-field region: the region is dominated by reactive (stored) fields
• Radiating near-field region: the region is dominated by both reactive and radiating fields
• Far-field region: the region is dominated by the radiating field, where plane waves exist

Microwave testing can be performed in either the near field or the far field. The near-field approach uses simple probes such as open-ended waveguides and coaxial lines, whereas the far-field approach requires an antenna to focus the microwave energy. Far-field testing does not offer good spatial resolution due to the large wavelength at microwave frequencies. Focusing lenses are often used to remedy this problem. In this study, we opt to use near-field microwave scanning sensors because they provide a high spatial resolution that is dependent on the probe geometry. Using far-field approaches would require much higher frequencies to achieve a similar spatial resolution.

Figure 9.14: a) Field regions around an antenna [8], b) A schematic of a near-field inspection system.

Figure 9.15: Microwave scanner for butt fusion pipelines with the metamaterial sensor attached to it.

A typical schematic of a near-field microwave scanning system is shown in Figure 9.14.b. The system is designed to send an EM wave at a specific frequency and then monitor the reflections from the probe. In this study, three different near-field microwave probes are used, namely: a coaxial cable probe, a split ring resonator, and an open-ended waveguide. The probes are attached to a rotary scanning platform that has been developed with the capability to produce 1D data along the z-direction for each scanned angle. A picture of the setup is shown in Figure 9.15.

9.6.1 Coaxial Cable

The probe is a coaxial cable with a short tip that extends from its end in a configuration similar to a monopole antenna. The local interaction of the scanned material with the electromagnetic fields near the probe tip results in a change in the probe impedance and, therefore, a change in the amplitude and phase of the reflected signal from the probe.

9.6.1.1 Pipe Sections

The scanning platform was evaluated by scanning two pipe sections with no fusion region. The scanner is set to scan 35 mm in the z-direction and perform a 100-degree rotation. The scanning results from the probe are shown in Figure 9.16.
The scanning results of the two samples show that the sensor is sensitive to the liftoff distance from the probe, and the intensity changes in the images are related to the fluctuation in the diameter of the scanned pipes. In order to reduce the effect of the liftoff distance, the beads of the joints were removed for samples 56CD, 58CD, 60CD, and 62CD. The other samples were left intact to check the possibility of finding the weak fusions with the beads present. The scans were performed by scanning a 40 mm distance in the neighborhood of the beads with a spatial resolution of 0.5 mm. The probe scanned 360 degrees with a spatial resolution of 3.6 degrees. The location of the beads is always centered in the middle at z = 0. Initial postprocessing was performed by subtracting the mean of the sensor readings along the z-direction for each angle in order to compensate for the effect of the change in liftoff distance.

Figure 9.16: Scanning results of pipe sections with the coaxial cable probe.

Figure 9.17 shows the effect of mean subtraction in enhancing the EM scanning images for sample 57AB. The post-processed scanning results for the first and second batches are shown in Figure 9.18. The postprocessing improves the image contrast, and the bead region can be distinguished better at z = 20 mm for samples 56AB, 58AB, 60AB, and 62AB. For the samples with removed beads (56CD, 58CD, 60CD, 62CD), the HAZ effect can still be seen, especially in 58CD, 60CD, and 62CD, but it has less contrast due to the reduced effect of the liftoff distance. To have a more thorough investigation, the beads of the second batch (56AB, 58AB, 60AB, 62AB) were removed, and the scan was repeated with the results shown in Figure 9.19. The third batch pipes were scanned without removing the beads due to some concerns that the bead removal process at MSU may have affected the destructive testing results of the first and second batches. The scanning results for the third batch with intact beads are shown in Figure 9.20. The bead region is clearly visible in these scans at z = 20 mm. For some of the samples, even the structure of the beads can be identified, as in the case of 63AB and 63CD.

Figure 9.17: Effect of subtracting the mean value to compensate for the fluctuation in liftoff distance for sample 57AB, a) Original data, b) Data after post-processing.

Figure 9.18: Coaxial cable scanning results of the pipe sections after post-processing for the first and second batches with intact beads.

9.6.1.2 Dogbones

The dogbones presented more controlled samples where the liftoff fluctuations are more consistent across different dogbone samples. Sixteen dogbone samples were scanned with the coaxial probe by performing a 10 mm line scan at the middle of each dogbone following the dotted line in Figure 9.21.a. Figure 9.21.b shows a 1D line scan comparison between the good and bad dogbone samples. In this case, the sensor is affected by the thickness of the dogbone, which increases as we move away from the bead, and by the liftoff distance due to the geometry of the sample.

Figure 9.19: Coaxial cable scanning results of the second batch with removed beads after post-processing.

9.6.2 Open-Ended Waveguide

The probe is a WG16 rectangular waveguide with an open end. The field near the open end of the probe locally interacts with the material of the scanned sample and results in different amounts of reflection for different materials.
The open-ended waveguide has a wider aperture than the coaxial cable, which means that it can integrate the effect of the dielectric constant over a larger local area.

9.6.2.1 Pipe Sections

A scanning procedure similar to the coaxial cable scanning was followed for the open-ended waveguide. Initial postprocessing was performed by subtracting the mean of the readings along the z-direction for each angle. The post-processed scanning results for the first and second batches are shown in Figure 9.22 (first batch with no beads and second batch with beads). The scanning results show less sensitivity to the bead locations, but we can still notice a slight dip in the sensor reading around z = 20 mm for the samples with beads (56AB, 58AB, 60AB, 62AB). The scanning results for the second batch after removing the beads are shown in Figure 9.23. In this case, it is hard to locate the heat affected zone due to the removal of the beads. The scanning results for the third batch are shown in Figure 9.24. The existence of the beads can be identified more clearly for this batch around z = 20 mm.

Figure 9.20: Coaxial cable scanning results with intact beads after post-processing for the third batch.

The scanning system was upgraded during the scanning of the third batch to use the configuration shown in Figure 9.25. The new setup has the ability to perform phase and magnitude measurements simultaneously. This capability was added to the system by splitting off a part of the reflected power and measuring it by using an active power detector device. The scanning process is performed over 60 mm in the vicinity of the beads, and then the pipe is rotated to cover 360 degrees around the pipe. A postprocessing procedure similar to the one used for single-channel scanning was followed to reduce the effect of the liftoff distance. The scanning results for the third batch after postprocessing are shown in Figure 9.26 and Figure 9.27. The results show that the bead region can be identified around z = 30 mm. The contrast of the bead region is different across the samples, and this can be related to the different liftoff distances during the scanning process.

Figure 9.21: Scanning of the dogbone samples with the coaxial cable probe.

9.6.2.2 Dogbones

The open-ended waveguide sensor was not used for scanning the dogbones due to its large aperture.

9.6.3 Split Ring Resonator

The idea of having a material with a negative refractive index was proposed for the first time by the Soviet physicist Victor Veselago in 1968 [149]. Materials may have a negative refractive index when both the permittivity and the permeability of the material are negative. Veselago hypothesized that all the laws of electromagnetic wave propagation would change when dealing with such materials. Realizing such a material naturally has proven to be impossible, at least until now [150]. It wasn't until 30 years after Veselago's work that the first material with a negative refractive index was artificially fabricated. This material was realized using artificial periodic homogeneous structures that are called metamaterials. Metamaterials are composites that consist of subwavelength periodic structures with different shapes, sizes, and orientations that determine their properties. The dimensions of these periodic structures should be a fraction of the targeted wavelength in order to achieve the required material behavior. Therefore, the properties can be changed by modifying the structure shape, size, and orientation.
This modification leads to a change in the overall permittivity and permeability of the structure. These negative values can lead to the introduction of unusual material properties like negative refraction, backward wave propagation, and slow and fast waves. One of the popular methods to create a negative refractive index is by using composite surfaces made of split ring resonators (SRRs) and metallic strips, where the SRRs are responsible for the negative permeability and the metallic strips are responsible for the negative permittivity [150]. Metamaterials have been used in different shapes for various applications, including 1-D metamaterial transmission lines and two-dimensional metasurfaces. They can even be used to create 3-D structures, as in the case of MRI lenses. In this chapter, we focus on the role of metamaterials as microwave sensing elements. Split ring resonators (SRRs) are metamaterial near-field microwave probes that have a highly concentrated EM field in the vicinity of the scanning probe. The resonant nature of the device makes it highly sensitive to changes in the dielectric properties in its vicinity. An image of an SRR sensor is shown in Figure 9.28. The sensitive part of the sensor is represented by the resonance circles in the middle of the sensor. The RF signal is fed from one of the SMA ports (the sensor is symmetric) while the other port is left open. The sensor is interrogated in a manner similar to the other near-field probes, like the coaxial cable and the open-ended waveguide, where it depends on measuring the magnitude and phase of the waves reflected from the sensor tip.

Figure 9.22: Open-ended waveguide scanning results for the pipe sections after post-processing.

Figure 9.23: Open-ended waveguide scanning results with removed beads after post-processing for the second batch.

9.6.3.1 Pipe Sections

This type of sensor was introduced later in the project; therefore, only the third batch was scanned with this sensor. The scan was performed in a configuration similar to the coaxial cable probe scan and with a similar postprocessing procedure. The scanning results of the third batch after postprocessing are shown in Figure 9.29. The results are mainly dominated by the effect of standing wave patterns, but a drop in the sensor reading can be noticed at z = 20 mm, and it is related to the existence of the beads.

9.6.3.2 Dogbones

The sensor has a large footprint; therefore, it was not possible to perform line scans similar to the coaxial probe case. Instead, another experiment was performed where three-point measurements were acquired by using the sensor shown in Figure 9.30.a. The first measurement is on the bead (OB) and the second measurement is away from the bead (AB). The third measurement was taken in the air to serve as a reference for the other two measurements. The test results are shown in Figure 9.30.b, where GJ means good joint and BJ means bad joint. The results show that there is a shift in the reading when moving from air to polyethylene. The results also show that there is a shift in the reading when moving between two different dogbone samples and when comparing the OB and AB readings for both the bad and the good samples.

Figure 9.24: Open-ended waveguide scanning results for the pipe sections after post-processing for the third batch.

9.7 Capacitive Sensors

The sensor is a two-plate capacitor that can be fabricated in different geometries to direct the electric field to the region of interest. The sensor is mainly sensitive to the material dielectric
The sensor is mainly sensitive to the material dielectric 164 Figure 9.25: Microwave setup with added amplitude reflection measurement. changes in the region between the plates where the electric fields are concentrated. A schematic of the capacitive sensor schematic is shown in Figure 9.31. Contrary to the microwave near field probes, this sensor works at a much lower frequency at 5 MHz, and instead of performing 2-D raster scanning of the joint region, this sensor needs a single point for each scanned angle. The experimental setup of the system is shown in Figure 9.32.a. The setup uses a two-plate capacitor that is connected to a resonant circuit, and the results are collected in a similar manner to the near field microwave probes. The results of scanning a good sample (56AB) are shown in Figure 9.32.b. The fluctuations in the signal are expected to be a result of the pipe diameter fluctuation. A three-point measurement test similar to the one performed with SRR was repeated with a new capacitive sensor that is shown in Figure 9.33.a. The testing results are shown in Figure 9.33.b. The results show that there is a shift in the reading when moving from air to polyethylene. The results also show that there is a shift in the reading when moving between different dogbone samples. In this test, it can be seen that it is not possible to differentiate between the reading on or away from the bead for both good and bad samples. This result can be related to the reduced sensitivity of the capacitive sensor because it is working at a much lower frequency than SRR (4.5 MHz vs 2.4 GHz). 165 Figure 9.26: Dual channel open-ended waveguide scanning results for the pipe sections third batch for samples (57AB, 57CD,59AB, 59CD). 9.8 Optical Transmission Scanning (OTS) Optical transmission scanning is an NDE method that uses visible laser light to inspect the material. The light is passed through the sample, and the amount of the attenuation in the light beam is monitored from the other side of the sample in a configuration similar to the schematic shown in Figure 9.34. The system’s working principle is similar to X-ray, and the attenuation of the light beam can be explained by the Beer-Lambert law. The light transmission through the medium is given by 𝐼𝑟 = 𝐼𝑖 𝑒 −𝛼𝑑 (9.2) • 𝐼𝑟 =recieved light intensity from the photo detector • 𝐼𝑖 =incident light intensity from the laser source 166 Figure 9.27: Dual channel open-ended waveguide scanning results for the pipe sections third batch for samples (61AB, 61CD,63AB, 63CD). Figure 9.28: Split ring resonator sensor. • 𝛼=attenuation factor • 𝑑=optical path length This simplified relation is only valid when the material is homogeneous. The light transmission is affected by refractive index, absorption, and scattering inside the tested material; therefore, OTS 167 Figure 9.29: SRR scanning results for the pipe sections of the third batch after post-processing. is sensitive to changes in material structure. 9.8.1 Pipe Sections The OTS scanning system was not used to scan the pipe sections because the system requires access to both sides of the scanned sample. 9.8.2 Dogbones The system uses a red laser with a wavelength of 690 nm and a laser dot with a diameter of 1.5 mm. Initial tests were conducted with a power of 4 mW. Four samples were tested, two were good, and two were bad, as shown in Figure 9.35.a. 
The sequence of the scanned samples from top to bottom is good, bad, good, and bad. The scanning results are shown in Figure 9.35.b. A 1D comparison of line measurements acquired in the middle of each dogbone sample is shown in Figure 9.36. The results show that the good samples have less attenuation than the bad samples in the joint region. G0 means a good sample at angle zero, and B0 means a bad sample at angle zero. This naming scheme will also be followed to identify the samples in the forthcoming scanning results. In order to achieve higher spatial resolution images, a pinhole was added to the system so that the system could only detect photons arriving perpendicularly from the laser source. The laser power was raised to 30 mW in order to compensate for the power lost due to the pinhole. The scanning results of the good and bad dogbone samples are shown in Figure 9.37, Figure 9.39, Figure 9.41, and Figure 9.43. A 1D comparison of a 2.25 mm average at the center of each dogbone is shown in Figure 9.38, Figure 9.40, Figure 9.42, and Figure 9.44. The 1D results were aligned in order to be able to correlate the intensity of the received light at the HAZ. The scanning results and a 1D comparison of a 2.25 mm average at the center of each dogbone for the non-fused dogbones are shown in Figure 9.45 and Figure 9.46. The OTS scans with the pinhole provide better spatial resolution. These higher-resolution images enable us to look for potential patterns that can differentiate between the samples instead of depending only on comparing the values of the laser beam intensity at the HAZ.

Figure 9.30: Three-point measurement with a metamaterial sensor, a) Metamaterial sensor, b) Sensor measurement over a wide range of frequencies.

Figure 9.31: Capacitive sensor schematic.

Figure 9.32: Capacitive sensor, a) Capacitive sensor prototype, b) Scanning results of 56AB.

Figure 9.33: Three-point measurement with a capacitive sensor, a) Capacitive sensor prototype, b) Sensor measurement over a wide range of frequencies.

Figure 9.34: Optical transmission scanning schematic.

Figure 9.35: OTS scanning for samples B0, G0, B45, G45 from left to right, a) Picture of the scanned samples, b) OTS scanning results.

Figure 9.36: 1D comparison of the intensity of the laser beam at the middle of the scanned samples for B0, G0, B45, G45.

Figure 9.37: OTS scanning of dogbones for G0, B0, G45, B45, G90, B90, G135 from left to right.

Figure 9.38: A 1D comparison of a 2.25 mm average at the center of each dogbone for G0, B0, G45, B45, G90, B90, G135.

Figure 9.39: OTS scanning of dogbones for G180, B180, G225, B225, G270, B270, B315.

Figure 9.40: A 1D comparison of a 2.25 mm average at the center of each dogbone for G180, B180, G225, B225, G270, B270, B315.

Figure 9.41: OTS scanning of dogbones for G67, B67, G112, B112, G157, B157, G247, B247.

Figure 9.42: A 1D comparison of a 2.25 mm average at the center of each dogbone for G67, B67, G112, B112, G157, B157, G247, B247.

Figure 9.43: OTS scanning of dogbones for G247, B247, G292, B292, G337, B337.

Figure 9.44: A 1D comparison of a 2.25 mm average at the center of each dogbone for G247, B247, G292, B292, G337, and B337.

Figure 9.45: OTS scanning of the non-fused dogbone samples for angles 0, 45, 90, 135, 180, 225, 270, and 315.

Figure 9.46: A 1D comparison of a 2.25 mm average at the center of each non-fused dogbone for angles 0, 45, 90, 135, 180, 225, 270, and 315.
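Before moving to the data analysis, note that Equation 9.2 can be inverted to express an effective attenuation coefficient from a measured intensity ratio, which is the quantity the 1-D OTS comparisons implicitly track. The numbers below are illustrative assumptions only, not values measured from these scans.

```python
import numpy as np

def effective_attenuation(incident_mw: float, received_mw: float, path_mm: float) -> float:
    """Effective attenuation coefficient (1/mm) from Beer-Lambert: I_r = I_i * exp(-alpha * d)."""
    return -np.log(received_mw / incident_mw) / path_mm

# Assumed example: 4 mW incident, 1.1 mW detected through roughly 6 mm of material.
print(f"alpha = {effective_attenuation(4.0, 1.1, 6.0):.3f} 1/mm")
```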
9.9 Data Analysis

This section discusses the post-processing and feature extraction procedures that were followed to analyze the data. The current analysis results are for the Micro CT data only, while the other data analyses are still in progress.

9.9.1 Segmentation of HAZ

The heat affected zone, the location of the beads, and their width vary with the change of the scanning angle, as shown in Figure 9.47.a. The CT data provides a reference with both the 3D geometrical information and the density of the material inside the scanned sample. Therefore, it has been used to create a mask to extract the HAZ by segmenting the beads.

Figure 9.47: a) Single slice of the polar CT data, b) Thresholded mask that defines the pipe edges.

The segmentation process is performed as follows:

• Calculate the location of the edges of the pipe wall by using a slice that is 8 mm away from the beads.
• Calculate a 2.16 mm average above the pipe surface for 360 degrees and then correct for the irregularities in the average image.
• Calculate a global threshold from the average image histogram and perform binarization followed by image filling and cleaning.
• The mask has dimensions of 286 × 720 while the EM scanning files are limited to 80 × 100; therefore, interpolation is used to upsample the scanning data to match the mask dimensions.
• The mask is used as follows:
  – Direct multiplication with the data to define the HAZ and extract its average.
  – As an alpha layer to create an overlay for the HAZ (the HAZ is set to have four times the magnitude of the rest of the image).

Figure 9.48: A schematic explaining the region used for averaging.

Figure 9.49: a) Averaged image, b) Averaged image after correction.

Figure 9.50: a) Histogram of the averaged image, b) Final overlay mask.

Figure 9.51: a) Original EM data, b) Overlay of the CT image over the EM data.

9.9.2 Pearson Correlation Coefficient

In statistics, the Pearson correlation coefficient is a measure of the linear correlation between two random variables, X and Y. The values of the coefficient range from −1 to 1, where 1 represents a total positive linear correlation, 0 represents no linear correlation, and −1 represents a total negative linear correlation. Pearson's correlation coefficient is the covariance of the two variables divided by the product of their standard deviations. For a pair of random variables (X, Y), the correlation coefficient ρ_{X,Y} can be calculated as follows:

\[ \rho_{X,Y} = \frac{\mathrm{cov}(X, Y)}{\sigma_X \sigma_Y} \tag{9.3} \]

where cov is the covariance, σ_X is the standard deviation of X, and σ_Y is the standard deviation of Y.

9.9.3 Distance Correlation

Distance correlation or distance covariance is a measure of dependence between two paired random vectors of arbitrary, not necessarily equal, dimension. The population distance correlation coefficient is zero if and only if the random vectors are independent. Thus, distance correlation measures both linear and nonlinear association between two random variables or random vectors. This is in contrast to Pearson's correlation, which can only detect the linear association between two random variables.
The distance correlation of two random variables X and Y is obtained by dividing their distance covariance dCov(X, Y) by the product of their distance standard deviations, and it is given by

\[ \mathrm{dCor}(X, Y) = \frac{\mathrm{dCov}(X, Y)}{\sqrt{\mathrm{dVar}(X)\, \mathrm{dVar}(Y)}} \tag{9.4} \]

9.9.4 Data Analysis of CT Data

9.9.4.1 Statistical Analysis of X-Ray Data

In order to have a more thorough analysis of the results, the statistical properties of the averaged 2D data of the region around the joint on the pipe sections were studied. The first study was performed by averaging all the densities along the 360° angles and applying a moving average with a length of 2 mm in the z-direction. The results are shown in Figure 9.52.a. The second study applies the same averaging process for all the angles but exchanges the moving average for a moving standard deviation with a length of 2 mm along the z-direction. The results are shown in Figure 9.52.b.

Figure 9.52: Statistical analysis of the entire pipe sections, a) Moving mean, b) Moving standard deviation.

The results of the moving average show that the good joint (62AB) cannot be clearly differentiated from the other joints when compared with other samples at the highlighted joint region. For the moving standard deviation, these good samples have the highest value at the highlighted joint region, but they are not well separated from the other samples. The results of this study for the third batch are shown in Figure 9.53. The results from the third batch are not conclusive, and this could be related to the fact that there is no 100% good pipe section in the third batch.

Figure 9.53: Statistical analysis of the entire pipe sections for the third batch, a) Moving mean, b) Moving standard deviation.

9.9.4.2 Geometrical Features of the X-Ray Data

In this study, we discuss the geometrical features that were extracted from the x-ray data. For this purpose, three x-ray images with a spatial resolution of 200 micrometers are compared, as shown in Figure 9.54. The first image is for a non-fused dogbone, the second image is for a good dogbone, and the third image is for a bad dogbone. From the scanning results, it can be seen that the HAZ is well separated by a boundary from the rest of the pipe material. The two main features that have been noticed can be summarized as follows:

• The width of the HAZ (W): the width of the HAZ is larger in the case of the bad samples.
• The joint thickness (T): the joint region thickness is larger in the case of the good samples.

The same conclusion can be generalized for all the Micro CT scanning results. Figure 9.55.a and Figure 9.55.b show the values of T and W for the scanned dogbone samples. For some of the samples, we couldn't measure the exact size of the HAZ because the boundaries were not clear.

Figure 9.54: X-ray image, a) Non-fused dogbone, b) Good fused dogbone, c) Bad fused dogbone.

A scatter plot of the two features together is shown in Figure 9.56. Figure 9.56.a shows a scatter plot with the original predicted labels. The results show two well-separated clusters for good and bad samples. Figure 9.56.b shows a scatter plot with the actual testing labels provided by GTI.

Figure 9.55: a) Width and height of the HAZ of the good samples, b) Width and height of the HAZ of the bad samples.

The plot shows the original two clusters with the predicted labels but with some errors in predicting some of the good joints. This plot indicates that this type of classification could properly detect the existence of bad joints, but it could also produce some false alarms.
Figure 9.56: Scatter plot of the width and thickness of the HAZ with initial labels, a) With predicted labels, b) With actual labels.

One of the other cases that this classification does not account for is when an excessive amount of pressure is applied during the joining process. The high amount of pressure pushes all the heated molten material out of the joint region and results in a weak fusion.

9.9.4.3 Direct Comparison with Ground Truth

In this section, we analyze the CT scans of the pipe sections with the ground truth testing data from GTI. Multiple features have been studied, including the average density in the heat affected zone and the width of the beads. The analysis starts by enhancing the contrast of the HAZ in the CT image by using the segmented bead mask mentioned earlier. The contrast was enhanced by dividing the image by a factor of 4 and then overlaying the HAZ on top of it by using the mask. Statistical analysis of the second batch is shown in Figure 9.57. The overlaid CT images are shown in Figure 9.57.a. A direct correlation of the ground truth data and the average value of the density in the HAZ is shown in Figure 9.57.b. The titles of these plots indicate the values of the Pearson correlation (P) and distance correlation (D) between the ground truth and the average value in the heat affected zone. In order to mitigate the possibility of any alignment errors during the scanning process, the data was shifted and sometimes flipped in order to achieve maximum correlation between the data and the ground truth, as shown in Figure 9.57.c. The values at the top of the plots indicate the Pearson correlation (P) and distance correlation (D); the amount of shift is indicated by Sh in degrees, and F is a flag indicating whether the flipping operation was performed. Similar statistical analyses for the third batch are shown in Figure 9.58. Figure 9.59.a and Figure 9.60.b show a comparison of the ground truth with the width of the beads for each pipe. Figure 9.59 shows the correlation results with the original data, while Figure 9.60.b shows the correlation results with shifted data. The results show that the external geometrical features of the beads do not reflect the actual status of the joint. This can mainly be seen in sample 56AB, where we have an ideal-looking joint with narrow beads and a very shallow notch while having a weak dogbone. On the other hand, sample 62AB has wide beads and no weak fusion region.

Figure 9.57: Statistical analysis of the CT results for the second batch, a) Overlaid 2D scan, b) 1D comparison of the ground truth with the average values of the HAZ, c) Correlation results using shifted data.

Figure 9.58: Statistical analysis of the CT results for the third batch, a) Overlaid 2D scan, b) 1D comparison of the ground truth with the average values of the HAZ, c) Correlation results using shifted data.

Figure 9.59: Statistical analysis to check the effect of the bead width on the joint quality for the second batch, a) 1D comparison of the ground truth with the width of the beads, b) Correlation results using shifted data.

Figure 9.60: Statistical analysis to check the effect of the bead width on the joint quality for the second batch, a) 1D comparison of the ground truth with the width of the beads, b) Correlation results using shifted data.
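The P, D, Sh, and F values quoted in Figures 9.57 through 9.61, and again in the electromagnetic analyses that follow, come from the alignment procedure described above: the circumferential profile is circularly shifted, optionally flipped, and scored against the ground truth with the Pearson (Equation 9.3) and distance (Equation 9.4) correlations. A minimal sketch of that scoring, with generic array names rather than the actual data files, is given below.

```python
import numpy as np

def distance_correlation(x: np.ndarray, y: np.ndarray) -> float:
    """Distance correlation per Equation 9.4 using double-centered pairwise distance matrices."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = max((A * B).mean(), 0.0)
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)) if dvar_x > 0 and dvar_y > 0 else 0.0

def best_alignment(sensor: np.ndarray, ground_truth: np.ndarray):
    """Search circular shifts (and an optional flip) that maximize the Pearson correlation."""
    best = (-np.inf, 0, False)
    for flipped in (False, True):
        trace = sensor[::-1] if flipped else sensor
        for shift in range(trace.size):
            p = np.corrcoef(np.roll(trace, shift), ground_truth)[0, 1]
            if p > best[0]:
                best = (p, shift, flipped)
    return best  # (Pearson P, shift in samples, flip flag F)

# Synthetic example with 100 samples per revolution (3.6 degrees per step).
rng = np.random.default_rng(1)
gt = rng.standard_normal(100)
sensor = np.roll(gt, 11) + 0.3 * rng.standard_normal(100)
p, sh, f = best_alignment(sensor, gt)
d = distance_correlation(np.roll(sensor, sh), gt)
print(f"P = {p:.2f}, Sh = {sh * 3.6:.1f} deg, F = {f}, D = {d:.2f}")
```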
Figure 9.61: Statistical analysis to check the effect of the bead width on the joint quality for the third batch, a) 1D comparison of the ground truth with the width of the beads, b) Correlation results using shifted data.

9.9.5 OTS

In this section, we discuss the optical transmission scanning results. Direct plotting of the average value at the middle of each dogbone is shown in Figure 9.62.a. We notice a large difference in the mean value between the samples, and this is related to slight differences in the sample thickness and some differences in the dogbone geometry. To compensate for that, the mean of the region around the heat affected zone was subtracted from each scanning line, and the results are shown in Figure 9.62.b. The results do not show a noticeable difference between the good and bad samples at the HAZ location. Since the samples have some variation in thickness and geometry, we decided to focus more on the changes within each sample. Figure 9.63.a shows the results of convolving a smoothed gradient kernel ([1 1 1 -1 -1 -1]) with the data to detect the existence of sudden changes in the HAZ. Smoothing was applied to reduce the effect of noise. Figure 9.63.b shows the results of using a moving standard deviation window with a 2 mm width. In both figures, no clear feature that differentiates between the good and bad samples was found.

Figure 9.62: OTS data analysis, a) Direct plotting of the OTS results, b) OTS results with corrected mean value.

Figure 9.63: OTS data analysis, a) Smoothed gradient, b) 2 mm moving standard deviation.

9.9.6 Data Analysis of the Electromagnetic Data

In this section, we discuss and analyze the scanning data from the electromagnetic sensors and compare them with the ground truth data provided by GTI from destructive testing.

9.9.6.1 Coaxial Cable Data of the Second Batch

The contrast was enhanced by using the width of the beads as a mask and dividing the rest of the image by a factor of four. The results of overlaying the CT mask on the EM data can be seen in Figure 9.64.a. A 1D comparison of the interpolated ground truth values and the average of the sensor readings in the HAZ is shown in Figure 9.64.b. The values at the top of the plots indicate the Pearson correlation (P) and distance correlation (D) between the ground truth and the average value of the sensor readings in the heat affected zone. In order to mitigate the possibility of any alignment errors during the scanning process, the data were shifted and sometimes flipped in order to get the maximum correlation between the data and the ground truth, as shown in Figure 9.64.c. The values at the top of the plots indicate the Pearson correlation (P) and distance correlation (D); the amount of shift is indicated by Sh in degrees, and F is a flag indicating whether the flipping operation was performed. The correlation results for the second batch with the beads present are shown in Figure 9.64. The correlation results for the second batch with the removed beads are shown in Figure 9.65. The correlation results for the third batch with the beads present are shown in Figure 9.66.a and Figure 9.66.b. When comparing sample 56AB, we notice the existence of a peak in the sensor readings associated with the existence of the defect at angle 225.
When comparing sample 56AB, we notice a peak in the sensor readings associated with the existence of the defect at an angle of 225 degrees. A thirty-nine-degree shift leads to a maximum correlation between the sensor readings and the ground truth. For sample 58AB, we notice that the entire sensor readings have fallen below -0.4 mV, and we also notice that the HAZ is wider than that of 56AB. For sample 60AB, we notice that the fall in the sensor readings is associated with a fall in the testing energy. For sample 62AB, we notice that the sensor reading is higher than -0.3 mV for all the scanned angles. These results indicate that the existence of weak fusions is associated with a decrease in the sensor readings, with the exception of 56AB.

The correlation results for the second batch with the beads removed are shown in Figure 9.65. In a similar manner to the scanning with the beads, we notice that the existence of the defect in 56AB is associated with an increase in the sensor reading with a small shift of 32.5 degrees. The scanning results of sample 58AB show no obvious features of the weak fusions. For sample 60AB, maximum correlation was achieved by flipping the signal and applying a small shift of -12.5 degrees (347.5 degrees); this may be related to flipping the pipe during the scanning process. The scanning results of sample 62AB show a weak correlation between the sensor readings and the actual ground truth.

The correlation results for the third batch with the beads present are shown in Figure 9.66 and Figure 9.67. Sample 57AB shows that the existence of the defective region is associated with an increase in the sensor readings. For sample 57CD, the direct correlation is very weak, while maximum correlation can be achieved by flipping the sensor data and applying a shift of 31.5 degrees. The correlation results of the rest of the pipes are weak, and no conclusive criteria can be used to identify the pipes from the scanning data.

Figure 9.64: Statistical analysis of coaxial cable results for the second batch with beads, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

Figure 9.65: Statistical analysis of coaxial cable results for the second batch with removed beads, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

Figure 9.66: Statistical analysis of coaxial cable results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

Figure 9.67: Statistical analysis of coaxial cable results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

9.9.6.2 Waveguide Data of the Second Batch

This section presents the analysis of the open-ended waveguide data, using analysis tools and schemes similar to those used for the coaxial data. The overlaid images and the correlation results for scanning the pipes with intact beads are shown in Figure 9.68. When examining sample 56AB, it can be noticed that the drop in the testing energy is associated with a drop in the sensor readings. The scanning results of sample 58AB show no correlation with the actual testing data. The scanning results of sample 60AB show a drop in the sensor readings associated with the drop in the actual testing data, but with a -48.5 degree shift. When we compare the scanning results of sample 62AB, we notice a noticeable drop in the sensor readings that differentiates them from the rest of the scanned pipes.
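In all of these 1-D comparisons, the destructive-test values (one per dogbone location) are interpolated onto the dense angular grid of the scans before the correlations are computed. A minimal sketch of that interpolation step, with hypothetical variable names and assuming the dogbone angles are sorted and wrap around 360 degrees, is:

    import numpy as np

    def interpolate_ground_truth(dogbone_angles_deg, dogbone_values, scan_angles_deg):
        # Periodic linear interpolation of sparse destructive-test values onto the
        # angular grid of the EM scan (angles assumed sorted and within [0, 360)).
        a = np.concatenate([dogbone_angles_deg - 360.0,
                            dogbone_angles_deg,
                            dogbone_angles_deg + 360.0])
        v = np.tile(dogbone_values, 3)
        return np.interp(np.mod(scan_angles_deg, 360.0), a, v)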
The data analysis of the scanning results of the third batch of pipes with intact beads is shown in Figure 9.70 and Figure 9.71. By examining sample 57AB, we notice that the region of the bad dogbones is associated with an increase in the sensor reading. The same conclusion can be drawn for sample 57CD, where we notice an increase in the sensor readings in the region of the bad samples. The scanning results of the rest of the batch show a poor correlation between the scanning data and the actual testing results.

Figure 9.68: Statistical analysis of open-ended waveguide results for the second batch with beads, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

Figure 9.69: Statistical analysis of open-ended waveguide results for the second batch with no beads, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

Figure 9.70: Statistical analysis of open-ended waveguide results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

Figure 9.71: Statistical analysis of open-ended waveguide results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

9.9.6.3 Dual Channel Open-Ended Waveguide

In this section, we discuss the scanning results obtained with a dual-channel open-ended waveguide. The overlaid scanning results and the statistical correlation results are shown in Figures 9.72, 9.73, 9.74, and 9.75.

Figure 9.72: Statistical analysis of the amplitude of the dual channel waveguide measurement results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

For sample 57AB, the existence of the defects at angles 180 and 270 degrees is associated with a decrease in the sensor readings for both phase and magnitude. For the rest of the scan, there is no clear relationship between the scanning results and the ground truth.

Figure 9.73: Statistical analysis of the amplitude of the dual channel waveguide measurement results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

Figure 9.74: Statistical analysis of the phase of the dual channel waveguide measurement results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

Figure 9.75: Statistical analysis of the phase of the dual channel waveguide measurement results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

9.9.6.4 Split Ring Resonator

In this section, we discuss the scanning results obtained with a split-ring resonator (SRR) sensor. The overlaid scanning results and the statistical correlation results are shown in Figure 9.76 and Figure 9.77. The overlaid images and the correlation results indicate a poor correlation between the scanning data and the ground truth data. These results may indicate that the sensor is more sensitive to surface deformations than to the subsurface weak fusion defects.
Figure 9.76: Statistical analysis of SRR results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

Figure 9.77: Statistical analysis of SRR results for the third batch, a) Overlaid 2D scan, b) Correlation results using original data, c) Correlation results using shifted data.

9.9.7 Summary

• The observations from analyzing the second batch are as follows:
– For the coaxial cable, the existence of a defect led to a decrease in the sensor readings, and this can be clearly seen in samples 58AB and 62AB. Shifting the data improves the correlation results, which can be related to experimental errors like pipe flipping and errors in defining the zero angle before starting the scanning process. Large shifts (>40 degrees) were ignored because they have no physical meaning. When the beads were removed, the performance of the sensor deteriorated, which may indicate a relation between the physical bead size and shape and the status of the joint.
– For the open-ended waveguide, the existence of weak fusion leads to a decrease in the amplitude of the sensor readings, and this can especially be noticed in samples 56AB and 60AB. The scanning results of the pipes without beads show a poor correlation between the scanning data and the actual testing results, which may again indicate a relation between the bead geometry and weak fusion.
• For the third batch, the observations can be summarized as follows:
– Coaxial cable: All the samples showed a weak correlation with the ground truth, with the exception of sample 57AB, where the sensor readings show a negative correlation with the ground truth data.
– Open-ended WG: Samples 57AB and 57CD show a negative correlation with the sensor data, where the existence of weak fusion causes higher sensor readings. The rest of the scanning results showed a weak correlation with the ground truth.
– Dual-channel WG: Most of the scanning results are positively correlated with the ground truth in either amplitude or phase, or sometimes both.
– Split ring resonator: The correlation and direct comparison results showed that the SRR scans are weakly correlated to the ground truth data.

9.10 Conclusion

In this chapter, we presented an investigative study to test the capability of current NDE techniques to identify the existence of weak fusions in butt fusion joints. Multiple samples with different joint qualities were created under a controlled environment to serve as a reference during the sensors' development and evaluation. Multiple NDE methods were tested to find the best candidate to detect weak fusions. The NDE methods can be divided as follows:
• X-ray CT: This method was chosen due to its high spatial resolution (0.144 micrometer) and its sensitivity to small changes in material density. It was used to provide 3D maps of the material density for both the pipe sections and the dogbones. Direct observation of the heat-affected zone indicates that the CT results were able to detect the edges of the HAZ and the joint region. Another observation was the existence of a boundary region with higher density in the middle of the bond region. The micro-CT can clearly identify the heat affected zone for both pipe sections and dogbone samples. Tests also showed that bad joints could be identified by comparing the width of the heat-affected zone and the thickness of the joint. The good joints have a smaller HAZ region and a thicker joint.
This can be related to the lower amount of pressure that was applied during the fabrication of the bad joints. This study was applied to the dogbone samples only; therefore, further investigations may be required.
• Microwave frequency probes: Microwave frequency probes are highly sensitive to changes in the material dielectric properties, which is believed to be one of the factors that may differentiate between strong and weak fusions. These sensors are near-field microwave probes, where the microwave measurement system monitors the EM wave reflections from the sensor tip. These probes are robust, easy to use, and can be easily integrated for field deployment. A microwave scanning system was developed with a rotary platform for pipeline scanning and the ability to handle different types of probes. Three EM-based methods were used to test the possibility of sensing dielectric changes due to the existence of weak fusions: coaxial cables, open-ended waveguides, and split-ring resonators.
– Coaxial scanning probe: The probe is a coaxial cable with a short tip that extends from its end in a configuration similar to a monopole antenna. The probe is highly sensitive to the material in its vicinity and has submillimeter spatial resolution due to its thin sensing tip.
– Open-ended waveguide: The probe is a WG16 rectangular waveguide with an open end. The sensor has a wide aperture; therefore, the sensing results represent an integration over the area in the vicinity of the sensor.
– SRR: The probe is a metamaterial-inspired sensor in which a difference in the dielectric constant of the surrounding material changes the resonance frequency of the rings. This sensor was used later during the scanning process.
All the electromagnetic methods were affected by the height of the beads and the ovality of the pipe, which caused different liftoff distances from the scanning probe. A direct approach was developed to post-process the data and compensate for the effect of the liftoff distance, and it was applied to the scanning results from all microwave sensors. After applying the postprocessing, all the EM methods were able to identify the bead region with different amounts of spatial detail. Coaxial cable scans showed highly detailed images, where the probe is able to sense the actual geometry of the beads; the drawback is that this sensitivity deteriorates as the liftoff distance increases, and the details of the beads are lost. The open-ended waveguide sensor was not able to extract the fine details of the beads, but it was able to locate the bead region. This poor spatial resolution is related to the large aperture of the open-ended waveguide. The SRR sensor showed a spatial resolution comparable to the open-ended waveguide due to the relatively large size of the resonance rings.
• Capacitive sensor: The sensor is a two-plate capacitor that is coupled to a coil to form a resonator circuit. The sensor was not sensitive to the existence of the beads; therefore, further scans were canceled.
• Optical transmission scanning: This method depends on measuring the attenuation of a laser beam passing through a joint and therefore requires two-sided access to the tested samples, so it was mainly used for dogbone testing. Since the attenuation of the laser beam depends on the integration of the attenuation along the optical path, the measured laser attenuation is thickness dependent, as the relation below makes explicit. Initial results were not conclusive, and no direct features could be related to the existence of weak fusions.
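This thickness dependence follows directly from the Beer-Lambert relation; the expression below is the standard textbook form, added here only to make the reasoning explicit rather than derived from the study's data:

$$ I = I_0 \exp\!\left(-\int_0^{t}\mu(s)\,ds\right) \;\;\Rightarrow\;\; -\ln\frac{I}{I_0} = \bar{\mu}\,t , $$

so even for a uniform path-averaged attenuation coefficient $\bar{\mu}$, the measured attenuation scales with the local thickness $t$, which is why the mean level had to be corrected before samples could be compared.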
To analyze the data from these sensors, the following approaches were followed:
• Direct comparison: in this approach, we directly compared the scanning results with the ground truth.
• Statistical analysis: in this approach, both the Pearson and distance correlation coefficients were used to test for the existence of linear and nonlinear relations between the data and the ground truth.
The analysis of the scanning results from every single sensor gave mixed results when compared to the ground truth. Some methods had good correlation results for some of the pipes while having poor correlation for others, and each method showed different responses for different types of features.

CHAPTER 10
DETECTION OF LEGACY CROSSBORES WITH CAPACITIVE SENSORS

10.1 Introduction

A cross bore in a gas pipeline happens when a gas pipeline passes through a sewer pipe during the drilling process, as shown in Figure 10.1.a. In this chapter, we aim to develop a miniaturized nondestructive evaluation (NDE) tool that can detect and locate legacy cross bores in underground polyethylene gas pipelines. The sensor is designed to be sent through the gas pipeline, classify the materials surrounding the pipe, and differentiate them from actual cross bores, as shown in Figure 10.1.b.

Figure 10.1: a) Gas pipeline intersection with a sewer pipe (cross bore) [9], b) Proposed cross bore detection procedure launched inside the gas pipelines.

10.2 Sensor Design

The materials inside the sewer pipes (air or water) differ in their dielectric properties from the surrounding soil. A thermoplastic like polyethylene has a small dielectric constant of 2.25 [151]; therefore, electromagnetic waves can penetrate through the pipe walls and detect the materials outside the pipe. Cross bores can be easily detected by EM sensors because these sensors are highly sensitive to changes in the dielectric properties of the materials in their vicinity.

10.2.1 Sensing System Design and Optimization

The proposed system adopts both electromagnetic and optical sensors to obtain a highly sensitive system with a minimal number of false alarms. A general schematic of the system is shown in Figure 10.2. The system is designed to have a cylindrical structure with a diameter that can be adapted to different pipe sizes. The core of the system is an EM sensor that can penetrate through the pipe walls and monitor the change in the dielectric properties of the materials around the pipe. This capacitive sensor will detect the change in the dielectric properties of the pipe as well as of the materials at the cross-bore contact junction.

Figure 10.2: Sensor schematic integrating capacitive and optical sensors.

To reduce false detection alarms, an LED-assisted camera will be attached to the front of the proposed system. The optical system will give the operator a better idea about the environment inside the pipe during the testing process and provide information about potential defects inside the pipe. Note that the optical camera is not necessary for accurate cross-bore detection and classification using the proposed EM method.
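To illustrate why the surrounding material is distinguishable, a crude first-order sketch is given below: the capacitance seen by the probe is modeled as a fixed "internal" term (probe plus pipe wall) plus a fringing term proportional to the relative permittivity of whatever lies beyond the wall. The functional form and all numbers are illustrative assumptions, not values taken from the actual sensor design; the permittivities are nominal.

    def approx_capacitance(eps_outside, c_internal_pf=5.0, k_fringe_pf=0.8):
        # Crude first-order model: C = C_internal + k * eps_r(outside).
        # c_internal_pf : assumed fixed capacitance of probe and pipe wall (pF)
        # k_fringe_pf   : assumed fringing-field sensitivity (pF per unit eps_r)
        return c_internal_pf + k_fringe_pf * eps_outside

    # Nominal relative permittivities: air pocket ~1, soil ~20, water-filled sewer ~80.
    for name, eps in [("air pocket", 1.0), ("soil", 20.0), ("water-filled sewer", 80.0)]:
        print(f"{name:>18s}: ~{approx_capacitance(eps):.1f} pF")

Even with such a simple model, the three media separate by a comfortable margin, which is the basic premise of the sensing approach.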
10.2.2 Multiphysics Numerical Modeling

A numerical model was developed using COMSOL Multiphysics to simulate the EM detection of cross bores from inside the MDPE pipe. The model simulates a pipe buried in soil (the surrounding medium), with the proposed EM sensor moving along the pipe at a constant velocity. An air or water pocket introduced in the surrounding medium simulates an intersection with an empty sewer pipe. The second medium in the cross-bore region can also be set in the simulation model to other material types, such as air, water, etc. A preliminary simulation study using a 3" long air pocket in the surrounding medium, representing an empty sewer pipe, has been completed. The EM sensor operates at a frequency of 6.5 MHz. The electric field (E) distribution due to the sensor excitation at two different positions (with and without a cross bore) is shown in Figure 10.3. Position 1 shows the distribution when the sensor is surrounded by soil, while position 2 shows the distribution when the pipe is surrounded by air. The results show that the electric field decays quickly in the presence of soil, while it extends over a larger area around the pipe in the presence of air. This fast decay is related to the higher dielectric constant of the soil (εr = 20) compared to air (εr = 1). For the sensor capacitance measurements, the existence of soil will result in higher sensor readings due to the higher dielectric constant.

Figure 10.3: Simulation results of the E field distribution generated by the proposed multi-channel EM sensor moving inside the MDPE pipe.

10.3 Experimental Results

An initial prototype with a 2.4" diameter sensor was designed to fit a 3" diameter pipe, according to the design concept presented in Section 10.2.1. The prototype was tested initially under laboratory conditions with the relatively small test setup shown in Figure 10.4. The test setup consists of a 3" diameter PVC pipe partially buried in a tank filled with sand and partially in air. The test was performed by pushing the sensor into the test setup to scan the air section first and then the sand section; after that, the sensor is pulled back to scan the pipe in the reverse direction. The sensor readings are shown in Figure 10.5, where stable readings are observed in the air section, followed by transient readings at the air/sand interface. The sensor readings gradually increase until they settle at a higher steady level in the sand section. During the pulling stage, a reversed pattern is observed, where the sensor readings decrease gradually until they settle again at the air-section level. The signal is not symmetric due to the difference in scanning speed between the pushing and pulling stages. This means that further signal processing of the unique features of the acquired time-domain signals can recover acquisition information such as the scanning velocity. All the data can be acquired and stored, and detection can be realized in real time.

Figure 10.4: a) Preliminary test setup (side view), b) EM sensor inside the preliminary test setup (top view).

In addition to the internal laboratory tests, the tool was demonstrated at a special testing facility designed by the Gas Technology Institute (GTI) to demonstrate the capability of cross bore detection tools. The device used in this test had a single-channel sensor covering 120 degrees. A schematic of the GTI testing setup is shown in Figure 10.6. It consists of an 8-inch PVC pipe (D) and three polyethylene 4-inch pipes (A, B, and C).

Figure 10.5: Detection results showing clear differentiation between soil and air using the proposed EM sensing technology inside the plastic pipe.

Figure 10.6: GTI testing setup.
The 8-inch PVC pipe simulates a buried sewer pipe that can be filled with either air or water. The three polyethylene pipes simulate three gas pipelines passing at different distances from the sewer pipe. Below is a description of each gas pipe orientation and the testing results obtained with the tool for each of these pipes.
• Pipe A represents a gas pipeline passing through the center of the sewer pipe: The sensor was able to detect and localize the cross bore with either water or air filling the sewer pipe.
• Pipe B represents a gas pipeline passing partially through the sewer pipe: The sensor was able to detect the cross bore with both water and air filling the sewer pipe. In this case, we also demonstrated the benefits of the directional sensing capability.
– The sensor showed no change in its capacitive readings when it was directed upward (only soil, no cross bore).
– The sensor showed a significant change in its capacitive readings when it was directed downward (cross bore: air or water inside the sewer pipe).
• Pipe C represents a gas pipeline passing at a distance from the sewer pipe: The sensor capacitance readings were not affected by the existence of the sewer pipe with either water or air filling. This result shows that the sensor has good resistance to false alarms from neighboring sewer pipes, and its sensing range is limited to the immediate area in its vicinity.

10.4 Conclusion

In this chapter, we introduced a miniaturized tool that can be inserted inside distribution gas pipelines to detect legacy cross bores. The tool is a capacitive sensor that measures changes in the dielectric properties of the medium around the tested pipe; therefore, it can differentiate between soil and a cross bore that is filled with water or air. Simulated field tests showed the ability of the sensor to satisfy the design goals and detect cross bores with water or air filling. The tests also showed minimal fluctuation in the sensor readings while the sensor was moving inside the test pipe. The tool presents a low-cost solution that can be mounted on a push rod and deployed inside newly installed or legacy pipes to verify that no cross bores exist.

CHAPTER 11
CONCLUSION AND FUTURE WORK

11.1 Summary

This research presents a set of NDE tools with potential applications in pipeline inspection and biomedical imaging. The key contributions of this research are as follows:
• Design and miniaturization of a sequential RGB-D structured light sensor for the inspection of small diameter pipelines. The sensor employs multi-color multi-ring SL sensing with sequential RGB-D acquisition to produce endoscopic RGB images concurrently with a 3D model of the inspected surface.
• Design of a side-by-side SL sensor with 360-degree unobstructed inspection capability, achieved by adding geometrical constraints from a second camera oriented perpendicular to the first camera. The added constraints guarantee that at least one non-tangent ray to the cone exists for any inspected angle.
• Design of a phase measurement profilometry sensor with the capability to exploit the sensor movement to enhance the reconstructed 3D profile. The sensor employs a stereoscopic setup to compensate for the height-dependent shifting and reduce the artifacts in the reconstructed profile.
• Development of an algorithm to automatically calibrate the projector of the SL sensor when placed inside a cylindrical pipe with a known diameter.
With known camera parameters, the algorithm minimizes the error in the closed camera-cone-cylinder-camera path to estimate the parameters of the cone projected by the projector.
• Development of an automatic stabilization algorithm to register consecutive frames from the SL sensor. Ellipsoid fitting is used to obtain a real-time pose estimate of the sensor inside the pipe and correct the orientation of the generated data.
• Improvement of the signal to noise ratio in pulsed thermoacoustic imaging systems: a coded pulse sequence is used to excite the imaged sample, and the received pressure signal is then correlated with a template related to the power profile of the excitation pulse. The proposed approach does not require the use of linear amplifiers; therefore, it can be applied to pulsed TAI systems without major modification.
• An investigative study to test the probability of detecting cold fusion in butt fusion joints using emerging NDE techniques. Experimental work for this investigative study was conducted using X-ray CT, near-field MW inspection, and optical transmission scanning. The best results were achieved with X-ray CT, which clearly identified the heat affected zone. Bad joints were identified by measuring the width of the heat-affected zone and the thickness of the joint.
• Design of a coplanar capacitive sensing tool for the detection of legacy cross bores in gas pipelines. The tool can detect cross bores by monitoring the change in the dielectric properties of the pipe surroundings.

11.2 Future Work

This section gives an insight into possible directions and ideas to continue this research:
• Optical inspection:
– Development of a registration procedure to handle special inspection cases like elbow joints.
– Development of a stabilization procedure for the MPMP method to handle inspection rigs with nonuniform motion.
• Thermoacoustic imaging:
– Apply thermoacoustic imaging to NDE applications like the inspection of glass fiber composites.
– Investigate single-sided inspection with air coupling.
• Capacitive sensing:
– Increase the sensor sensitivity and directionality by implementing active shielding.
– Integrate the sensor with the RGB-D sensing system to provide a comprehensive inspection tool for legacy plastic pipelines.

APPENDIX

CAMERA INTERSECTION WITH 3D SURFACES

Below are the mathematical derivations for the intersections of camera rays with cones and cylinders.

A.1 Ray-Cylinder Intersection

The intersection point $\mathbf{P}$ is assumed to belong to an infinite cylindrical surface; therefore, it must satisfy

$$ \left\| (\mathbf{P} - \mathbf{O}) \times \mathbf{A}_{\mathrm{Cyl}} \right\|^2 - r^2 = 0, \tag{A.1} $$

where $\mathbf{O}$ is a point on the cylinder main axis, $\mathbf{A}_{\mathrm{Cyl}}$ is a unit vector along the cylinder main axis, and $r$ is the radius of the cylindrical surface. The camera ray is described by its origin point $\mathbf{C}$, a scalar $s$, and a direction unit vector $\mathbf{w}$:

$$ \mathbf{P}(s) = \mathbf{C} + s\,\mathbf{w}. \tag{A.2} $$

The ray intersects the cylinder at the point $\mathbf{P} = \mathbf{C} + s\,\mathbf{w}$ for a specific value of $s$; therefore, the goal is to find the value of $s$ that satisfies the cylinder equation. Combining Eq. (A.1) and Eq. (A.2) gives

$$ \left\| (\mathbf{C} + s\,\mathbf{w} - \mathbf{O}) \times \mathbf{A}_{\mathrm{Cyl}} \right\|^2 - r^2 = 0, \tag{A.3} $$

$$ \left\| (\mathbf{C} - \mathbf{O}) \times \mathbf{A}_{\mathrm{Cyl}} + s \left( \mathbf{w} \times \mathbf{A}_{\mathrm{Cyl}} \right) \right\|^2 - r^2 = 0. \tag{A.4} $$

With $\mathbf{X} = (\mathbf{C} - \mathbf{O}) \times \mathbf{A}_{\mathrm{Cyl}}$ and $\mathbf{Y} = \mathbf{w} \times \mathbf{A}_{\mathrm{Cyl}}$, the equation has the form

$$ \left\| \mathbf{X} + s\,\mathbf{Y} \right\|^2 - r^2 = 0, \tag{A.5} $$

$$ s^2 (\mathbf{Y} \cdot \mathbf{Y}) + 2s (\mathbf{X} \cdot \mathbf{Y}) + (\mathbf{X} \cdot \mathbf{X}) - r^2 = 0. \tag{A.6} $$

This is a quadratic of the form

$$ a s^2 + b s + c = 0, \tag{A.7} $$

$$ a = \mathbf{Y} \cdot \mathbf{Y}, \quad b = 2 (\mathbf{X} \cdot \mathbf{Y}), \quad c = (\mathbf{X} \cdot \mathbf{X}) - r^2. \tag{A.8} $$

The value of $s$ can then be calculated as

$$ s = \frac{-b \mp \sqrt{b^2 - 4ac}}{2a}. \tag{A.9} $$

The calculated value of $s$ is substituted into Eq. (A.2) to obtain the position of the intersection point. Only the positive value of $s$ is chosen, since the camera looks in the forward direction.
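Both this intersection test and the ray-cone test in Section A.2 below reduce to the same quadratic in $s$. The NumPy sketch below follows the derivations directly; it is provided as an illustration under the stated assumptions (unit direction and axis vectors), not as the implementation used in the sensor software.

    import numpy as np

    def _smallest_positive_root(a, b, c):
        # Solve a*s^2 + b*s + c = 0 and keep the smallest positive root,
        # since the camera only looks in the forward direction.
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None                      # no real intersection
        roots = ((-b - np.sqrt(disc)) / (2.0 * a), (-b + np.sqrt(disc)) / (2.0 * a))
        positive = [s for s in roots if s > 0.0]
        return min(positive) if positive else None

    def ray_cylinder_intersection(C, w, O, A_cyl, r):
        # Eq. (A.1)-(A.9): X and Y are the constant and s-dependent cross products.
        X = np.cross(C - O, A_cyl)
        Y = np.cross(w, A_cyl)
        a, b, c = Y @ Y, 2.0 * (X @ Y), X @ X - r * r
        s = _smallest_positive_root(a, b, c)
        return None if s is None else C + s * w

    def ray_cone_intersection(C, w, V, A, theta):
        # Eq. (A.10)-(A.19): cone with apex V, unit axis A, and half-angle theta.
        VC = C - V
        cos2 = np.cos(theta) ** 2
        a = (w @ A) ** 2 - cos2
        b = 2.0 * ((w @ A) * (VC @ A) - (w @ VC) * cos2)
        c = (VC @ A) ** 2 - (VC @ VC) * cos2
        s = _smallest_positive_root(a, b, c)
        if s is None:
            return None
        P = C + s * w
        # Squaring Eq. (A.11) also admits the mirror cone; keep the forward nappe only.
        return P if (P - V) @ A > 0.0 else None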
A.2 Ray-Cone Intersection

The position vector of the intersection point is again described by

$$ \mathbf{P} = \mathbf{C} + s\,\mathbf{w}. \tag{A.10} $$

The projected cone, with apex $\mathbf{V}$, unit axis $\mathbf{A}$, and half-angle $\theta$, is described by

$$ \frac{\mathbf{P} - \mathbf{V}}{\left\| \mathbf{P} - \mathbf{V} \right\|} \cdot \mathbf{A} = \cos\theta. \tag{A.11} $$

Combining Eq. (A.10) and Eq. (A.11), and writing $\mathbf{VC} = \mathbf{C} - \mathbf{V}$, gives

$$ \big( (\mathbf{C} + s\,\mathbf{w} - \mathbf{V}) \cdot \mathbf{A} \big)^2 - (\mathbf{C} + s\,\mathbf{w} - \mathbf{V}) \cdot (\mathbf{C} + s\,\mathbf{w} - \mathbf{V}) \cos^2\theta = 0, \tag{A.12} $$

$$ \big( (\mathbf{VC} + s\,\mathbf{w}) \cdot \mathbf{A} \big)^2 - (\mathbf{VC} + s\,\mathbf{w}) \cdot (\mathbf{VC} + s\,\mathbf{w}) \cos^2\theta = 0, \tag{A.13} $$

$$ \big( \mathbf{VC} \cdot \mathbf{A} + s\,\mathbf{w} \cdot \mathbf{A} \big)^2 - \big( \mathbf{VC} \cdot \mathbf{VC} + 2s\,\mathbf{w} \cdot \mathbf{VC} + s^2 \big) \cos^2\theta = 0, \tag{A.14} $$

$$ s^2 (\mathbf{w} \cdot \mathbf{A})^2 + 2s\, (\mathbf{w} \cdot \mathbf{A})(\mathbf{VC} \cdot \mathbf{A}) + (\mathbf{VC} \cdot \mathbf{A})^2 - \big( \mathbf{VC} \cdot \mathbf{VC} + 2s\,\mathbf{w} \cdot \mathbf{VC} + s^2 \big) \cos^2\theta = 0, \tag{A.15} $$

$$ s^2 \big( (\mathbf{w} \cdot \mathbf{A})^2 - \cos^2\theta \big) + 2s \big( (\mathbf{w} \cdot \mathbf{A})(\mathbf{VC} \cdot \mathbf{A}) - (\mathbf{w} \cdot \mathbf{VC}) \cos^2\theta \big) + (\mathbf{VC} \cdot \mathbf{A})^2 - (\mathbf{VC} \cdot \mathbf{VC}) \cos^2\theta = 0. \tag{A.16} $$

The required parameters of the quadratic are therefore

$$ a = (\mathbf{w} \cdot \mathbf{A})^2 - \cos^2\theta, \tag{A.17} $$

$$ b = 2 \big( (\mathbf{w} \cdot \mathbf{A})(\mathbf{VC} \cdot \mathbf{A}) - (\mathbf{w} \cdot \mathbf{VC}) \cos^2\theta \big), \tag{A.18} $$

$$ c = (\mathbf{VC} \cdot \mathbf{A})^2 - (\mathbf{VC} \cdot \mathbf{VC}) \cos^2\theta. \tag{A.19} $$

The values of $s$ and $\mathbf{P}$ are calculated with Eq. (A.9) and Eq. (A.2), respectively.

BIBLIOGRAPHY

[1] Alireza Mashal, John H Booske, and Susan C Hagness. Toward contrast-enhanced microwave-induced thermoacoustic imaging of breast cancer: An experimental study of the effects of microbubbles on simple thermoacoustic targets. Physics in Medicine & Biology, 2009.
[2] Malcolm Spicer, Mike Troughton, and Fredrik Hagglund. Development and assessment of ultrasonic inspection system for polyethylene pipes. In Pressure Vessels and Piping Conference, 2013.
[3] Olympus IMS. Ultrasonic time-of-flight-diffraction (TOFD) examination of butt-fusion joints of high-density polyethylene (HDPE). https://www.olympus-ims.com/en/applications/ultrasonic-tofd-butt-fusion/. Accessed: 2022-05-09.
[4] G Giller, L Yu Mogilner, and V Khomenko. Technologies and hardware of ultrasonic testing of welded joints of steel and polyethylene pipelines. In 15th World Conference on Nondestructive Testing, 2000.
[5] Robert Stakenborghs and Jack Little. Microwave based NDE inspection of HDPE pipe welds. In International Conference on Nuclear Engineering, 2009.
[6] Kichinosuke Yahagi. Dielectric properties and morphology in polyethylene. IEEE Transactions on Electrical Insulation, 1980.
[7] Quantum GX microCT imaging system. http://www.perkinelmer.com/product/quantum-gx-microct-system-cls140083. Accessed: 2019-03-01.
[8] Constantine A Balanis. Antenna theory: analysis and design. John Wiley & Sons, 2015.
[9] Cross bores. https://ams.pa1call.org/pa811/Public/Resource%20Center/Cross_Bores/Public/POCS_Content/Resource_Center/Cross_Bore.aspx?hkey=806ac1f6-8311-4d17-a03f-e23f973de164. Accessed: 2022-05-09.
[10] MJ Lovejoy. Magnetic particle inspection: a practical guide. Springer Science & Business Media, 2012.
[11] Yong-Kai Zhu, Gui-Yun Tian, Rong-Sheng Lu, and Hong Zhang. A review of optical NDT technologies. Sensors, 2011.
[12] T Warren Liao and Yueming Li. An automated radiographic NDT system for weld inspection: Part II—flaw detection. NDT & E International, 1998.
[13] Sergey Kharkovsky and Reza Zoughi. Microwave and millimeter wave nondestructive testing and evaluation—overview and recent advances. IEEE Instrumentation & Measurement Magazine, 2007.
[14] Javier García-Martín, Jaime Gómez-Gil, and Ernesto Vázquez-Sánchez. Non-destructive techniques based on eddy current testing. Sensors, 2011.
[15] Yan Shi, Chao Zhang, Rui Li, Maolin Cai, and Guanwei Jia. Theory and application of magnetic flux leakage pipeline detection. Sensors, 2015.
[16] Josef Krautkrämer and Herbert Krautkrämer. Ultrasonic testing of materials. Springer Science & Business Media, 2013.
[17] Natural gas explained—Natural gas pipelines | EIA. https://www.eia.gov/energyexplained/natural-gas/natural-gas-pipelines.php. Accessed: 2020-12-19.
[18] Pipeline materials | PHMSA. https://www.phmsa.dot.gov/technical-resources/pipeline/pipeline-materials. Accessed: 2020-12-09.
[19] Julie Maupin and Michael Mamoun. Plastic pipe failure, risk, and threat analysis. Technical report, DOT PHMSA, 2009.
[20] The nature of polyethylene pipe failure | PlasticsToday. https://www.plasticstoday.com/extrusion-pipe-profile/nature-polyethylene-pipe-failure/6347137615310. Accessed: 2020-03-20.
[21] Norman Brown. Intrinsic lifetime of polyethylene pipelines. Polymer Engineering & Science, 2007.
[22] S.L. Crawford, S.R. Doctor, A.D. Cinson, and M.W. Watts. Assessment of NDE methods on inspection of HDPE butt fusion piping joints for lack of fusion with validation from mechanical testing. AccessScience, McGraw-Hill Companies, 2008.
[23] Dominique Gueugnaut, Romuald Bouaffre, and Manuel Tessier. Acceptance criteria for volume defects in welded assemblies, detected and sized using the phased array ultrasonic technique. In International Plastic Pipes Conference and Exhibition, 2018.
[24] Hagen Schempf, Edward Mutschler, Alan Gavaert, George Skoptsov, and William Crowley. Visual and nondestructive evaluation inspection of live gas mains using the Explorer™ family of pipe robots. Journal of Field Robotics, 2010.
[25] Yuhang Zhang, Richard Hartley, John Mashford, Lei Wang, and Stewart Burn. Pipeline reconstruction from fisheye images. Journal of WSCG, 2011.
[26] Peter Hansen, Hatem Alismail, Peter Rander, and Brett Browning. Visual mapping for natural gas pipe inspection. The International Journal of Robotics Research, 2015.
[27] Peter Hansen, Hatem Alismail, Brett Browning, and Peter Rander. Stereo visual odometry for pipe mapping. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011.
[28] Phat Huynh, Robert Ross, Andrew Martchenko, and John Devlin. Anomaly inspection in sewer pipes using stereo vision. In 2015 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), 2015.
[29] Armando Albertazzi G. Jr., Allan C Hofmann, Analucia V Fantin, and João M C Santos. An endoscopic optical system for inner cylindrical measurement using fringe projection. Applied Optics, 2008.
[30] O. Duran, K. Althoefer, and L. D. Seneviratne. Automated pipe defect detection and categorization using camera/laser-based profiler and artificial neural network. IEEE Transactions on Automation Science and Engineering, 2007.
[31] Amal Gunatilake, Lasitha Piyathilaka, Sarath Kodagoda, Stephen Barclay, and Dammika Vitanage. Real-time 3D profiling with RGB-D mapping in pipelines using stereo camera vision and structured IR laser ring. In 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2019.
[32] Jason Geng. Structured-light 3D surface imaging: a tutorial. IEEE Intelligent Transportation System Society, 2011.
[33] Marco Piccirilli, Gianfranco Doretto, Arun Ross, and Donald Adjeroh. A mobile structured light system for 3D face acquisition. IEEE Sensors Journal, 2016.
[34] Manuel Rodriguez-Martin, Pablo Rodriguez-Gonzalvez, Diego Gonzalez-Aguilera, and Jesus Fernandez-Hernandez. Feasibility study of a structured light system applied to welding inspection based on articulated coordinate measure machine data. IEEE Sensors Journal, 2017.
[35] Xue Iuan Wong and Manoranjan Majji. A structured light system for relative navigation applications. IEEE Sensors Journal, 2016.
[36] Boxun Fu, Fu Li, Tianjiao Zhang, Jingsong Jiang, Quanlu Li, Qinglong Tao, and Yi Niu. Single-shot colored speckle pattern for high accuracy depth sensing. IEEE Sensors Journal, 2019.
[37] Christoph Schmalz, Frank Forster, Anton Schick, and Elli Angelopoulou. An endoscopic 3D scanner based on structured light. Medical Image Analysis, 2012.
[38] S. Ullman. The interpretation of structure from motion. Proceedings of the Royal Society of London, Series B, Containing Papers of a Biological Character. Royal Society (Great Britain), 1979.
[39] Jan J. Koenderink and Andrea J. van Doorn. Affine structure from motion. Journal of the Optical Society of America A, 1991.
[40] Johannes L. Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[41] Robert J. Woodham. Photometric method for determining surface orientation from multiple images. Optical Engineering, 1980.
[42] Berthold K.P. Horn. Obtaining shape from shading information. The Psychology of Computer Vision, 1975.
[43] Pietro Zanuttigh, Giulio Marin, Carlo Dal Mutto, Fabio Dominio, Ludovico Minto, and Guido Maria Cortelazzo. Time-of-flight and structured light depth cameras. Technology and Applications, 2016.
[44] Georgia Gkioxari, Jitendra Malik, and Justin Johnson. Mesh R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, 2019.
[45] Joaquim Salvi, Sergio Fernandez, Tomislav Pribanic, and Xavier Llado. A state of the art in structured light patterns for surface profilometry. Pattern Recognition, 2010.
[46] N.G. Durdle, J. Thayyoor, and V.J. Raso. An improved structured light technique for surface reconstruction of the human trunk. In IEEE Canadian Conference on Electrical and Computer Engineering, 1998.
[47] Li Zhang, Brian Curless, and Steven M. Seitz. Rapid shape acquisition using color structured light and multi-pass dynamic programming. 3D Data Processing, 2002.
[48] Chadi Albitar, Pierre Graebling, and Christophe Doignon. Design of a monochromatic pattern for a robust structured light coding. In 2007 IEEE International Conference on Image Processing, 2007.
[49] Chengang Lyu, Prabir Koirala, Yuxiang Liu, and Yuqing Chang. Low-frequency vibration monitoring system based on optical image phase method with different fringe patterns. IEEE Sensors Journal, 2020.
[50] J. L. Posdamer and M. D. Altschuler. Surface measurement by space-encoded projected beam systems. Computer Graphics and Image Processing, 1982.
[51] Peisen S. Huang and Song Zhang. Fast three-step phase-shifting algorithm. Applied Optics, 2006.
[52] Idaku Ishii, Kenkichi Yamamoto, Kensuke Doi, and Tokuo Tsuji. High-speed 3D image acquisition using coded structured light projection. In IEEE International Conference on Intelligent Robots and Systems, 2007.
[53] Sam Van der Jeught and Joris J. J. Dirckx. Real-time structured light profilometry: A review. Optics and Lasers in Engineering, 2015.
[54] Yajun Wang and Song Zhang. Superfast multifrequency phase-shifting technique with optimal pulse width modulation. Optics Express, 2011.
[55] Beiwen Li, Yajun Wang, Junfei Dai, William Lohry, and Song Zhang. Some recent advances on superfast 3D shape measurement with digital binary defocusing techniques. Optics and Lasers in Engineering, 2014.
[56] Junfeng Fan, Fengshui Jing, Lei Yang, Long Teng, and Min Tan. A precise initial weld point guiding method of micro-gap weld based on structured light vision sensor. IEEE Sensors Journal, 2019.
[57] Yousaf Muhamad Amir and Benny Thörnberg. High precision laser scanning of metallic surfaces. International Journal of Optics, 2017.
[58] Fuqiang Zhong, Ravi Kumar, and Chenggen Quan. A cost-effective single-shot structured light system for 3D shape measurement. IEEE Sensors Journal, 2019.
[59] Lei Yang, En Li, Teng Long, Junfeng Fan, and Zize Liang. A novel 3-D path extraction method for arc welding robot based on stereo structured light sensor. IEEE Sensors Journal, 2019.
[60] Mohand Alzuhiri and Yiming Deng. Structured light-based endoscopic scanner for small diameter gas pipelines. In ENDE, 2017.
[61] G. Bradski. The OpenCV library. Dr. Dobb's Journal of Software Tools, 2000.
[62] Qingde Li and J.G. Griffiths. Least squares ellipsoid specific fitting. In Geometric Modeling and Processing, 2004. Proceedings, 2004.
[63] DA Turner, IJ Anderson, JC Mason, and MG Cox. An algorithm for fitting an ellipsoid to data. National Physical Laboratory, UK, 1999.
[64] Peter Axelsson. Processing of laser scanner data—algorithms and applications. ISPRS Journal of Photogrammetry and Remote Sensing, 1999.
[65] Mohand Alzuhiri, Khalid Farrag, Ernest Lever, and Yiming Deng. An electronically stabilized multi-color multi-ring structured light sensor for gas pipelines internal surface inspection. IEEE Sensors Journal, 2021.
[66] Tong Jia, ZhongXuan Zhou, and HaiHong Gao. Depth measurement based on infrared coded structured light. Journal of Sensors, 2014.
[67] Vignesh Suresh. Calibration of structured light system using unidirectional fringe patterns. PhD thesis, Iowa State University, 2019.
[68] Daniel Scharstein and Richard Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 2002.
[69] Ricardo R Garcia and Avideh Zakhor. Consistent stereo-assisted absolute phase unwrapping methods for structured light systems. IEEE Journal of Selected Topics in Signal Processing, 2012.
[70] Makoto Kimura, Masaaki Mochimaru, and Takeo Kanade. Projector calibration using arbitrary planes and calibrated camera. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[71] Gabriel Falcao, Natalia Hurtos, and Joan Massich. Plane-based calibration of a projector-camera system. VIBOT master, 2008.
[72] Ivan Martynov, Joni-Kristian Kamarainen, and Lasse Lensu. Projector calibration by "inverse camera calibration". In Scandinavian Conference on Image Analysis, 2011.
[73] Song Zhang and Peisen S Huang. Novel method for structured light system calibration. Optical Engineering, 2006.
[74] Zhen Lei, Xiaojun Liu, Wenlong Lu, Zili Lei, Liangzhou Chen, and Liping Zhou. A new calibration method of the projector in structured light measurement technology. Procedia CIRP, 2015.
[75] Andrea Albarelli, Emanuele Rodolà, and Andrea Torsello. Robust camera calibration using inaccurate targets. IEEE Trans. Pattern Anal. Mach. Intell., 2009.
[76] Lei Huang, Qican Zhang, and Anand Asundi. Flexible camera calibration using not-measured imperfect target. Applied Optics, 2013.
[77] Wei Sun and Jeremy R Cooperstock. An empirical evaluation of factors influencing camera calibration accuracy using three publicly available techniques. Machine Vision and Applications, 2006.
[78] Davide Scaramuzza, Agostino Martinelli, and Roland Siegwart. A toolbox for easily calibrating omnidirectional cameras. In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.
[79] Hui-Feng Wang, Yun-Fei Wang, Jia-Jia Zhang, and Jing Cao. Laser stripe center detection under the condition of uneven scattering metal surface for geometric measurement. IEEE Transactions on Instrumentation and Measurement, 2020.
[80] Sergio Garrido-Jurado, Rafael Muñoz-Salinas, Francisco José Madrid-Cuevas, and Manuel Jesús Marín-Jiménez. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition, 2014.
[81] Philip HS Torr and Andrew Zisserman. MLESAC: A new robust estimator with application to estimating image geometry. Computer Vision and Image Understanding, 2000.
[82] PV Unnikrishnan, Blair Thornton, Tamaki Ura, and Yoshiaki Nose. A conical laser light-sectioning method for navigation of autonomous underwater vehicles for internal inspection of pipelines. In OCEANS 2009-EUROPE, 2009.
[83] Amal Gunatilake, Lasitha Piyathilaka, Antony Tran, Vinoth Kumar Vishwanathan, Karthick Thiyagarajan, and Sarath Kodagoda. Stereo vision combined with laser profiling for mapping of pipeline internal defects. IEEE Sensors Journal, 2020.
[84] Pedro Buschinelli, Tiago Pinto, F Silva, J Santos, and A Albertazzi. Laser triangulation profilometer for inner surface inspection of 100 millimeters (4") nominal diameter. In Journal of Physics: Conference Series, 2015.
[85] Takahiko Inari, Kazuo Takashima, Masaru Watanabe, and Junji Fujimoto. Optical inspection system for the inner surface of a pipe using detection of circular images projected by a laser source. Measurement, 1994.
[86] Subrata Mukherjee, Renrui Zhang, Mohand Alzuhiri, Varun Rao, Lalita Udpa, and Yiming Deng. Inline pipeline inspection using hybrid deep learning aided endoscopic laser profiling. Journal of Nondestructive Evaluation (Springer), 2021.
[87] Wen Wei Zhang and Bao Hua Zhuang. Non-contact laser inspection for the inner wall surface of a pipe. Measurement Science and Technology, 1998.
[88] Mohammad-Reza Azani, Azin Hassanpour, and Tomás Torres. Benefits, problems, and solutions of silver nanowire transparent conductive electrodes in indium tin oxide (ITO)-free flexible solar cells. Advanced Energy Materials, 2020.
[89] O. Duran, K. Althoefer, and L.D. Seneviratne. Automated sewer pipe inspection through image processing. In Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292), 2002.
[90] O. Duran, K. Althoefer, and L.D. Seneviratne. Pipe inspection using a laser-based transducer and automated analysis techniques. IEEE/ASME Transactions on Mechatronics, 2003.
[91] Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000.
[92] Mohand Alzuhiri, Zi Li, Jiaoyang Li, Adithya Rao, and Yiming Deng. Synchronous and concurrent multi-dimensional structured light sensing with efficient automatic calibration. Measurement, 2022.
[93] Yingchun Wu, Yiping Cao, Yanshan Xiao, and Xu Zheng. An on-line phase measuring profilometry: Processed modulation using for pixel matching. Optik, 2013.
[94] Fuqin Deng, Jianyang Liu, Jiangwen Deng, Kenneth S.M. Fung, and Edmund Y. Lam. A three-dimensional imaging system for surface profilometry of moving objects. In 2013 IEEE International Conference on Imaging Systems and Techniques (IST), 2013.
[95] Lei Lu, Jiangtao Xi, Yanguang Yu, and Qinghua Guo. New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry. Optics Express, 2013.
[96] Fuqin Deng, Chang Liu, Wuifung Sze, Jiangwen Deng, Kenneth S.M. Fung, and Edmund Y. Lam. An INSPECT measurement system for moving objects. IEEE Transactions on Instrumentation and Measurement, 2015.
[97] Han Yuan, Yi-Ping Cao, Chen Chen, and Ya-Pin Wang. Online phase measuring profilometry for rectilinear moving object by image correction. Optical Engineering, 2015.
[98] Cheng Chen, Yi Ping Cao, Li Jun Zhong, and Kuang Peng. An on-line phase measuring profilometry for objects moving with straight-line motion. Optics Communications, 2015.
[99] Kuang Peng, Yiping Cao, Yingchun Wu, and Mingteng Lu. A new method using orthogonal two-frequency grating in online 3D measurement. Optics and Laser Technology, 2016.
[100] Hong Guo. 3-D shape measurement based on Fourier transform and phase shifting method. PhD thesis, Stony Brook University, 2011.
[101] Katherine Creath. Error sources in phase-measuring interferometry. In Intl Symp on Optical Fabrication, Testing, and Surface Evaluation, 2017.
[102] Peisen S. Huang. High-speed 3-D shape measurement based on digital fringe projection. Optical Engineering, 2003.
[103] Vincent Chan and Anahi Perlas. Basics of ultrasound imaging. In Atlas of Ultrasound-Guided Procedures in Interventional Pain Management. Springer, 2011.
[104] Mathias Fink and Mickael Tanter. Multiwave imaging and super resolution. Phys. Today, 2010.
[105] Hao Nan and Amin Arbabian. Peak-power-limited frequency-domain microwave-induced thermoacoustic imaging for handheld diagnostic and screening tools. IEEE Transactions on Microwave Theory and Techniques, 2017.
[106] T. Bowen. Radiation-induced thermoacoustic soft tissue imaging. In 1981 Ultrasonics Symposium, 2008.
[107] T. Bowen, R.L. Nasoni, A.E. Pifer, and G.H. Sembroski. Some experimental results on the thermoacoustic imaging of tissue equivalent phantom materials. In 1981 Ultrasonics Symposium, 2008.
[108] Robert A Kruger, Kenyon K Kopecky, Alex M Aisen, Daniel R Reinecke, Gabe A Kruger, and William L Kiser Jr. Thermoacoustic CT with radio waves: A medical imaging paradigm. Radiology, 1999.
[109] Minghua Xu, Geng Ku, Xing Jin, Lihong V Wang, Bruno D Fornage, and Kelly K Hunt. Breast cancer imaging by microwave-induced thermoacoustic tomography. In Photons Plus Ultrasound: Imaging and Sensing 2005: The Sixth Conference on Biomedical Thermoacoustics, Optoacoustics, and Acousto-optics, 2005.
[110] GP Chen, WB Yu, ZQ Zhao, ZP Nie, and QH Liu. The prototype of microwave-induced thermo-acoustic tomography imaging by time reversal mirror. Journal of Electromagnetic Waves and Applications, 2008.
[111] Habib Ammari, Josselin Garnier, Wenjia Jing, and Loc Hoang Nguyen. Quantitative thermo-acoustic imaging: An exact reconstruction formula. Journal of Differential Equations, 2013.
[112] Andrew T Eckhart, Robert T Balmer, William A See, and Sarah K Patch. Ex vivo thermoacoustic imaging over large fields of view with 108 MHz irradiation. IEEE Transactions on Biomedical Engineering, 2011.
[113] Jian Song, Zhiqin Zhao, Jinguo Wang, Xiaozhang Zhu, Jiangniu Wu, Zaiping Nie, and Qing-Huo Liu. Evaluation of contrast enhancement by carbon nanotubes for microwave-induced thermoacoustic tomography. IEEE Transactions on Biomedical Engineering, 2014.
[114] Fei Gao, Xiaohua Feng, and Yuanjin Zheng. Advanced photoacoustic and thermoacoustic sensing and imaging beyond pulsed absorption contrast. Journal of Optics, 2016.
[115] Y Deng and M Golkowski. Innovative biomagnetic imaging sensors for breast cancer: A model-based study. Journal of Applied Physics, 2012.
[116] Tao Qin, Xiong Wang, Huan Meng, Yexian Qin, Bruce Webb, Guobin Wan, Russell S. Witte, and Hao Xin. Microwave-induced thermoacoustic imaging for embedded explosives detection. IEEE Antennas and Propagation Society, AP-S International Symposium (Digest), 2014.
[117] J Xia, J Yao, and L V Wang. Photoacoustic tomography: principles and advances. Electromagn Waves (Camb), 2014.
[118] Xiaohua Feng, Fei Gao, and Yuanjin Zheng. Magnetically mediated thermoacoustic imaging toward deeper penetration. Applied Physics Letters, 2013.
[119] Kun Wang and Mark A. Anastasio. Photoacoustic and thermoacoustic tomography: Image formation principles. In Handbook of Mathematical Methods in Imaging. Springer, 2015.
[120] E Maxwell. Conductivity of metallic surfaces at microwave frequencies. Journal of Applied Physics, 1947.
[121] Yuan Xu, Minghua Xu, and Lihong V Wang. Exact frequency-domain reconstruction for thermoacoustic tomography. II. Cylindrical geometry. IEEE Transactions on Medical Imaging, 2002.
[122] Yulia Hristova, Peter Kuchment, and Linh Nguyen. Reconstruction and time reversal in thermoacoustic tomography in acoustically homogeneous and inhomogeneous media. Inverse Problems, 2008.
[123] G Chen, Z Zhao, Z Nie, and QH Liu. Computational study of time reversal mirror technique for microwave-induced thermo-acoustic tomography. Journal of Electromagnetic Waves and Applications, 2008.
[124] Bradley E Treeby and Benjamin T Cox. k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields. Journal of Biomedical Optics, 2010.
[125] Peter Kuchment and Leonid Kunyansky. Mathematics of photoacoustic and thermoacoustic tomography. In Handbook of Mathematical Methods in Imaging: Volume 1, Second Edition. Springer, 2015.
[126] Ryan Jacobs, Mohand Alzuhiri, Mark Golkowski, and Yiming Deng. Low-power microwave induced thermoacoustic imaging: Experimental study and hybrid FEM modeling. PIER, 2018.
[127] Cunguang Lou, Sihua Yang, Zhong Ji, Qun Chen, and Da Xing. Ultrashort microwave-induced thermoacoustic imaging: A breakthrough in excitation efficiency and spatial resolution. Physical Review Letters, 2012.
[128] Daniel Razansky, Stephan Kellnberger, and Vasilis Ntziachristos. Near-field radiofrequency thermoacoustic tomography with impulse excitation. Medical Physics, 2010.
[129] Hao Nan, Shiyu Liu, J Gabriel Buckmaster, and Amin Arbabian. Beamforming microwave-induced thermoacoustic imaging for screening applications. IEEE Transactions on Microwave Theory and Techniques, 2018.
[130] D. R. Bauer, X. Wang, J. Vollin, H. Xin, and R. S. Witte. Spectroscopic thermoacoustic imaging of water and fat composition. Appl. Phys. Lett., 2012.
[131] Mark A Richards, Jim Scheer, William A Holm, and William L Melvin. Principles of Modern Radar. Citeseer, 2010.
[132] Nadav Levanon, Itzik Cohen, Nadav Arbel, and Avinoam Zadok. Non-coherent pulse compression - aperiodic and periodic waveforms. IET Radar, Sonar and Navigation, 2016.
[133] M. Golay. Complementary series. IEEE Transactions on Information Theory, 1961.
[134] Nadav Levanon. Noncoherent radar pulse compression based on complementary sequences. IEEE Transactions on Aerospace and Electronic Systems, 2009.
[135] M. Xu and L. V. Wang. Pulsed-microwave-induced thermoacoustic tomography: Filtered backprojection in a circular measurement configuration. Med. Phys., 2002.
[136] Y. Xie, B. Guo, J. Li, G. Ku, and L.V. Wang. Adaptive and robust methods of reconstruction. IEEE Transactions on Biomedical Engineering, 2008.
[137] Xiong Wang, Daniel R Bauer, Russell Witte, and Hao Xin. Microwave-induced thermoacoustic imaging model for potential breast cancer detection. IEEE Transactions on Biomedical Engineering, 2012.
[138] Y. Hristova, P. Kuchment, and L. Nguyen. On reconstruction and time reversal in thermoacoustic tomography in acoustically homogeneous and inhomogeneous media. Inverse Problems, 2008.
[139] P. Stefanov and G. Uhlmann. Thermoacoustic tomography with variable sound speed. Inverse Problems, 2009.
[140] J. Qian, P. Stefanov, G. Uhlmann, and H. Zhao. An efficient Neumann series-based algorithm for thermoacoustic and photoacoustic tomography with variable sound speed. SIAM J. Imaging Sci., 2011.
[141] S.L. Crawford, S.R. Doctor, A.D. Cinson, and M.W. Watts. Assessment of NDE methods on inspection of HDPE butt fusion piping joints for lack of fusion with validation from mechanical testing. AccessScience, McGraw-Hill Companies, 2008.
[142] Matthew S Prowant, Kayte M Denslow, Traci L Moran, Richard E Jacob, Trenton S Hartman, Susan L Crawford, Royce Mathews, Kevin J Neill, and Anthony D Cinson. Evaluation of ultrasonic phased-array for detection of planar flaws in high-density polyethylene (HDPE) butt-fusion joints. In Pressure Vessels and Piping Conference, 2016.
[143] Non destructive examination of PE welds: Emerging techniques. Technical report, Plastic Industry Pipe Association of Australia Limited, 2014.
[144] Robert J Stakenborghs. Innovative technique for inspection of polyethylene piping base material and welds and non-metallic pipe repair. In ASME Pressure Vessels and Piping Conference, 2006.
[145] Industry guidelines: Assessment of polyethylene welds. Technical report, Plastic Industry Pipe Association of Australia Limited, 2015.
[146] P. Barber and J. R. Atkinson. Some microstructural features of the welds in butt-welded polyethylene and polybutene-1 pipes. Journal of Materials Science, 1972.
[147] P. Barber and J. R. Atkinson. The use of tensile tests to determine the optimum conditions for butt fusion welding certain grades of polyethylene, polybutene-1 and polypropylene pipes. Journal of Materials Science, 1974.
[148] Paul Suetens. Fundamentals of Medical Imaging. Cambridge University Press, 2017.
[149] Viktor G Veselago. The electrodynamics of substances with simultaneously negative values of ε and μ. Soviet Physics Uspekhi, 1968.
[150] Lijuan Su, Javier Mata-Contreras, Paris Vélez, and Ferran Martín. A review of sensing strategies for microwave sensors based on metamaterial-inspired resonators: Dielectric characterization, displacement, and angular velocity measurements for health diagnosis, telecommunication, and space applications. International Journal of Antennas and Propagation, 2017.
[151] F. W. Sears, M. W. Zemansky, and H. D. Young. University Physics. Addison-Wesley, 16th edition, 1982.