DYNAMIC LED-BASED OPTICAL LOCALIZATION OF A MOBILE ROBOT

By

Jason N. Greenberg

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Electrical Engineering – Doctor of Philosophy

2021

ABSTRACT

DYNAMIC LED-BASED OPTICAL LOCALIZATION OF A MOBILE ROBOT

By Jason N. Greenberg

Autonomous mobile robots operating in areas with poor GPS and wireless coverage (e.g., underwater) must rely on alternative localization and communication techniques to navigate the field, share their data, and accomplish other missions. This dissertation is focused on the design of an LED-based optical localization system that achieves Simultaneous Localization and Communication (SLAC), where the bearing angles, needed for establishing optical line-of-sight (LOS) for LED-based communication between beacon (base) nodes and a mobile robot, are used to triangulate and thereby localize the robot. A two-dimensional (2D) setup is considered in this work.

First, the measurement process and procedural steps necessary for implementing the localization scheme are developed. Critical to the success of this scheme is the maintenance of the LOS, which is difficult due to the robot's mobile nature. A Kalman filtering-based approach is proposed to predict the mobile robot's position, allowing the system to reduce the overhead of establishing and maintaining the LOS, therefore significantly improving the quality of the localization and communication. The effectiveness of this approach is evaluated with extensive simulation and experiments, including a comparison to an alternative approach not using Kalman filtering-based location prediction.

The initial design of the localization system involves two base nodes, which could result in a singularity problem in position measurement when the mobile robot is close to forming a collinear relationship with the base nodes. To address this issue, a setup involving more than two base nodes is considered, where one could dynamically change the base node pair used for localization. An important design consideration for this approach is how to best exploit the redundancy in base nodes to provide robust localization. A sensitivity metric is introduced to characterize the level of uncertainty in the position estimate relative to the bearing angle measurement error, and it is used to dynamically select a desired pair of beacon nodes. The proposed solution is evaluated with simulation and experimentation, in a setting of three beacon nodes and one mobile node, and its efficacy is demonstrated via comparison with multiple alternative approaches.

The aforementioned work assumes that the bearing angles with respect to all base nodes are captured simultaneously (or when the robot is at a single location). Consequently, because scanning for the light intensity to determine the bearing angle takes time, a stop-and-go motion has to be used to ensure that the robot is at a single location during the angle measurement process, which significantly slows the robot's movement. To counter this issue, a scheme is proposed to dynamically localize a robot undergoing continuous movement, by exploiting the velocity prediction from Kalman filtering to properly correlate two consecutive measurements of bearing angles relative to the base nodes. Simulation and experiments show that, with this approach, the robot can be successfully localized while it is continuously moving.
Dedicated to my parents, my brother, my wife, and my son for all of their patience and support over the years that have helped make this happen.

ACKNOWLEDGMENTS

I would like to start by expressing my warmest gratitude to my advisor, Prof. Xiaobo Tan, for providing me the opportunity to be a part of the Smart Microsystems Lab at Michigan State University (MSU) for my doctoral studies. His guidance and generous patience allowed me to explore research at a pace that worked best for my learning style. Through his mentorship I was able to develop new skills, both mental and physical, that have benefited me during my research studies and will continue to benefit me well into the future.

I would like to thank my Ph.D. committee members: Prof. Daniel Morris, Prof. Arun Ross, and Prof. Vaibhav Srivastava for kindly joining my advisory committee and offering insightful suggestions on my research.

I would like to thank the many friends and colleagues I have met and interacted with over the years for their numerous suggestions and bits of inspiration. I would like to specifically thank Mohammed Al-Rubaiai for designing and providing invaluable support for the optical transmitting and receiving hardware that is the core of my research. I would also like to mention Dr. Dhrubajit Chowdhury for his instrumental and inspiring discussions that helped to form the basis for one of the major designs of this work. I would also like to thank my fellow lab members of the Smart Microsystems Lab, both past and present, for their support and general camaraderie over the years: Dr. Feitian Zhang, Dr. Jianxun Wang, Dr. Sanaz Behbahani, Dr. Jun Zhang, Dr. Hong Lei, John Thon, Cody Thon, Danton Thon, Ali Abul, Hussein Hasan, Dr. Montassar Sharif, Dr. Osama Ennasr, Pratap Bhanu Solanki, Dr. Thassyo Pinto, Maria Castaño, Hongyang Shi, Demetris Coleman, Xinda Qi, Pearce Reickert, Eric Gaskell, Chandler Panetta, Claudia Chen, Austin Coha, Brendan Luecker, Paul Schulman, Robin Onsay, Robert Billette, Ryan Johnson, Lucas Wolfe, Hanish Mehta, Hanchen Xie, Tingyuan Zhang, Anthonios Doliotis, Yujie Hao, Patrick Muñoz, Dr. Scott O'Connor, Andrew Kim, Lei Fang, and Lexie Roberts. I would also like to thank fellow engineering classmates Dr. William Jensen and Thang Pham for their friendship and support as well.

I would like to thank all of the administrators and ECE staff for their assistance and support. In particular, I would like to mention Brian Wright for his advice, spare parts, and fabrication help, as well as Roxanne Peacock, Meagan Kroll, Laurie Rashid, Michelle Stewart, and Jennifer Woods. I would also like to thank the technical support staff at DECS for helping me with all of the computer troubles I have had over the years, as well as the network administrator for the College of Engineering, Adam McDougall, for helping to establish a special internet connection for the embedded boards that I used in all of my experiments, so as to minimize connection loss during trials.

I would like to thank the National Science Foundation (Grants IIS 1319602, CCF 1331852, ECCS 1446793, IIS 1734272, IIS 1715714, IIS 1848945) for the financial support of my research.
Finally, I would like to thank my parents, Cindy and Alan, without whom my life would not be possible; my brother, Brandon, for his encouragement and support; my mother-in-law and father-in-law, Barbara and Fred, for their encouragement, support, and experienced advice; and lastly I would like to thank my wife, Eve, and our son, Isaac, for being so unbelievably patient and supportive of my efforts to continue my education, as well as being the push that kept me going to the end.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
CHAPTER 1 INTRODUCTION
  1.1 Literature Review
    1.1.1 Localization Techniques
    1.1.2 Triangulation
    1.1.3 Triangulation Sensitivity Analysis
    1.1.4 Localization with Kalman Filtering
    1.1.5 Underwater Localization and Communication
    1.1.6 LED-based Communication
    1.1.7 LED-based Localization and Communication
  1.2 Overview of Contribution
    1.2.1 Dynamic Localization Using Position Prediction
    1.2.2 Sensitivity-based Data Fusion for Localization
    1.2.3 Localization of a Mobile Robot During Continuous Motion
  1.3 Dissertation Organization
CHAPTER 2 DYNAMIC LOCALIZATION USING KALMAN FILTERING-BASED POSITION PREDICTION
  2.1 Basic Concept of Optical Localization
  2.2 The Kalman Filter Algorithm
  2.3 System Implementation
    2.3.1 Localization Procedure
    2.3.2 Scanning Procedure
  2.4 Simulation
    2.4.1 Simulation Setup
    2.4.2 Simulation Results
      2.4.2.1 Impact of Angle Measurement Error
      2.4.2.2 Impact of the Error in Initial MN Position Estimation
      2.4.2.3 Justification of Gaussian Noise in Position Measurement Error
  2.5 Experiment
    2.5.1 Setup
    2.5.2 The Effect of Kalman Filter-based Position Prediction
    2.5.3 Results - Localization around a Closed Loop
  2.6 Summary
CHAPTER 3 SENSITIVITY-BASED DATA FUSION FOR MOBILE ROBOT LOCALIZATION
  3.1 The Two-Base-Node Localization Approach
    3.1.1 Measurement Process
    3.1.2 The Kalman Filtering Algorithms
      3.1.2.1 Position-based Kalman Filtering
      3.1.2.2 Angle-based Extended Kalman Filtering
  3.2 Sensitivity Metric-based Data Fusion
  3.3 Simulation
    3.3.1 Alternative Approaches
      3.3.1.1 Variable-R with Minimal Sensitivity
      3.3.1.2 Fixed-R with Averaging
      3.3.1.3 Variable-R with Averaging
      3.3.1.4 Extended Kalman Filtering Scheme
    3.3.2 Simulation Setup
    3.3.3 Simulation Results
  3.4 Experiment
    3.4.1 Setup
    3.4.2 Results
  3.5 Summary
CHAPTER 4 DYNAMIC LOCALIZATION OF A ROBOT IN CONTINUOUS MOTION
  4.1 Overview of the LED-Based Localization Process
    4.1.1 Measurement Process
    4.1.2 The Kalman Filtering Algorithms
    4.1.3 Challenges with Continuously Moving Robot
  4.2 Proposed Approach
    4.2.1 Measurement Equations
  4.3 Simulation
    4.3.1 Simulation Setup
    4.3.2 Simulation Measurements
    4.3.3 Simulation Results
      4.3.3.1 Impact of Body Orientation Angle Measurement Error
      4.3.3.2 Impact of the Robot's Velocity
  4.4 Experiment
    4.4.1 Setup
    4.4.2 Results
  4.5 Summary
CHAPTER 5 CONCLUSION AND FUTURE WORK
  5.1 Concluding Remarks
  5.2 Future Work
BIBLIOGRAPHY

LIST OF TABLES

Table 3.1: Summarized experimental results from the three trials of each algorithm variation. The results include the mean and standard deviation of the estimated position error magnitude.
Table 4.1: Summarized experimental results from the trials of each algorithm. The results include the mean and standard deviation of the estimated position error magnitude and the estimated velocity error magnitude.

LIST OF FIGURES

Figure 2.1: Illustration of base nodes BN1 and BN2 calculating the position of the target node.
Figure 2.2: Comparison of the ground truth and estimated positions for one of the simulated trials using the simple loop trajectory, for the case when the measurement angles of the base nodes are subjected to Gaussian noises with a standard deviation of 0.5◦.
Figure 2.3: Comparison of the ground truth and estimated positions for one of the simulated trials using the figure-eight trajectory, for the case when the measurement angles of the base nodes are subjected to Gaussian noises with a standard deviation of 0.5◦.
Figure 2.4: Illustration of base nodes BN1 and BN2 calculating the position of the target node.
Figure 2.5: Average error computed among all of the simulated trials for the simple loop trajectory for varying amounts of standard deviation in the Gaussian noise added to the angular measurements of the base nodes.
Figure 2.6: Bar graph showing how many of the 100 simulated trials for the simple loop trajectory were able to track the complete trajectory for each level of standard deviation of the Gaussian noise added to the angular measurements of the base nodes.
Figure 2.7: Average error computed among all of the simulated trials for the figure-eight trajectory for varying amounts of standard deviation in the Gaussian noise added to the angular measurements of the base nodes.
Figure 2.8: Bar graph showing how many of the 100 simulated trials for the figure-eight trajectory were able to track the complete trajectory for each level of standard deviation of the Gaussian noise added to the angular measurements of the base nodes.
Figure 2.9: Bar graph showing how many of the 100 simulated trials for the simple loop trajectory were able to track the complete trajectory for each level of standard deviation of the Gaussian noise added to the initial position of the mobile node.
Figure 2.10: Bar graph showing how many of the 100 simulated trials for the figure-eight trajectory were able to track the complete trajectory for each level of standard deviation of the Gaussian noise added to the initial position of the mobile node.
Figure 2.11: Error distribution of the mobile node's position in terms of x and y when the angular measurements are corrupted with Gaussian noise of zero mean and a standard deviation of 1.96◦.
Figure 2.12: Error distribution of the mobile node's position in terms of x and y when the angular measurements are corrupted with noise that is uniformly distributed between [−3.43◦, 3.43◦].
Figure 2.13: Hardware implementations used for the experiments in this work. a) Hardware components on each of the nodes. b) The base node implementation; the base nodes are mounted onto an 80/20® metal beam and communicate data between each other using a physical UART line. c) The mobile node implementation.
Figure 2.14: Overhead view of the grid floor used in experiments. In the image the base nodes are connected to an 80/20® metal beam that was 64 in. (1.63 m). The positive x and y axes run from left to right and from down to up, respectively.
Figure 2.15: Trajectory plot of one of the trials where prediction of the mobile node's position was not used for localizing. The system perceives its location to be the measured position computed from the bearing angles.
Figure 2.16: Trajectory plot of one of the trials where Kalman filter-based prediction of the mobile node's position was used for localizing. The system perceives its location to be the Kalman filtering-based estimated position.
Figure 2.17: Comparison of the ground truth and estimated positions of the experiment.
Figure 2.18: Measured and estimated position error for each step of the trajectory.
Figure 2.19: Measured angle error is within the limits described by the simulation analysis.
Figure 3.1: Illustration of the spatial sensitivity of the two-base-node measurement function, half symmetry.
Figure 3.2: Illustration of the spatial sensitivity of the two-base-node measurement function, full symmetry.
Figure 3.3: Illustration comparing the spatial sensitivity of the two-base-node measurement function when the base nodes are separated by a distance of 4 and 10 grid units.
Figure 3.4: Illustration of the single loop trajectory with the base nodes, BN1, BN2, and BN3, located at the coordinates [−3,−3]T, [0, 0]T, and [3,−3]T, respectively.
Figure 3.5: Line graph showing the average estimated position error for four of the algorithm variations under each level of standard deviation for the angular measurements.
Figure 3.6: Simulation results from one of the trials of the angle-based extended Kalman filtering approach in which the angular measurements experienced noise with a standard deviation of 0.5◦. The graph compares the x and y coordinates of the estimated and ground truth positions for each of the trajectory steps reached by the system during the trial.
Figure 3.7: Simulation results from one of the trials of the angle-based extended Kalman filtering approach in which the angular measurements experienced noise with a standard deviation of 0.5◦. The graph compares the x and y coordinates of the estimated and ground truth velocities for each of the trajectory steps reached by the system during the trial.
Figure 3.8: Overhead view of the grid floor used in experiments.
Figure 3.9: Experimental results from one of the trials of the proposed minimal sensitivity with fixed-R approach. The plot shows the MN's estimated position plotted against the ground truth.
Figure 3.10: Experimental results from one of the trials of the alternative minimal sensitivity with variable-R approach. The plot shows the MN's estimated position plotted against the ground truth.
Figure 3.11: Experimental results from one of the trials of the alternative averaging with fixed-R approach. The plot shows the MN's estimated position plotted against the ground truth.
Figure 3.12: Experimental results from one of the trials of the alternative averaging with variable-R approach. The plot shows the MN's estimated position plotted against the ground truth.
Figure 4.1: Illustration comparing the traditional approach and the proposed dynamic-prediction approach, for the case when the bearing angles θ1 and θ2 are captured at spots Pa and Pb, respectively.
Figure 4.2: Illustration of the moving measurement approach, which shows the situation when θ1 and θ2 are captured at spots Pb and Pa, respectively.
Figure 4.3: Comparison of the ground truth and estimated positions for one of the simulated trials using the proposed dynamic-prediction approach, for the case when the body orientation measurement angle was subjected to Gaussian noises with a standard deviation of 1.0◦ and the MN's velocity was 0.27 grid units/s. The timestamp associated with every other estimated position is shown slightly above the corresponding markers to indicate the progression of time.
Figure 4.4: Comparison of the ground truth and estimated positions for one of the simulated trials using the traditional approach, for the case when the body orientation measurement angle was subjected to Gaussian noises with a standard deviation of 1.0◦ and the MN's velocity was 0.27 grid units/s. The timestamp associated with every other estimated position is shown slightly above the corresponding markers to indicate the progression of time.
Figure 4.5: Plot of the mean and standard deviation of the estimated position errors computed among all of the trials for varying amounts of standard deviation in the Gaussian noise added to the body orientation measurements of the mobile node. a) Average error for the traditional approach; b) Average error for the proposed dynamic-prediction approach.
Figure 4.6: Plot of the mean and standard deviation of the estimated velocity errors computed among all of the trials for varying levels of noise to the body orientation angle measurements. a) Average error for the traditional approach; b) Average error for the proposed dynamic-prediction approach.
Figure 4.7: The average number of angle measurements captured during the trials across each of the different levels of noise to the body orientation angle measurements, for both the dynamic-prediction and traditional approaches.
Figure 4.8: Plot of the mean and standard deviation of the estimated position errors computed across each of the different levels of MN velocity. a) Average error for the traditional approach; b) Average error for the proposed dynamic-prediction approach.
Figure 4.9: Plot of the mean and standard deviation of the velocity errors computed across each of the different levels of MN velocity. a) Average error for the traditional approach; b) Average error for the proposed dynamic-prediction approach.
Figure 4.10: The average number of angle measurements captured during the trials across each of the different levels of MN velocity, for both the dynamic-prediction and traditional approaches.
Figure 4.11: The mobile node implementation used with two Intel® Edison Boards.
Figure 4.12: Overhead view of the grid floor used in experiments.
Figure 4.13: Comparison of the estimated position against the ground truth position over time for one of the trials which used the dynamic-prediction approach.
Figure 4.14: Comparison of the estimated velocity against the ground truth velocity over time for one of the trials which used the dynamic-prediction approach.
Figure 4.15: Comparison of the estimated position against the ground truth position over time for one of the trials which used the traditional approach.
Figure 4.16: Comparison of the estimated velocity against the ground truth velocity over time for one of the trials which used the traditional approach.
Figure 4.17: Bar graph showing the number of scanned measurements for each trial using the proposed dynamic-prediction measurement approach.
Figure 4.18: Bar graph showing the number of scanned measurements for each trial using the traditional measurement approach.
Figure 4.19: Comparison of the ground truth and estimated positions for one of the experiment trials using the proposed dynamic-prediction approach. The timestamp associated with every other estimated position is shown slightly above the corresponding markers to indicate the progression of time.

CHAPTER 1

INTRODUCTION

Localization and communication are essential functions of any sensor network, so it would be considerably valuable to the design of resource-limited robots if both tasks could be achieved using the same mechanism, accomplishing Simultaneous Localization and Communication (SLAC). In this chapter, we start with a brief discussion of the many varieties of localization. The focus of the discussion is then narrowed down to the many approaches to triangulation, followed by a look at some of the existing works that have approached sensitivity analysis of triangulation. Next, the challenges and drawbacks of achieving underwater localization and communication using the current standards are discussed, followed by an introduction to the promising new optical approach to underwater localization and communication and the current state of this field. Afterwards, an overview of the contributions of the proposed LED-based optical system is presented.

1.1 Literature Review

1.1.1 Localization Techniques

Autonomous robots rely on accurate positioning for essential functions such as navigation, data collection, and environmental monitoring [1, 2, 3]. GPS, which is arguably the most common tool for localization, is not available in all environments, such as underwater and indoors [4, 3, 5].
There exist many approaches to solving this challenge, and they vary based on factors such as the type of data used, how the data is captured, and the algorithm that converts the measured data into position. For example, some of the varieties of observed data include distance, angle, and signal strength measurements, which can be captured by sensors such as sonar scanners, RF antennas, inertial sensors, and optical-based sensors, and then processed with techniques like SLAM (Simultaneous Localization and Mapping), dead-reckoning, triangulation, and trilateration [6, 7, 8, 9, 10, 3].

1.1.2 Triangulation

Of these various approaches, group-based approaches offer a unique perspective in that the localization is achieved through a collaborative effort where the various members of the group share their data in order to localize. Triangulation is one of these collaborative approaches, in which the angles relative to several neighbors with known positions, often referred to as beacons (or base nodes), are used to localize the individual robot [11, 12]. Localization through triangulation is, itself, a large field with many implementation techniques in both robotics and surveying engineering, with the latter field referring to triangulation as the three-point resection problem [13, 14, 15]. In their comprehensive review of the subject, Pierlot and Van Droogenbroeck grouped these many approaches into four general categories: Geometric Circle Intersection, Iterative Methods, Geometric Triangulation, and Multiple Beacons Triangulation, with one of the more commonly used approaches being Geometric Circle Intersection [16, 17, 18]. In this particular type of triangulation, two arcs are derived from the bearing angles between three beacons and a target. Each arc spans between one unique pairing of the beacons and passes through all possible coordinates of the target. Thus the intersection point of two of these arcs leads to the position of the target [13, 16, 17, 19]. A form of Geometric Triangulation is implemented by Sergiyenko and coworkers in [20, 21, 22] by finding the bearing angles of a laser transmitter and a receiver necessary for the receiver to detect the light from the transmitter that is deflected off the object of interest. The transmitter and receiver units are mounted on a beam at a fixed distance apart; this, combined with the measured angles, allows the Law of Sines to be used to triangulate the coordinates of the point of deflection on the object.

1.1.3 Triangulation Sensitivity Analysis

Sensitivity or error analysis of triangulation methods to improve certain aspects of the system has been done previously [13, 23, 24]. Tekdas and Isler [23] used a common uncertainty function to implement a beacon placement algorithm, and Font-Llagunes and Batlle [13] presented an error analysis of their triangulation method, also to generate a preferred configuration of beacons. However, in both works the analysis is only used to create a static solution and is not used to dynamically adjust the measurement process to adapt to changes in robot location. A dynamic approach was developed by Madsen et al. [24] for their robot self-positioning triangulation technique, in which the position of the robot was computed from the angles of separation between the landmarks, i.e., points of interest, detected by a camera.
Their method employed a specially designed metric that characterized the relationship between the error in locating the positions of landmarks in the captured image and the error in the computed robot position. This metric would then be used to find the best landmark triplet, within an environment that had many landmarks, to best localize the robot at each point in its trajectory.

1.1.4 Localization with Kalman Filtering

Due to the mobility of a robot, triangulation alone is not enough to effectively track its position, since the measurements corresponding to different poses could be noisy [13, 25]. Consequently, it is common to use an estimation technique to determine a more refined location of the robot, by filtering a measured position with a predicted position based on previous estimations [13, 26, 27, 28]. There are several estimation tools which could be used; however, Kalman filtering-based methods are often preferred due to their simplicity, fast responsiveness, minimal storage requirements, and low computation costs [28, 29, 30]. There are many examples and variations of Kalman filtering being used for position estimation in the literature; for instance, Rana et al. [31] used Kalman filtering to estimate the position and velocity of a 2D-moving object for video surveillance, and Feng et al. [32] used a Kalman filter to predict the future location of a vehicle.

1.1.5 Underwater Localization and Communication

A noticeable advantage of group-based localization methods is their ability to function in environments where GPS signals are inaccessible, such as indoors and underwater. Of the handful of techniques that can be used underwater, many are implemented with the use of acoustic signals, which are currently the predominant means for underwater localization and communication. However, acoustic approaches tend to further complicate or constrain the localization algorithm due to the inherent limited bandwidth, long propagation delays, and multipath effects, which result in low data rates and low signal reception reliability [33, 34, 35]. Moreover, devices that implement acoustic-based methods tend to be bulky and power-hungry, making them unsuitable for small underwater robots with limited resources [36].

1.1.6 LED-based Communication

Optical communication systems based on Light-Emitting Diodes (LEDs) are an up-and-coming alternative to acoustic-based methods. In recent years, LED systems have shown promise in high-rate, low-power underwater communication over short-to-medium distances [37, 38, 39]. For example, the system developed by Brundage [40], which used a Titan blue lighting LED, achieved communication rates of roughly 1 Mb/s at a distance of 13 m, while Doniec and Rus [41] demonstrated data rates of 4 Mb/s at a distance of 50 m with their communication system, AquaOptical II, which used an array of 18 Luxeon Rebel LEDs for the transmitter. However, a downside of LED-based communication is the requirement of near line-of-sight (LOS) between the transmitter and the receiver. The latter challenge has been addressed in several ways, including the use of redundant transmitters/receivers [42, 43, 44, 45] and active alignment [38, 46, 47].

1.1.7 LED-based Localization and Communication

Indoor LED-based optical localization and communication systems have been developed by using visible light communication (VLC) systems, in which the overhead lights used to illuminate the room can also be used as the transmission medium for both data and localization purposes [48, 49, 5]. Nguyen et al.
[50] developed a VLC localization approach that integrates the angle of arrival (AOA) and received signal strength (RSS) of the light to compute the location, achieving a minimum simulated error of 10 cm. Qiu et al. [48] achieved a localization accuracy of 0.56 m using a fingerprint matching approach, where fingerprints are a mapping of position and the light intensities of each light in the environment, and each light transmits a unique beacon pattern allowing the localizing robot to associate a light intensity with a particular overhead fixture. While VLC-based localization approaches are an alternative to radio-frequency methods indoors and can work underwater in theory [51, 52], they are not practical for a typical aquatic environment due to the difficulty in illuminating the significantly larger and more complex environment.

An alternative form of LED-based optical localization is through the use of cameras as the means for capturing the light. For instance, while Nguyen et al. [50] used an array of photodiodes in their VLC approach, Zachár et al. [5] and Liang et al. [49] used cameras to identify the light in their works. Cameras can also be used for LED-based optical localization in other ways. For instance, Giguere et al. [53] used cameras mounted on several robots which interacted in a cooperative manner to derive the relative position and orientation of each of the robots based on the LED landmarks mounted on each robot. Suh et al. [54] proposed a similar approach where a group of robots with cameras localized themselves by splitting the robots into alternating groups of stationary and moving robots. The stationary robots would track the LED markers on the mobile robots using multiview geometry. However, implementing camera-based techniques underwater is challenging, due to various degradation problems with obtaining the images, such as light absorption and scattering [55, 56]. While there are techniques for enhancing the imaging quality (for example, histogram equalization), these involve additional processing that is simply not needed when using photodiodes.

An underwater LED-based localization and communication system was presented by Rust and Asada in [44]. This approach relies on a nonlinear light intensity model to calculate the distance between the transmitting LED and the receiving photodiode. However, such an approach is prone to error since light intensity depends on both distance and receiver-transmitter alignment. The method in [44] also uses a photodiode array to determine the angle of the light source, which increases the size and complexity of the system.

In this dissertation, an alternative concept of LED-based Simultaneous Localization and Communication (SLAC) is proposed, where the line of sight (LOS) requirement in LED-based communication is exploited to extract the relative bearing of the two communicating parties. Such bearing information for a mobile robot, with respect to multiple nodes with known locations (called base nodes), can then be used to infer the location of the robot via triangulation. To cope with the mobile nature of the robot, we further propose the use of a Kalman filter to predict the position of the robot, to facilitate the establishment of the LOS between the base nodes and the mobile robot.
1.2 Overview of Contribution

The contributions of this research reside within the extensive design work of the proposed LED-based Simultaneous Localization and Communication system for terrestrial use, with the overall intention that its design can be readily extended to an underwater setting. The details of these contributions are as follows.

1.2.1 Dynamic Localization Using Position Prediction

First, the design of an LED-based system that achieves Simultaneous Localization and Communication (SLAC), where the line of sight (LOS) requirement for communication is exploited to extract the relative bearing of the communicating parties for localization, is presented. A key contribution of this design is the proposal of Kalman filter-based position prediction of the mobile robot, to reduce the overhead of establishing the LOS, thereby significantly improving the quality of the localization. The effectiveness of this proposed optical localization system is demonstrated with extensive simulation and experiments, including a comparison with an alternative approach not using Kalman filtering-based location prediction. In particular, the robustness of the proposed approach with respect to the LOS angle measurement error and the uncertainty in the robot's initial position is assessed in simulation. Simulation results show that the system can localize effectively when the angle measurements have an error with a standard deviation of 3.0◦ or less. Experimental results show that the system is able to consistently localize the mobile node and maintain tracking of the robot indefinitely. In contrast, a version of the approach that does not use the Kalman filtering-based position prediction is only able to localize the mobile node for a relatively small number of steps of the trajectory before losing track of its position.

1.2.2 Sensitivity-based Data Fusion for Localization

Second, we consider the design of a localization system which uses a group of (more than two) base nodes, with the main focus set on finding the best way to fuse the data from the multiple sources so as to enhance the positioning accuracy. One of the goals of this design is to address the singularity issue that arises with the position measurement when the mobile robot becomes close to collinear with a pair of base nodes. The redundancy from the extra nodes allows for alternative base node pairings when one or more pairings are in a collinear configuration with the robot. An important consideration of this design is how to fuse the additional bearing angle information effectively. Thus, this design proposes the use of a sensitivity metric, which represents how sensitive a triangulated position is with respect to the bearing measurement error, to choose the pair of base nodes for triangulation that is the most robust at that time instant. In particular, the base node pairing with the lowest sensitivity metric is chosen to compute the target's position using the corresponding measured bearings. The resulting position is then fed into the Kalman filter to predict the target's next position, which is critical for facilitating the LOS establishment for the next round of communication and localization.

The proposed solution is evaluated with simulation and experimentation, in a setting of three beacon nodes and one mobile node. Both the simulation and experimentation compare the performance of the proposed sensitivity metric-based approach against three alternative methods.
These three approaches include a different means of data fusion in the form of averaging the triangulated target positions computed from the bearing angles of each base node pairing, a variation to the filtering scheme which incorporates the current angle measurements in the measurement noise covariance matrix, and a combination of the averaging fusion technique with this filtering scheme variation. A fourth alternative approach is also examined in simulation, in which the captured bearing angles are used directly as the system output and are nonlinear functions of the states, thereby entailing the use of an extended Kalman filtering (EKF) scheme [57].

Results show that the proposed sensitivity metric-based approach outperforms the alternative variations and achieves an average estimated position error of roughly 0.18 grid units in experiments, whereas the alternative averaging and filtering scheme approaches achieve average estimated position errors of approximately 0.19, 0.28, and 0.25 grid units in experiments, respectively. The alternative EKF-based approach yielded relatively poor results, with the estimated velocity diverging away from the ground truth fairly quickly, thereby causing the system to fail at maintaining the LOS.

1.2.3 Localization of a Mobile Robot During Continuous Motion

The third and final contribution is the design of an LED-based localization system which is capable of capturing the position of the robot while it is continuously moving. In particular, the proposed approach takes advantage of the estimated velocity from the Kalman filter to properly correlate the two consecutive measurements of bearing angles with respect to the two base nodes for the position computation. In contrast to the previous works, this approach also uses, for the first time, a rigid-body model to more accurately estimate the robot's movement. The effectiveness of this proposed optical localization system is demonstrated with simulation and experimentation in a two-dimensional setting, with a comparison to an alternative approach which does not use the predicted-velocity method, i.e., the traditional approach used in our previous works. In particular, the simulation analyzes the system's performance over a range of noise levels for the body orientation measurement and a variety of different velocities of the robot. Results from both simulation and experiments show that the proposed dynamic-prediction method performs consistently better than the traditional method.

1.3 Dissertation Organization

The remainder of this dissertation is organized as follows. In Chapter 2, the design of our two-base-node LED-based optical localization system is presented and the importance of using Kalman filtering to predict the future positions of the robot is highlighted. The design of a multi-base-node system which uses our proposed sensitivity metric for data fusion is shown next in Chapter 3. This is followed by Chapter 4, which discusses the approach for localizing a continuously moving robot. Finally, concluding remarks are provided in Chapter 5.

CHAPTER 2

DYNAMIC LOCALIZATION USING KALMAN FILTERING-BASED POSITION PREDICTION

2.1 Basic Concept of Optical Localization

To simplify the discussion, in this work the localization approach is considered in the two-dimensional space. It is assumed that each node is equipped with an optical transceiver comprised of an LED transmitter and a photodiode receiver, and that the transceiver is able to rotate a full 360◦.
Furthermore, the node is able to identify at any particular moment the angle at which its transceiver is facing with respect to a reference direction, such as the east axis identified by a magnetic compass.

Consider a three-node network composed of a pair of base nodes (with known locations) and a mobile node to be localized, as illustrated in Figure 2.1. Through the LOS measurement, the base nodes, denoted as BN1 and BN2, respectively, determine the bearing angles of the mobile node (MN) with respect to a common x-axis, denoted as θ1 and θ2. The location of the mobile node can then be found using the locations of BN1 and BN2 and the bearing angles θ1 and θ2:

$$\begin{bmatrix} x_t \\ y_t \end{bmatrix} = \begin{bmatrix} x_1 + |d_1| \cos\theta_1 \\ y_1 + |d_1| \sin\theta_1 \end{bmatrix} \tag{2.1}$$

where $[x_t, y_t]^T$ is the position vector of the mobile node MN, $x_1$ and $y_1$ are the respective x- and y-coordinates of BN1, and $|d_1|$ is the magnitude of the vector $d_1$ shown in Figure 2.1, obtained using the Law of Sines:

$$|d_1| = \frac{D \sin(\bar{\theta}_2)}{\sin(\gamma)} \tag{2.2}$$

Here $D$ is the distance between BN1 and BN2, $\bar{\theta}_2$ is the supplement of $\theta_2$, $\bar{\theta}_2 = 180^\circ - \theta_2$, and $\gamma$ is the angle corresponding to the side BN1–BN2 within the MN–BN1–BN2 triangle, $\gamma = \theta_2 - \theta_1$.

Figure 2.1: Illustration of base nodes BN1 and BN2 calculating the position of the target node.
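As a concrete illustration of (2.1)–(2.2), the following Python sketch triangulates the mobile node from the two bearing angles. It is a minimal reconstruction of the math above, not the dissertation's implementation; the function and variable names (triangulate, bn1, theta1, and so on) are chosen here for readability, and angles are assumed to be given in degrees measured from the common x-axis.

```python
import math

def triangulate(bn1, bn2, theta1, theta2):
    """Triangulate the MN position from bearing angles, per (2.1)-(2.2).

    bn1, bn2: (x, y) locations of base nodes BN1 and BN2.
    theta1, theta2: bearing angles of the MN seen from BN1 and BN2,
    in degrees, measured from the common x-axis.
    """
    D = math.dist(bn1, bn2)          # distance between the base nodes
    theta2_bar = 180.0 - theta2      # supplement of theta2
    gamma = theta2 - theta1          # angle at the MN within the triangle
    # Law of Sines (2.2): magnitude of the vector from BN1 to the MN
    d1 = D * math.sin(math.radians(theta2_bar)) / math.sin(math.radians(gamma))
    # (2.1): step from BN1 along the bearing theta1
    x_t = bn1[0] + d1 * math.cos(math.radians(theta1))
    y_t = bn1[1] + d1 * math.sin(math.radians(theta1))
    return x_t, y_t

# Base nodes at (-3, 0) and (3, 0), as in the simulation setup of Section 2.4.1;
# an MN at (0, -3) is seen at bearings of 315 deg from BN1 and 225 deg from BN2.
print(triangulate((-3.0, 0.0), (3.0, 0.0), 315.0, 225.0))  # -> (0.0, -3.0) up to rounding
```

Note that the computation degenerates as γ approaches 0◦, i.e., when the MN approaches a collinear configuration with the two base nodes; this is the singularity issue that motivates the multi-base-node design of Chapter 3.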
Although this localization process seems simple, the task is involved, especially when the target is mobile. The challenge comes from the need to have sufficient synchronization and coordination among all three nodes to produce proper LOS measurements. Otherwise the mobile node could be "spotted" (via LOS) too infrequently, not at all, or not nearly simultaneously by both base nodes. Another challenge results from the error in the measured θ1 and θ2 – purely relying on the algebraic calculation (2.1) will lead to highly variable (instead of smooth) estimated trajectories for the mobile node MN. To help address both challenges, Kalman filtering is proposed for predicting and estimating the location of the MN, based on the measured location computed via (2.1). In particular, the prediction of the MN location is exploited to significantly reduce the effort of searching for LOS and thus enable efficient and accurate localization. The Kalman filtering algorithm is presented next.

2.2 The Kalman Filter Algorithm

Kalman filtering is a powerful and computationally efficient technique for estimating the state of linear systems with Gaussian noises [58, 59, 60], and it has been widely adopted for estimation and control in various applications, including robotics. The main focus of this work is the maintenance of the line of sight (LOS) between the base nodes and the mobile node. This is achieved using a Kalman filter to predict the future location of the robot in order to generate anticipated angles for transceiver orientation. The mobile node's dynamics are assumed to be sufficiently described by a constant-velocity model corrupted with Gaussian noise, since it is not practical for the base nodes to have precise prior knowledge of the mobile node's movement. While other filtering schemes could potentially be used, our assumption enables the use of computationally efficient Kalman filtering for predicting the mobile node's position. As demonstrated later in this chapter, these predictions are crucial for efficient establishment of LOS measurement and thus the success of the localization scheme.

The dynamics for the mobile node can be represented as:

$$p_{k+1} = p_k + \Delta t_k\, v_k + w_{1,k} \tag{2.3}$$

$$v_{k+1} = v_k + w_{2,k} \tag{2.4}$$

where $p_k = [x_k, y_k]^T$ and $v_k = [v_{x,k}, v_{y,k}]^T$ are the position and velocity vectors of the mobile node at the k-th time instant, $w_{1,k}$ and $w_{2,k}$ are independent, zero-mean, white Gaussian noises, and $\Delta t_k$ is the k-th sampling interval. The observation $z_k$ is the noise-corrupted location measurement, computed based on (2.1) – (2.2):

$$z_k = p_k + w_{3,k} \tag{2.5}$$

where $w_{3,k}$ is assumed to be white, zero-mean Gaussian, and independent of the process noises $w_{1,k}$ and $w_{2,k}$. The state vector $\hat{x}_k$ of the Kalman filter is defined as

$$\hat{x}_k = [\hat{x}_k, \hat{y}_k, \hat{v}_{x,k}, \hat{v}_{y,k}]^T \tag{2.6}$$

where $[\hat{x}_k, \hat{y}_k]$ and $[\hat{v}_{x,k}, \hat{v}_{y,k}]$ are the estimated position and velocity, respectively, of the mobile node. The Kalman filter uses the following noiseless version of the mobile node's motion model to generate the next predicted position and velocity of the node:

$$\hat{x}_k^- = A_{k-1} \hat{x}_{k-1} \tag{2.7}$$

$$\hat{P}_k^- = A_{k-1} \hat{P}_{k-1} A_{k-1}^T + Q_{k-1} \tag{2.8}$$

where $\hat{x}_k^-$ is the a priori state estimate, $\hat{P}_k^-$ is the predicted state covariance matrix, $Q_k$ is the process noise covariance matrix, and $A_k$ is the matrix form of the mobile node's motion model, defined as:

$$A_k = \begin{bmatrix} 1 & 0 & \Delta t_k & 0 \\ 0 & 1 & 0 & \Delta t_k \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Based on the measurement, the state estimate is updated as follows:

$$K_k = \hat{P}_k^- H^T \left( H \hat{P}_k^- H^T + R \right)^{-1} \tag{2.9}$$

$$\hat{x}_k = \hat{x}_k^- + K_k \left( z_k - H \hat{x}_k^- \right) \tag{2.10}$$

$$\hat{P}_k = (I - K_k H) \hat{P}_k^- \tag{2.11}$$

where $K_k$ is the Kalman gain, $R$ is the measurement noise covariance matrix, $\hat{P}_k$ is the posterior estimate of the system's covariance matrix, and $H$ is the observation matrix:

$$H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}$$

At time k, the base nodes perform an angular search process to ultimately generate the observation $z_k$ to be used in the state estimate update in (2.9)–(2.11). The angular search process for each base node is centered about the anticipated values of θ1 and θ2 (recall Figure 2.1), computed from the position component of the predicted state estimate $\hat{x}_k^-$. In particular, these anticipated angles, $\hat{\theta}_{1,k}$ and $\hat{\theta}_{2,k}$, are computed by using:

$$\hat{\theta}_{i,k} = \cos^{-1}\!\left( \frac{d_i \cdot u}{|d_i|\,|u|} \right), \quad \text{for } i = 1, 2 \tag{2.12}$$

where

$$d_1 = \begin{bmatrix} \hat{x}_k^- - a_1 \\ \hat{y}_k^- - b_1 \end{bmatrix} \tag{2.13}$$

$$d_2 = \begin{bmatrix} \hat{x}_k^- - a_2 \\ \hat{y}_k^- - b_2 \end{bmatrix} \tag{2.14}$$

$$u = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \tag{2.15}$$

Here $[a_1, b_1]$ and $[a_2, b_2]$ are the locations of the base nodes BN1 and BN2, respectively, and $a \cdot b$ is the dot product between vectors $a$ and $b$. The mobile node, in the meantime, will use its predicted position to calculate the angular locations of the base nodes relative to itself, and focus its light along these angles during the angular search.
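The prediction–update cycle (2.7)–(2.11) can be sketched in a few lines of NumPy. This is a generic illustration of the filter defined above under the constant-velocity model, not the dissertation's code; the function names are this sketch's own, and the bearing helper uses atan2 in place of the arccos form of (2.12) so that the full 0◦–360◦ range is recovered directly.

```python
import numpy as np

def kf_predict(x_hat, P_hat, dt, Q):
    """Prediction step (2.7)-(2.8) under the constant-velocity model."""
    A = np.array([[1.0, 0.0, dt, 0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    x_pred = A @ x_hat                 # a priori state estimate (2.7)
    P_pred = A @ P_hat @ A.T + Q       # predicted state covariance (2.8)
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, R):
    """Update step (2.9)-(2.11) using the triangulated position z."""
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])                  # observation matrix
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Kalman gain (2.9)
    x_new = x_pred + K @ (z - H @ x_pred)                   # state update (2.10)
    P_new = (np.eye(4) - K @ H) @ P_pred                    # covariance update (2.11)
    return x_new, P_new

def anticipated_bearing(x_pred, bn):
    """Anticipated angle of the MN seen from base node bn, cf. (2.12)-(2.15)."""
    dx, dy = x_pred[0] - bn[0], x_pred[1] - bn[1]
    return np.degrees(np.arctan2(dy, dx)) % 360.0
```

Each localization round then amounts to calling kf_predict, centering each base node's scan on anticipated_bearing, triangulating the scanned angles into the observation z, and calling kf_update.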
On the other hand, the base 14 nodes use the state estimate to predict the mobile node’s next position and the corresponding angles ˆ1,+1 and ˆ2,+1 from (2.12) – (2.15) so each base node can scan the light coming from the mobile node. 4. Update : After scanning, the collected angles 1 and 2 are used to generate the observed position z through (2.1) – (2.2), which is then used to update the state vector. 5. Repeat : Repeat steps 1 – 4. At the start of the program, before the first synchronization sequence, it is assumed that all of the nodes have knowledge of the initial position and velocity of the mobile node, but such knowledge could have error. This chapter will study the impact of the initial estimate error on the system performance. The mobile node’s stop-and-go movement ensures that its position is the same for each of the base nodes’ scans. This is crucial to the localization accuracy since the measurement equations in (2.1) – (2.2) assume that the observed angles correspond to the mobile node at a single location. Overall, it is important to point out that, because of the synchronization process, this approach is able to keep a relatively constant sampling time between the measurements despite the stop-and-go movement from the mobile node. In particular, the Kalman filter is able to capture reasonably well the average velocity of the robot. 2.3.2 Scanning Procedure The base node scanning procedure, mentioned above, is the light searching measurement process executed individually on each base node to obtain the angle of the mobile node relative to each base node that will be collectively used in (2.1) – (2.2) to generate the mobile node’s current position. For each base node, the process involves four consecutive sweeps, composed of 2 pairs of clockwise then counter-clockwise sweeps, about the anticipated angle of the mobile nodes. During 15 regular increments of each of the sweeps, which have a range of 60◦, the base node reads and then records the light intensity, associating it with the corresponding orientation of the transceiver at that particular instant. After each sweep, the intensities are processed through a linear filter to smooth out any irregularities in the intensities such as spikes or dips. The angle associated with the median of the group of highest intensities is used as the measured angle of that sweep. This last process helps to mitigate problems associated with moderate light saturation, where a significant span of angles have the same or very similar intensity levels, which obscures the correct angle associated with the direction of the mobile node. After all four sweeps, the angles from each of the sweeps that have an intensity greater than a pre-determined amount are averaged and used as that base node’s measurement for that scan. The predetermined threshold is used to determine that the intensity belongs to the mobile node’s light and not the lights of the surrounding room. If none of the angles from the sweeps has an intensity greater than the threshold, then the base node is unable to the report a measured angle and no position can be measured for that iteration. In addition to requiring that both base nodes produce an observed angle, the observed position is only computed when the observed angles make the following relationship about them true: 2 > 1, 1 > 2, 0◦ < 1, 2 ≤ 180◦ 180◦ < 1, 2 ≤ 360◦ (2.16) These relationships help to ensure that the collected angles actually converge to a point on the same half of the -axis as where the scan has taken place. 
2.4 Simulation Prior to testing the approach experimentally, simulation was conducted to examine the perfor- mance of the scheme, especially its robustness against measurement errors and uncertainty in the initial state estimate. 16 2.4.1 Simulation Setup The simulation environment had an area defined as  ∈ [−6, 6],  ∈ [−11, 1] in grid units to mimic the physical space of the experiment, where a grid unit is equivalent to approximately 23 cm. The base nodes, BN1 and BN2, were positioned at [−3, 0] and [3, 0] respectively. Two different trajectories for the robot were used. The first was a simple closed loop as shown in Figure 2.2, and the second was a figure 8-shaped loop as shown in Figure 2.3. Each trajectory was composed of a number position points, which were used as the ground truth positions in the simulation. In the absence of physical LED and photodiode components, a simulated version of LOS detection was developed. In particular, two nodes, A and B, have an LOS between themselves if and only if A can view the light emitted from B and B can view the light emitted from A or as rewritten as a logical expression: (  ↔ ) ⇐⇒ (  → ) ∧ ( → ) (2.17) In simulation, to determine if node A could view node B, (  → ), node A would project its field of view (FOV) as a virtual triangle over the simulation environment. Assume that node B was already directly oriented towards node A. Then under these conditions the only way node B would be able to see node A’s light was if node B was within the area of the projected FOV triangle. Node B is within the area of the triangle, if the sum of angles between the corner vectors, which point from node B’s position to the corners of the triangle, sum to 360◦. This method is illustrated in Figure 2.4, where a slotted-lined triangle is projected from the center of node A to the edge of the simulated area, the corner vectors are represented by the solid arrows stemming from the center of node B and the angles between them are labeled 1, 2, and 3. For the simulated measurement step, the orientation value supplied to the mobile node for adjusting its simulated transceiver’s direction, was obtained by finding the angle between the 0◦ orientation vector and the vector that points from the previous to the current ground truth position with an added zero-mean Gaussian noise, with a standard deviation of 0.5◦, to simulate the imperfection of a physical orientation sensor. The “measured” position in simulation is obtained by 17 Figure 2.2: Comparison of the ground truth and estimated positions for one of the simulated trials using the simple loop trajectory, for the case when the measurement angles of the base nodes are subjected to Gaussian noises with a standard deviation of 0.5◦. Figure 2.3: Comparison of the ground truth and estimated positions for one of the simulated trials using the figure-eight trajectory, for the case when the measurement angles of the base nodes are subjected to Gaussian noises with a standard deviation of 0.5◦. triangulation using the simulated measured bearing angles, and the latter are obtained by corrupting the ground-truth bearing angles with independent, zero-mean, white Gaussian noise. The amount of error introduced to these angle measurements was controlled by changing the standard deviation of the Gaussian noise. 
Figure 2.4: Illustration of base nodes BN1 and BN2 calculating the position of the target node.

2.4.2 Simulation Results

2.4.2.1 Impact of Angle Measurement Error

First the system was analyzed under different levels of angle measurement error. This was achieved by ranging the standard deviation from 0.5° to 5.0° in increments of 0.5°. For each level of standard deviation, 100 trials were conducted. To make the randomness repeatable, a vector of 100 random seeds was chosen, and the seed corresponding to each trial number was used across all of the different levels of standard deviation. Figure 2.2 and Figure 2.3 show the comparison between the ground truth positions and the corresponding Kalman filtering-based estimated positions of the robot in a sample run for the simple loop trajectory and the figure-8 trajectory, respectively, where the angle measurement error had a standard deviation of 0.5°.

Figure 2.5 shows the average estimated position error and the measured position error among all of the trials for the simple loop trajectory, under each level of standard deviation for the angle measurement error. The estimated (resp., measured) error is the magnitude of the error obtained by comparing the estimated (resp., measured) positions with the corresponding ground truth position. The estimated positions are the output positions from the Kalman filter, whereas the measured positions, z_k, are computed directly from the observed bearing angles. The average errors shown in Figure 2.5 were computed using the mean errors from each trial, which were obtained in each trial by averaging the estimated and measured errors from all of the steps of the trajectory the system had reached during that trial. As the standard deviation of the Gaussian noise gets larger, fewer trials were able to reach all of the steps of the trajectory. This is reflected in Figure 2.6, which shows, for each standard deviation, how many of the 100 trials were able to reach all the steps. Similarly, Figure 2.7 shows the average measured and estimated errors from the 100 runs for the figure-8 trajectory, and Figure 2.8 shows the number of trials that were fully completed for the figure-8 trajectory.

Figure 2.5: Average error computed among all of the simulated trials for the simple loop trajectory for varying amounts of standard deviation in the Gaussian noise added to the angular measurements of the base nodes.

Figure 2.6: Bar graph showing how many of the 100 simulated trials for the simple loop trajectory were able to track the complete trajectory for each level of standard deviation of the Gaussian noise added to the angular measurements of the base nodes.

Figure 2.7: Average error computed among all of the simulated trials for the figure-eight trajectory for varying amounts of standard deviation in the Gaussian noise added to the angular measurements of the base nodes.

Figure 2.8: Bar graph showing how many of the 100 simulated trials for the figure-eight trajectory were able to track the complete trajectory for each level of standard deviation of the Gaussian noise added to the angular measurements of the base nodes.
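The fixed-seed trial protocol described above can be sketched as below; run_trial is a hypothetical stand-in for one full simulated trajectory run, and reusing the per-trial seeds guarantees that every noise level sees the same 100 random sequences:

    import numpy as np

    def run_trial(rng, sigma_deg):
        """Placeholder for one simulated trajectory; in the real simulation this
        would drive the robot through all steps with bearing noise N(0, sigma^2)
        drawn from rng, returning the trial's mean position error."""
        return float(abs(rng.normal(0.0, sigma_deg)))   # stand-in value only

    seeds = np.arange(100)                              # one fixed seed per trial
    for sigma in np.arange(0.5, 5.01, 0.5):             # noise levels, degrees
        errs = [run_trial(np.random.default_rng(s), sigma) for s in seeds]
        print(f"sigma = {sigma:.1f} deg: mean error {np.mean(errs):.3f}")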
Collectively the graphs show that the system functions well when the angular measurement error has a standard deviation of 2.0° or less, as this allows the system to track the mobile robot for the entire trajectory with a 100% success rate. The system performance is still largely satisfactory when the standard deviation is about 3.0°. As the standard deviation of the angle measurement error increases, both the measured and estimated position errors increase, as expected, and the number of trials that fail to track the full trajectory rises. We also note that the estimated position is always slightly more accurate than the measured position computed directly from the bearing angles. Finally, the localization performance for the simple loop case is largely comparable to that for the figure-8 case, with slight performance degradation for the latter, suggesting that the proposed scheme is robust to different trajectories for the mobile robot.

2.4.2.2 Impact of the Error in Initial MN Position Estimation

The simulation next examined the case where the position of the mobile node initialized into the state vector had a varying amount of error from the ground truth. Similar to the angle measurement error case, independent, zero-mean, white Gaussian noises were added to the x and y coordinates of the mobile node's initial ground truth position to obtain the initial estimate of the position. As was done in the previous case, the amount of error introduced to this initial position estimate was controlled by changing the standard deviation of the injected noise, from 0.5 grid units to 3.0 grid units in increments of 0.5 grid units. Figures 2.9 and 2.10 show, for each level of the standard deviation, the corresponding numbers of trials completing all steps of the trajectory for the simple loop and figure-8 trajectories, respectively. The trend from these figures indicates that, with an increased error in the initial position estimate, the number of runs completing the full trajectory drops. We note that the system does not have a 100% success rate completing the full trajectories even when the standard deviation is as low as 0.5 grid units. Analysis of simulation data indicates that these failures were caused by relatively large (greater than 1 grid unit) realizations of the injected noise for those runs.

Figure 2.9: Bar graph showing how many of the 100 simulated trials for the simple loop trajectory were able to track the complete trajectory for each level of standard deviation of the Gaussian noise added to the initial position of the mobile node.
Figure 2.10: Bar graph showing how many of the 100 simulated trials for the figure-eight trajectory were able to track the complete trajectory for each level of standard deviation of the Gaussian noise added to the initial position of the mobile node.

2.4.2.3 Justification of Gaussian Noise in Position Measurement Error

The measurement model, Eq. (4.5), assumes a Gaussian noise in the measurement of the robot location. This assumption, along with the assumptions made on the process noise, facilitated the use of the Kalman filter for position prediction and estimation. The physical implementation of the position measurement, of course, is through triangulation using the measured bearing angles. Next we justify the assumption in (4.5) by evaluating via simulation the measured position error distribution, based on the statistics of the error in the bearing angle measurement. In particular, the simulation examined the error distribution in the computed position of the mobile node when the ground truth angular measurements were corrupted with Gaussian noise and with uniformly distributed noise. In each case the same set of 1,000 random location points, uniformly distributed throughout the simulation area, was used. For each of these location points, 100 samples of the position error were computed based on the error in bearing angle measurements. Figure 2.11 shows the resulting position error distribution when the bearing angle measurement was corrupted with a Gaussian noise with a standard deviation of 1.96°, while Figure 2.12 shows the resulting position measurement distribution when the angle measurement was corrupted with a uniform noise distributed between [−3.43°, 3.43°]. It can be seen that these distributions closely resemble Gaussian distributions. In addition, treating the position measurement error as Gaussian is also supported by the effectiveness of the proposed localization scheme in both the simulation results in this section and the experimental results in the next section.

Figure 2.11: Error distribution of the mobile node's position in terms of x and y when the angular measurements are corrupted with Gaussian noise of zero-mean and a standard deviation of 1.96°.

Figure 2.12: Error distribution of the mobile node's position in terms of x and y when the angular measurements are corrupted with noise that is uniformly distributed between [−3.43°, 3.43°].
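A minimal sketch of this justification study is given below, assuming the angle-form triangulation of (3.18) – (3.19) (equivalent to (2.1) – (2.2)) and the base node placement of this chapter; the collected error components can then be histogrammed as in Figures 2.11 and 2.12:

    import numpy as np

    BN1, BN2 = np.array([-3.0, 0.0]), np.array([3.0, 0.0])
    D = np.linalg.norm(BN2 - BN1)

    def triangulate(t1, t2):
        """Position from two bearings (radians), per (3.18)-(3.19)."""
        return BN1 + D * np.sin(t2) / np.sin(t2 - t1) * np.array([np.cos(t1), np.sin(t1)])

    def bearing(bn, p):
        return np.arctan2(p[1] - bn[1], p[0] - bn[0])

    rng = np.random.default_rng(0)
    sigma = np.deg2rad(1.96)
    errors = []
    for _ in range(1000):                       # random ground-truth locations,
        # kept below the base-node line to avoid the collinear singularity
        p = rng.uniform([-6.0, -11.0], [6.0, -1.0])
        t1, t2 = bearing(BN1, p), bearing(BN2, p)
        for _ in range(100):                    # noisy samples per location
            z = triangulate(t1 + rng.normal(0, sigma), t2 + rng.normal(0, sigma))
            errors.append(z - p)
    errors = np.asarray(errors)                 # histogram errors[:, 0], errors[:, 1]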
2.5 Experiment

2.5.1 Setup

The transceiver for each node consisted of a single CREE XRE 1-Watt blue LED (transmitter) and a blue-enhanced photodiode (receiver) mounted on a circular PCB that housed the transceiver circuitry developed by Al-rubaiai in [46]. For the transceiver PCB to achieve 360° rotation, it was connected to the shaft of a stepper motor, which extended through the hollow center of a slip ring, allowing the wiring between the PCB circuit and the embedded controller to rotate freely with the motor. The motor and slip ring are mounted together via a 3D-printed base structure. Figure 2.13a illustrates the locations of these components on each node. The main processing unit for each node was an Intel® Edison Board with an Arduino® Expansion Board. It controlled the rotation of the stepper motor, the transmission and reception of the LED signals, as well as the processing of the Kalman filter data. The Intel® Edison Boards had a 500 MHz Intel® Atom dual-core processor with 1 GB of DDR3 RAM, and a built-in dual-band 2.4 GHz and 5 GHz Broadcom® 43340 802.11 a/b/g/n Wi-Fi adapter.

The stepper motor was controlled through a Sparkfun® Big Easy Driver, with the step resolution set to 0.225°/step. The orientation of the transceiver was determined by keeping count of the number of steps rotated and converting back and forth to degrees when needed. The output of the photodiode on the circular PCB had an approximate range of 0.0 to 7.45 V. However, because the analog-to-digital (A2D) circuitry of the Intel® Edison Board could only read from 0.0 to 5.0 V, an external potentiometer was used to scale down the maximum output of the photodiode so that the range of values entering the board was between 0.0 and 5.0 V. Later, in the code, the values read from the particular A2D pin were scaled back up to the original range of 0.0 to 7.45 V.

For the mobile node, its 3D-printed base was mounted on top of a Lynxmotion® Aluminum 4WD1 Rover Kit, whereas for the base nodes, the same 3D-printed part was mounted to an 80/20® metal beam that was 64 in. (1.63 m) long. Mounting the base nodes onto the metal beam helped in maintaining their fixed positions and orientations.

Figure 2.13: Hardware implementations used for the experiments in this work. a) Hardware components on each of the nodes. b) The implementation of the base nodes, which are mounted onto an 80/20® metal beam and communicate data between each other using a physical UART line. c) The mobile node implementation.

The Kalman filter computations were done solely on BN1. However, because BN1 and BN2 each capture their own angle data separately, BN2 needed to transmit its measurements to BN1 over a physical three-wire Universal Asynchronous Receiver/Transmitter (UART) line, as shown in Figure 2.13b. In addition to exchanging angle data, the UART line was also used by BN1 to update BN2 with the state vector estimates of the MN that had been computed from the Kalman filter, which allowed BN2 to update its search angle accordingly.

The mobile node moved around using four 12 V DC gear-head motors, which had a gear reduction of 50:1 and a no-load output of 120 RPM. The motor outputs were controlled by a Dimension Engineering® Sabertooth 2×32 Dual Motor Driver, which received simulated radio control (RC) servo pulses from a BotBoarduino microcontroller, an Arduino® Duemilanove variant made specifically for Lynxmotion® robots. The Intel® Edison Board would instruct the BotBoarduino when to send the next set of RC commands to the Sabertooth motor driver, by using the high and low states of the two general-purpose input-output (GPIO) pins connected between them as go and stop flags for turning and forward motions.
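As a small illustration of the photodiode voltage rescaling described above, the conversion from a raw A2D reading back to the original photodiode range might look like the following (the 12-bit A2D resolution is an assumption of the sketch; the divider ratio follows the text):

    ADC_MAX_V = 5.0                    # maximum voltage the Edison's A2D can read
    PD_MAX_V = 7.45                    # maximum output of the photodiode circuit
    SCALE = PD_MAX_V / ADC_MAX_V       # inverse of the potentiometer divider ratio

    def photodiode_volts(adc_counts, adc_bits=12):
        """Convert a raw A2D reading back to the photodiode's 0.0-7.45 V range."""
        v_read = adc_counts / (2 ** adc_bits - 1) * ADC_MAX_V
        return v_read * SCALE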
The value for the system's measurement noise covariance matrix, R, was calculated prior to the experiments by having the system scan the angles of the mobile node's position while the mobile node remained at a fixed location. Comparing the base nodes' measured positions against this fixed position of the mobile node, errors for the x and y coordinates were computed and then used to generate a covariance matrix using the formula

R = [ R_xx  R_xy
      R_yx  R_yy ]
  = (1/N) [ Σ_{i=1}^{N} (x̃_i − μ_x)²               Σ_{i=1}^{N} (x̃_i − μ_x)(ỹ_i − μ_y)
            Σ_{i=1}^{N} (x̃_i − μ_x)(ỹ_i − μ_y)     Σ_{i=1}^{N} (ỹ_i − μ_y)² ]    (2.18)

where N is the total number of measurements the base nodes captured, x̃_i and ỹ_i are the magnitudes of the errors for the x and y coordinates, respectively, and μ_x and μ_y are the average errors among all of the captured measurements for x and y, respectively.
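Equation (2.18) is simply the sample covariance of the position-measurement errors; a compact sketch of the calibration computation (with hypothetical array names) is:

    import numpy as np

    def measurement_covariance(measured_xy, true_xy):
        """Sample covariance of the position errors, as in (2.18).
        measured_xy, true_xy: (N, 2) arrays of measured and fixed true positions."""
        err = np.asarray(measured_xy) - np.asarray(true_xy)   # x and y errors
        centered = err - err.mean(axis=0)                     # subtract (mu_x, mu_y)
        return centered.T @ centered / len(err)               # the 2x2 matrix R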
All of the experiments were contained within a grid structure laid out on the floor with blue painter's tape, which followed the grout lines of the tiles. The side length of each tile, measured along the grout lines, was approximately 23 cm and was used to represent 1 grid unit, the generic unit of length used in the experiments to measure motion and position; see Figure 2.14. The area used in the experiments was defined as x ∈ [−6, 6], y ∈ [−11, 1] in grid units, where the x-axis and y-axis run parallel and perpendicular to the metal beam holding the base nodes, respectively.

Figure 2.14: Overhead view of the grid floor used in experiments. In the image the base nodes are connected to an 80/20® metal beam that is 64 in. (1.63 m) long. The positive x and y axes run from left to right and down to up, respectively.

The mobile node's orientation needed to be measured to compute the required rotation for the transceiver to establish the LOS by properly accommodating the rotation of the robot itself. The orientation was initially intended to be obtained via the magnetic compass of an Inertial Measurement Unit (IMU) mounted on the robot; however, the testing environment was surrounded by many sources of static magnetic interference, and despite several attempts at calibrating and offsetting for these interferences, the resulting orientation output was too inconsistent and inaccurate to be useful in this application. We expect the magnetic compass approach to be more applicable when experimenting in an underwater environment. Consequently, for the experimental results shown here, the orientation data was captured with the use of NaturalPoint®'s OptiTrack motion tracking system, which used infrared cameras placed at strategic points above the perimeter of the grid to illuminate and then capture the locations of reflective markers attached to a rectangular sheet of non-transparent acrylic mounted on the mobile node. During the experiments OptiTrack would stream its tracked pose data to the mobile node over Wi-Fi using User Datagram Protocol (UDP) packets. This pose data also included the ground truth position of the MN, which was recorded into the node's data log purely for post-processing analysis. The orientation data from the motion tracking system had an average error of 0.2°, which had a negligible impact on the localization. However, in an outdoor (including underwater) environment, the orientation sensor (compass) will likely have larger errors, in which case one could use a wider-angle lens for the LED to increase its angular field of coverage, at the cost of its light intensity.

2.5.2 The Effect of Kalman Filter-based Position Prediction

To further demonstrate the importance of Kalman filtering-based position prediction to the success of the proposed localization scheme, an alternative implementation of the system, where position prediction was not used, was evaluated. In this implementation, the latest measured position, instead of the predicted position, was used as the basis for the LOS establishment between the mobile node and the base nodes. In particular, the previously measured position was used to generate the angles that the base nodes and mobile node would use for centering the scans and the LED light shining, respectively.

In this comparative experiment, the two versions were tasked to localize the mobile node as it traversed a short linear trajectory parallel to the base nodes, from [−3.0, −6.0] to [3.5, −6.0] in increments of 0.5 grid units in the positive x direction, a total of 13 steps. Three trials for each of the two versions were conducted. Figures 2.15 and 2.16 show the trajectory plots from one of the trials of the versions without and with the use of the predicted positions, respectively. Both figures compare the position that the system perceives to be its location against the corresponding ground truth. For each position, the corresponding step number of the trajectory is placed next to it. For the version of the system without prediction, none of the trials was able to complete all 13 steps. Consequently, the trial shown in Figure 2.15 can only show the ground truth positions for the steps at which it was able to localize, since it only receives the ground truth data with the orientation angle at the start of each step of the trajectory; the remaining points are filled in by the intended points from the designed trajectory. On the other hand, all three trials of the version with prediction were able to complete all 13 steps of this trajectory.

Figure 2.15: Trajectory plot of one of the trials where prediction of the mobile node's position was not used for localizing. The system perceives its location to be the measured position computed from the bearing angles.

Figure 2.16: Trajectory plot of one of the trials where Kalman filter-based prediction of the mobile node's position was used for localizing. The system perceives its location to be the Kalman filtering-based estimated position.

2.5.3 Results - Localization around a Closed Loop

Two experimental trials of the system, with the mobile node following a simple loop trajectory, were conducted with the goal of showing the localization accuracy of the system when it revisits points of the trajectory it has already traversed and localized. Figures 2.17 – 2.19 show the results obtained during one of these trials. Figure 2.17 compares the trajectory points of the ground truth and estimated positions. Figure 2.18 shows the measured and estimated position errors, which are the errors between the measured (resp., Kalman filter-estimated) position and the ground truth position, for each step of the trajectory in the experiment. Both Figures 2.17 and 2.18 show that the proposed method is capable of localizing the mobile robot around the full trajectory with an error of less than 2 grid units. These figures also indicate that the system has more difficulty measuring the robot's position between the two turning portions of the trajectory, in which the mobile node is the furthest away from the base nodes.
Figure 2.19 shows the measured angle error for both θ₁ and θ₂, which indicates that the measurement and filtering scheme is able to limit the angle measurement error to within ±3°.

Figure 2.17: Comparison of the ground truth and estimated positions of the experiment.

Figure 2.18: Measured and estimated position error for each step of the trajectory.

Figure 2.19: Measured angle error is within the limits described by the simulation analysis.

2.6 Summary

This chapter has presented the algorithm design and system implementation for an LED-based localization scheme with a single-transmitter-single-receiver setup. A key idea exploited is the use of Kalman filtering for predicting the position of the mobile node, to facilitate the establishment and maintenance of LOS. Simulation analysis has been presented on how much error in the LOS measurement and in the knowledge of the initial location the system can withstand while still reporting sufficient localization accuracy. Experimentation shows the significance of the predicted position from the Kalman filter and how it allows the system to localize dynamically. The work presented in this chapter was published in IEEE/ASME Transactions on Mechatronics [61]. Some preliminary results of the proposed approach were reported at the 2016 and 2017 ASME Dynamic Systems and Control Conferences [62, 63].

CHAPTER 3

SENSITIVITY-BASED DATA FUSION FOR MOBILE ROBOT LOCALIZATION

In the previous chapter a solution to Simultaneous Localization and Communication (SLAC) using an LED-based optical system was proposed. Localization was accomplished using the bearing angles needed to establish optical LOS between two base nodes (or beacons) with known positions and a mobile robot. One drawback to this approach of using two base nodes is that, when the mobile robot is close to forming a collinear configuration with the base nodes, a singularity issue arises in the position measurement. This chapter explores the optical localization of a mobile robot using a group of (more than two) base nodes, as a means of addressing the singularity limitation of the two-base-node method by allowing for alternative base-node pairings when one or more pairings are in a collinear configuration with the robot. Important to the design of this multi-base-node system is the consideration of how to best fuse the additional bearing angle information. This chapter proposes the use of a sensitivity metric, which represents how sensitive a triangulated position is with respect to the bearing measurement error, to choose the pair of base nodes for triangulation that is the most robust at that time instance. In particular, the base node pairing with the lowest sensitivity metric is chosen to compute the target's position using the corresponding measured bearings. Both the simulation and the experimentation compare the performance of our proposed sensitivity metric-based approach against three alternative methods.
These three approaches include a different means of data fusion in the form of averaging the triangulated target positions computed from the bearing angles of each base node pairing, a variation of the filtering scheme that incorporates the current angle measurements into the measurement noise covariance matrix, and a combination of the averaging fusion technique with this filtering scheme variation. A fourth alternative approach is also examined in simulation, in which the captured bearing angles are used directly as the system output; since these are nonlinear functions of the states, this entails the use of an extended Kalman filtering (EKF) scheme [57].

Results show that the proposed sensitivity metric-based approach outperforms the alternative variations and achieves an average estimated position error of roughly 0.18 grid units in experiments, whereas the alternative averaging and filtering scheme approaches achieve average estimated position errors of approximately 0.19, 0.28, and 0.25 grid units in experiments, respectively. The alternative EKF-based approach yielded relatively poor results, with the estimated velocity diverging away from the ground truth fairly quickly, thereby causing the system to fail at maintaining the LOS.

3.1 The Two-Base-Node Localization Approach

3.1.1 Measurement Process

The approach discussed in this chapter builds upon the measurement process developed in Chapter 2. In particular, we assume each node has a photodiode receiver and an LED transmitter as components of its optical transceiver, which is able to rotate 360° and keep track of the changes in its orientation. As the MN shines its light at each base node, the base nodes rotate their transceivers to determine the LOS with respect to the MN based on the received light intensity, thus extracting their respective bearing angles θ₁ and θ₂. Through the use of these angles and the locations of the base nodes, the mobile node's x and y coordinates are computed using (2.1) – (2.2).

As was the case before, this seemingly straightforward approach has its challenges, such as insufficient synchronization and coordination among the nodes causing inadequate LOS. There is also the complication of relying on pure algebraic calculations for the position (2.1), since the inherent noise in the measured angles will lead to highly variable (instead of smooth) estimated trajectories for the mobile node MN. Again, Kalman filtering addresses this issue by exploiting the predicted positions of the MN, generated from the robot's dynamics and the measurements computed by (2.1), to significantly reduce the effort of searching for the LOS, thus enabling efficient, accurate, and dynamic localization. The designs of the proposed position measurement-based Kalman filtering algorithm as well as an alternative angle measurement-based extended Kalman filtering algorithm are presented next.

3.1.2 The Kalman Filtering Algorithms

As was the case in Chapter 2, the main purpose of using Kalman filtering is to facilitate the maintenance of the line of sight between the base nodes and the mobile node, by predicting the future positions of the robot so as to produce anticipated angles for the transceiver orientation. Likewise, we continue to assume that the dynamics of the mobile node are captured by a constant-velocity model corrupted with Gaussian noise, since precise prior knowledge of the mobile node's movement would in general not be available to the base nodes.
Potentially, alternative filtering and predictive schemes, such as neural networks [64], could have been used. However, the assumption on the dynamics enables the use of computationally efficient Kalman filtering for predicting the mobile node's coordinates. Moreover, other approaches tend to require additional overhead; for example, in the case of neural networks one needs to train the system in advance.

The dynamics for the mobile node are the same as in Chapter 2; however, in this chapter we elaborate more on our system setup for the Kalman filtering. Therefore, the dynamics are included again here for easier referencing while reading the rest of this subsection:

p_{k+1} = p_k + Δt_k v_k + w_{1,k}    (3.1)
v_{k+1} = v_k + w_{2,k}    (3.2)

where p_k = [p_{x,k}, p_{y,k}]^T and v_k = [v_{x,k}, v_{y,k}]^T are the position and velocity vectors of the mobile node at the k-th time instance, w_{1,k} and w_{2,k} are independent, zero-mean, white Gaussian noises, and Δt_k is the k-th sampling interval.

A state vector x_k is used in the Kalman filter and is comprised of p_k and v_k stacked together, in particular:

x_k = [p_{x,k}, p_{y,k}, v_{x,k}, v_{y,k}]^T    (3.3)

With this state vector, (3.1) and (3.2) can be rewritten as:

x_{k+1} = F_k x_k + [w_{1,k}^T, w_{2,k}^T]^T    (3.4)

where F_k is the matrix derived from the mobile node's motion model (3.1) and (3.2):

F_k = [ 1  0  Δt_k  0
        0  1  0     Δt_k
        0  0  1     0
        0  0  0     1 ]

The next predicted state and error covariance are generated with:

x̂_k^− = F_{k−1} x̂_{k−1}    (3.5)
P̂_k^− = F_{k−1} P̂_{k−1} F_{k−1}^T + Q_{k−1}    (3.6)

where x̂_{k−1} is the previous state estimate, x̂_k^− is the a priori state estimate, P̂_k^− is the predicted state covariance matrix, and Q is the process noise covariance matrix.

This chapter will consider two Kalman filtering schemes. In the scheme that is proposed, the observation, z_k, is considered to be a noise-corrupted position measurement (derived from the raw bearing angle measurements). In the alternative scheme, the observation, φ_k, is a pair of noise-corrupted angles captured directly from the measurement process.

3.1.2.1 Position-based Kalman Filtering

The noise-corrupted position observation, z_k, is computed based on (2.1):

z_k = p_k + w_{3,k}    (3.7)

where w_{3,k} is assumed to be a white, zero-mean Gaussian noise, independent of the process noises w_{1,k} and w_{2,k}. The physical implementation of the position measurement (3.7) is through the triangulation of the measured bearing angles. In Chapter 2 we showed through simulation that the noise in the position measurement exhibits a Gaussian form when the noise applied to the bearing angles is uniform or Gaussian, thus justifying the use of Gaussian noise in this position measurement model.

This position measurement (3.7) can also be rewritten in terms of the state vector as:

z_k = H x_k + w_{3,k}    (3.8)

where H is the observation matrix:

H = [ 1  0  0  0
      0  1  0  0 ]

The observation z_k is used in the state estimate:

x̂_k = x̂_k^− + K_k (z_k − H x̂_k^−)    (3.9)
K_k = P̂_k^− H^T (H P̂_k^− H^T + R)^{−1}    (3.10)
P̂_k = (I − K_k H) P̂_k^−    (3.11)

where K_k is the Kalman gain, R is the covariance matrix of the measurement noise, and P̂_k is the posterior error covariance matrix.
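For concreteness, one predict/update cycle of the position-based filter in (3.4) – (3.11) can be sketched as follows (a minimal implementation with illustrative names; the state is ordered as in (3.3)):

    import numpy as np

    def kf_step(x, P, z, dt, Q, R):
        """One predict/update cycle with state x = [px, py, vx, vy]^T."""
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)
        H = np.array([[1.0, 0.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0]])      # observe position only
        x_pred = F @ x                            # predict, (3.5)
        P_pred = F @ P @ F.T + Q                  # predict, (3.6)
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # gain, (3.10)
        x_new = x_pred + K @ (z - H @ x_pred)     # update, (3.9)
        P_new = (np.eye(4) - K @ H) @ P_pred      # update, (3.11)
        return x_new, P_new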
3.1.2.2 Angle-based Extended Kalman Filtering

For the alternative scheme, the observation, φ_k, is expressed as:

φ_k = θ_k + w_{4,k}

where w_{4,k} is assumed to be a white, zero-mean Gaussian noise, independent of the noises w_{1,k} and w_{2,k}, and θ_k = [θ_{a,k}, θ_{b,k}]^T consists of the two bearing angles, θ_{a,k} and θ_{b,k}, associated with the pair of base nodes, BNa and BNb, used for the measurement at the k-th time instance. The physical implementation of this measurement is directly the result of extracting the bearing angles from the scanned light intensities. These angles can be expressed as nonlinear functions of the states:

θ_k = h(p_{x,k}, p_{y,k}) = [ h₁(p_{x,k}, p_{y,k})
                              h₂(p_{x,k}, p_{y,k}) ]
    = [ cos^{−1}( (p_{x,k} − x_a) / √((p_{x,k} − x_a)² + (p_{y,k} − y_a)²) )
        cos^{−1}( (p_{x,k} − x_b) / √((p_{x,k} − x_b)² + (p_{y,k} − y_b)²) ) ]    (3.12)

where [x_a, y_a]^T and [x_b, y_b]^T are the position vectors of BNa and BNb, respectively. This nonlinear relationship between the states and the measurement requires the use of the extended Kalman filtering (EKF) scheme. Consequently, the observation matrix G_k is computed as the Jacobian of the function h evaluated at the current predicted position:

G_k = [ ∂h₁/∂p_x (p̂_k^−)   ∂h₁/∂p_y (p̂_k^−)   0   0
        ∂h₂/∂p_x (p̂_k^−)   ∂h₂/∂p_y (p̂_k^−)   0   0 ]    (3.13)

where p̂_k^− = [p̂_{x,k}^−, p̂_{y,k}^−]^T    (3.14)

The state estimate is then computed as:

x̂_k = x̂_k^− + K_k (φ_k − h(p̂_k^−))    (3.15)
K_k = P̂_k^− G_k^T (G_k P̂_k^− G_k^T + R_θ)^{−1}    (3.16)
P̂_k = (I − K_k G_k) P̂_k^−    (3.17)

where K_k is the Kalman gain, R_θ is the covariance matrix of the angle measurement noise, and P̂_k is the posterior error covariance matrix.

3.2 Sensitivity Metric-based Data Fusion

Increasing the number of base nodes allows the system to capture multiple perspectives of the mobile node's location; however, the challenge then becomes how to best incorporate all of the available information. In this chapter, the proposed approach is to use a sensitivity metric to evaluate the level of uncertainty in a computed position based on the level of uncertainty in the pair of measured bearing angles. In particular, this sensitivity metric can be applied to the captured angles of each base node pair in order to characterize the level of uncertainty in the resulting position for that pair. This allows the position from the base node pair with the lowest level of uncertainty to be used as the location observation, z_k, for that cycle.

In this work the sensitivity metric is defined in terms of the infinity norms of the Jacobians in the x and y directions, ‖J_x‖_∞ and ‖J_y‖_∞, respectively, of the measurement equation (2.1) with respect to the angles θ₁ and θ₂. Rewritten to be expressed in terms of the angles, (2.1) becomes x(θ₁, θ₂) and y(θ₁, θ₂), where

x = x₁ + D sin θ₂ cos θ₁ / sin(θ₂ − θ₁)    (3.18)
y = y₁ + D sin θ₂ sin θ₁ / sin(θ₂ − θ₁)    (3.19)

with D the distance between the two base nodes and [x₁, y₁]^T the position of the first base node in the pair.

Figure 3.1: Illustration of the spatial sensitivity of the two base node measurement function, half symmetry.

Figure 3.2: Illustration of the spatial sensitivity of the two base node measurement function, full symmetry.

Figure 3.3: Illustration comparing the spatial sensitivity of the two base node measurement function when the base nodes are separated by a distance of 4 and 10 grid units.
Then ‖J_x‖_∞ and ‖J_y‖_∞ can be defined as:

‖J_x‖_∞ = max(|∂x/∂θ₁|, |∂x/∂θ₂|) = ( D / (2 sin²(θ₂ − θ₁)) ) max(|sin 2θ₂|, |sin 2θ₁|)    (3.20)
‖J_y‖_∞ = max(|∂y/∂θ₁|, |∂y/∂θ₂|) = ( D / sin²(θ₂ − θ₁) ) max(|sin² θ₂|, |sin² θ₁|)    (3.21)

These functions characterize how small changes in the measured angles for a given pair of base nodes result in changes to the computed position. Visual representations of this uncertainty characterization are shown in Figures 3.1, 3.2, and 3.3. Figure 3.1 shows the relationship for a range of positions within x ∈ [−10, 10] and y ∈ [0, −20] for a pair of base nodes (BN1, BN2) located at [−3, 0] and [3, 0], respectively. In this plot the vertical axis indicates the level of sensitivity, calculated as the Euclidean norm of (‖J_x‖_∞, ‖J_y‖_∞). A notable observation from this illustration is that the level of sensitivity, and correspondingly the localization uncertainty, is best along the perpendicular bisector of the base nodes, with the sensitivity getting worse as the mobile node gets further away from the base nodes and/or deviates away from the bisector. Figure 3.1 also shows that the sensitivity is highest as the mobile node gets closer to being collinear with the base nodes. In Figure 3.2 this uncertainty relationship is shown to be symmetric about the collinear axis of the base nodes. Figure 3.3 shows how the spatial sensitivity changes with different distances between the two base nodes BN1 and BN2, mapping the cases where the base nodes are separated by distances of 4 and 10 grid units, respectively. It can be seen that the level of sensitivity generally decreases as the distance between the base nodes increases, with the exception of a small area immediately in front of the base nodes, where the sensitivity is slightly lower when the base nodes are closer together.

In implementation, the captured angles from each pair combination of base nodes are used to evaluate the magnitude of S = ‖(‖J_x‖_∞, ‖J_y‖_∞)‖, which is the sensitivity metric. The angles from the base node pairing that generates the lowest sensitivity value are used to calculate the observed position z_k, to be used in the Kalman filtering to estimate the mobile node's position.
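A sketch of the pair-selection logic follows directly from (3.20) – (3.21); function names are illustrative, angles are assumed to be in radians, and D is the separation of the pair being evaluated:

    import numpy as np

    def sensitivity(theta1, theta2, D):
        """Sensitivity metric s = ||(||Jx||_inf, ||Jy||_inf)||_2 for one pair."""
        denom = np.sin(theta2 - theta1) ** 2
        jx = D / (2 * denom) * max(abs(np.sin(2 * theta2)), abs(np.sin(2 * theta1)))
        jy = D / denom * max(np.sin(theta2) ** 2, np.sin(theta1) ** 2)
        return float(np.hypot(jx, jy))

    def best_pair(pairs):
        """pairs: iterable of (theta1, theta2, D, pair_id); pick the lowest metric."""
        return min(pairs, key=lambda q: sensitivity(q[0], q[1], q[2]))[3]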
In 43 particular,  is computed via: where R = R     1   2  =   1   2   (3.22) (3.23) R is the (constant) error covariance matrix of the angle measurement, and  and  are as defined in (3.18) and (3.19), respectively. The value of  is re-computed for each step in the trajectory since it is dependent on the latest angles measured at that step. Just like in the proposed approach, the observed position z is computed from the base node pair with the minimal sensitivity metric. For consistency purposes these same angles are also used to evaluate . 3.3.1.2 Fixed-R with Averaging For this approach data fusion is achieved by averaging the triangulated positions from each base node pair. As with the proposed approach, the measurement noise covariance matrix, R, is set to the constant value, R. 3.3.1.3 Variable-R with Averaging This alternative approach uses both the averaging technique for data fusion and the variable mea- surement noise covariance matrix that depends on the angles measured from the current trajectory step. To mirror the fact that the observed position z is a blend of all 3 base node pairs, the value of  is computed as the average of each instance of (3.23) that is generated from each pair of base node angles. 44 3.3.1.4 Extended Kalman Filtering Scheme In this alternative approach, the captured bearing angles are directly used as the system output and are related to the states through nonlinear functions, thereby requiring the use of the extended Kalman Filter (EKF). Similar to the proposed approach, the measurement noise covariance matrix, R, is set to a constant value, R. However, in this case the value of R is built from the variances of the measurement angles’ noise. 3.3.2 Simulation Setup The simulation involves a network that includes 3 base nodes, even though the proposed approach applies to a network with an arbitrary larger number of base nodes. The base nodes are positioned into a configuration where the perpendicular bisector of each base node pair allows near overlapping coverage at any angle relative to the center of the configuration. In particular, the three base nodes, BN1, BN2, and BN3, are stationed at the coordinates [−3,−3], [0, 0], and [3,−3], respectively. While other configurations, in particular an equilateral triangle, of the base nodes may potentially have greater coverage capabilities, limited space in the experiment environment makes this difficult to implement while still maintaining sufficient distance between the base nodes. The trajectory used to evaluate the system is a single loop around all of the base nodes where the direction of the mobile node does not reverse, as shown in Figure 3.4. Simulation of angle measurement errors is achieved by adding independent, zero-mean, white Gaussian noises to each of the ground truth angles, corresponding to the base nodes that are able to establish LOS with the MN during the measurement sequence. By adjusting the standard deviation of the Gaussian noise, the level of angle error can be controlled. 3.3.3 Simulation Results Each of the approaches is evaluated in simulation under different levels of angular measurement error. The standard deviation of the Gaussian noise ranges from 0.5◦ to 3.0◦ in increments of 0.5◦, 45 with 200 trials conducted for each level of error. A set of 200 randoms seeds is used, one seed for each trial, to control the randomness of the simulation so it would be repeatable and consistent across the different levels of angular measurement error. 
3.3.1.2 Fixed-R with Averaging

For this approach data fusion is achieved by averaging the triangulated positions from each base node pair. As with the proposed approach, the measurement noise covariance matrix, R, is set to the constant value, R̄.

3.3.1.3 Variable-R with Averaging

This alternative approach uses both the averaging technique for data fusion and the variable measurement noise covariance matrix that depends on the angles measured at the current trajectory step. To mirror the fact that the observed position z_k is a blend of all three base node pairs, the value of R_k is computed as the average of each instance of (3.23) generated from each pair of base node angles.

3.3.1.4 Extended Kalman Filtering Scheme

In this alternative approach, the captured bearing angles are directly used as the system output and are related to the states through nonlinear functions, thereby requiring the use of the extended Kalman filter (EKF). Similar to the proposed approach, the measurement noise covariance matrix is set to a constant value; however, in this case the value is built from the variances of the measurement angles' noise.

3.3.2 Simulation Setup

The simulation involves a network that includes 3 base nodes, even though the proposed approach applies to a network with an arbitrarily larger number of base nodes. The base nodes are positioned in a configuration where the perpendicular bisector of each base node pair allows near overlapping coverage at any angle relative to the center of the configuration. In particular, the three base nodes, BN1, BN2, and BN3, are stationed at the coordinates [−3, −3], [0, 0], and [3, −3], respectively. While other configurations of the base nodes, in particular an equilateral triangle, may potentially have greater coverage capabilities, limited space in the experiment environment makes this difficult to implement while still maintaining sufficient distance between the base nodes.

The trajectory used to evaluate the system is a single loop around all of the base nodes in which the direction of the mobile node does not reverse, as shown in Figure 3.4. Simulation of angle measurement errors is achieved by adding independent, zero-mean, white Gaussian noises to each of the ground truth angles corresponding to the base nodes that are able to establish LOS with the MN during the measurement sequence. By adjusting the standard deviation of the Gaussian noise, the level of angle error can be controlled.

3.3.3 Simulation Results

Each of the approaches is evaluated in simulation under different levels of angular measurement error. The standard deviation of the Gaussian noise ranges from 0.5° to 3.0° in increments of 0.5°, with 200 trials conducted for each level of error. A set of 200 random seeds is used, one seed for each trial, to control the randomness of the simulation so it is repeatable and consistent across the different levels of angular measurement error.

Figure 3.4: Illustration of the single loop trajectory with the base nodes, BN1, BN2, and BN3, located at the coordinates [−3, −3]^T, [0, 0]^T, and [3, −3]^T, respectively.

Figure 3.5: Line graph showing the average estimated position error for four of the algorithm variations under each level of standard deviation for the angular measurements.

Figure 3.5 compares the average estimated position error for four of the algorithm variations under each level of standard deviation for the angular measurements; the extended Kalman filtering scheme is not included in this plot. The results in the figure show that in general the proposed minimal sensitivity algorithm outperforms both cases of the averaging-based fusion technique. Both cases of the minimal sensitivity algorithm show very similar performance, with the proposed fixed-R approach showing a slight advantage over the variable-R approach as the level of standard deviation for the angle measurement error gets larger.

Results from the angle-based extended Kalman filtering approach are shown in Figures 3.6 and 3.7, which compare the estimated position and estimated velocity against their ground truth counterparts, respectively. The results shown are from a single simulated trial in which the angular measurement noise had a standard deviation of 0.5°. The figures show that the velocity estimates diverge from the ground truth fairly early on in the trajectory, which causes the position estimates to suffer, resulting in the system failing to maintain the LOS between the mobile and base nodes any further. The latter observation indicates that this EKF approach was only able to localize until trajectory step 12 of 78, roughly 15% of the entire trajectory, which is not surprising since the stability of an EKF is not guaranteed in general. Based on this, this approach was not tested further in experiments.

Figure 3.6: Simulation results from one of the trials of the angle-based extended Kalman filtering approach in which the angular measurements experienced noise with a standard deviation of 0.5°. The graphs compare the x and y coordinates of the estimated and ground truth positions for each of the trajectory steps reached by the system during the trial.

Figure 3.7: Simulation results from one of the trials of the angle-based extended Kalman filtering approach in which the angular measurements experienced noise with a standard deviation of 0.5°. The graphs compare the x and y coordinates of the estimated and ground truth velocities for each of the trajectory steps reached by the system during the trial.

3.4 Experiment

3.4.1 Setup

The nodes used in these experiments were equipped with the same hardware as in Chapter 2; see Section 2.5 for the detailed description. One minor change to the hardware in this chapter is the use of a self-designed UART network.
It enabled the angle data exchange across the three base nodes and also allowed BN3 to orchestrate the actions of the other base nodes as well as broadcast the updated state estimates of the MN, so that each node could search in the appropriate area for the next measurement angle. Figure 3.8 shows the experiment setup used, with the mobile node and base nodes together on the grid floor.

For the proposed approach, the measurement noise covariance matrix, R, is a constant value, R̄, that was calculated in advance of the experiments by having the system scan the angles of the mobile node's position while the mobile node remained at a fixed location. The values of this matrix were found using 3 separate fixed positions, with each position selected to ensure that each base node combination contributed equally to the matrix, and with 50 measurements for each location so as to best characterize the error of this 3-base-node approach. The x and y errors generated from comparing the base nodes' measured positions against these fixed positions were then fused together in the following formula to generate the matrix:

R = [ R_xx  R_xy
      R_yx  R_yy ]
  = (1/N) [ Σ_{i=1}^{N} (x̃_i − μ_x)²               Σ_{i=1}^{N} (x̃_i − μ_x)(ỹ_i − μ_y)
            Σ_{i=1}^{N} (x̃_i − μ_x)(ỹ_i − μ_y)     Σ_{i=1}^{N} (ỹ_i − μ_y)² ]    (3.24)

where N is the total number of measurements the base nodes captured, x̃_i and ỹ_i are the magnitudes of the errors for the x and y coordinates, respectively, and μ_x and μ_y are the average errors among all of the captured measurements for x and y, respectively.

Figure 3.8: Overhead view of the grid floor used in experiments.

Table 3.1: Summarized experimental results from the three trials of each algorithm variation. The results include the mean and standard deviation of the estimated position error magnitude (in grid units).

                               Min. Sensitivity          Averaged
Estimated Position Error    Fixed R    Variable R    Fixed R    Variable R
Mean                        0.1813     0.1998        0.2817     0.2518
Standard Deviation          0.0833     0.1272        0.2245     0.2738

3.4.2 Results

Three experimental trials were conducted for each of the algorithm variations. Table 3.1 summarizes the performance of each algorithm variation across its three trials. In particular, it shows the mean and standard deviation of the estimated position error magnitude. Figures 3.9, 3.10, 3.11, and 3.12 compare the estimated and ground truth positions of the mobile node along the x and y coordinates from one of the three trials for the minimal sensitivity with fixed-R, minimal sensitivity with variable-R, averaging with fixed-R, and averaging with variable-R approaches, respectively.

Collectively these results mirror the observations from the simulation. In particular, both minimal sensitivity approaches have similar performance, with the proposed fixed-R approach showing a slight advantage through a lower mean and standard deviation of its estimated position error. Both averaging-based approaches significantly under-performed, as both versions were unable to track the robot to the end of the trajectory. This is because the averaging approaches, unlike the minimal sensitivity approach, are unable to effectively mitigate the effect of large position measurement errors.

Figure 3.9: Experimental results from one of the trials of the proposed minimal sensitivity with fixed-R approach. The plot shows the MN's estimated position plotted against the ground truth.

Figure 3.10: Experimental results from one of the trials of the alternative minimal sensitivity with variable-R approach. The plot shows the MN's estimated position plotted against the ground truth.
Figure 3.11: Experimental results from one of the trials of the alternative averaging with fixed-R approach. The plot shows the MN's estimated position plotted against the ground truth.

Figure 3.12: Experimental results from one of the trials of the alternative averaging with variable-R approach. The plot shows the MN's estimated position plotted against the ground truth.

3.5 Summary

This chapter has presented the algorithm design and system implementation of a mobile robot LED-based optical localization system that uses a network of multiple beacon nodes to compute the coordinates of the robot. In particular, the proposed approach builds upon our previous two-beacon system, which used the bearing angles needed to establish LOS communication between the beacons and the robot to compute the robot's measured position. To optimize data fusion from multiple beacons and improve the positioning process, a sensitivity metric has been proposed which characterizes the level of uncertainty in the computed position arising from the measured bearing angles of the beacons. The metric is used to select the optimal pair of beacons for the measured position. This approach overcomes the limitations of the two-base-node approach and enables a high level of localization accuracy. It is important to note that while our simulations and experiments were carried out with three base nodes, the proposed approach is easily extendable to groups containing more base nodes.

The work presented in this chapter was published in the Mechatronics journal [65], and a preliminary version of this work was also presented at the 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM) [66].

CHAPTER 4

DYNAMIC LOCALIZATION OF A ROBOT IN CONTINUOUS MOTION

The previous chapters presented approaches to LED-based Simultaneous Localization and Communication that took advantage of the line-of-sight (LOS) requirement in LED-based communication to extract the relative bearing between a mobile robot and two base nodes with known positions. The bearing angles were then used to triangulate the position of the mobile robot. A Kalman filter was implemented to combat the challenge of measurement noises and to allow robot position prediction to facilitate the light scan for bearing measurement. However, those approaches came with the assumption that the angles were captured at the same time, i.e., when the mobile robot was at a single location.
Consequently, because scanning for the light intensity with a rotating receiver cannot capture both angles simultaneously, our implementation used a stop-and-go motion in order to ensure the robot was at a single location. However, this significantly slowed the robot's movement, making it unsuitable for time-sensitive tasks.

In this chapter, we propose a novel solution to LED-based localization which is capable of capturing the position of the robot while it is continuously moving. In particular, the proposed approach takes advantage of the estimated velocity from the Kalman filter to properly correlate the two consecutive measurements of bearing angles with respect to the two base nodes for the position computation. In contrast to the previous chapters, this approach also uses, for the first time, a rigid-body model to more accurately estimate the robot's movement. The performance of this proposed dynamic-prediction approach is evaluated through simulation and experiments, and is compared with the performance of an alternative approach that assumes both angle measurements to be taken at the same point, which we term the "traditional" approach in this chapter. The results show that the dynamic-prediction method is capable of localizing the robot and does consistently better than the traditional approach.

4.1 Overview of the LED-Based Localization Process

4.1.1 Measurement Process

The approach discussed in this chapter builds upon the measurement processes developed in the previous chapters. That is, all nodes have an optical transceiver which can rotate 360° while also keeping track of its orientation. The mobile node's coordinates can then be computed using the bearing angles θ₁ and θ₂ and the known locations of the base nodes BN1 and BN2, using (2.1) and (2.2). As before, although the approach seems simple, improper LOS due to insufficient synchronization and coordination among the nodes, in addition to the inherent noise in the angle measurements, makes it non-trivial. These concerns are again addressed using Kalman filtering, which generates the predicted positions of the MN from the robot's dynamics and the measurements computed by (2.1), to significantly reduce the effort of searching for the LOS and thus enable efficient, accurate, and dynamic localization. A brief overview of the Kalman filter used in this work is discussed next.

4.1.2 The Kalman Filtering Algorithms

Similar to previous chapters, the purpose of using Kalman filtering is to help establish the LOS between the mobile node and base nodes, by predicting the future state of the robot so that the base nodes and mobile node can anticipate each other's angular locations. In previous chapters, the mobile robot was modeled as a point mass, focusing only on the changes in the x, y coordinates. In this chapter, the robot's motion is represented with a rigid-body model, monitoring changes to both the body's orientation and its position. This potentially allows the system to produce smooth trajectories of the estimated orientation to best compensate for the body rotation in the alignment/scanning process.

For both position and orientation, a constant (angular) velocity model corrupted with Gaussian noise is used for the mobile node's dynamics, since in general precise knowledge of its movement is not known.
These dynamics can be represented as:

p_{k+1} = p_k + Δt_k v_k + w_{1,k}    (4.1)
v_{k+1} = v_k + w_{2,k}    (4.2)
b_{k+1} = b_k + Δt_k ω_k + w_{3,k}    (4.3)
ω_{k+1} = ω_k + w_{4,k}    (4.4)

where p_k = [p_{x,k}, p_{y,k}]^T and v_k = [v_{x,k}, v_{y,k}]^T are the position and velocity vectors of the mobile node in terms of the x and y coordinates at the k-th time instance, b_k and ω_k are the body orientation angle and the body's angular velocity, respectively, w_{1,k}, w_{2,k}, w_{3,k}, and w_{4,k} are independent, zero-mean, white Gaussian noises, and Δt_k is the k-th sampling interval.

The observations z_k and β_k are the noise-corrupted location and orientation measurements, respectively. They are represented as:

z_k = p_k + w_{5,k}    (4.5)
β_k = b_k + w_{6,k}    (4.6)

where w_{5,k} and w_{6,k} are assumed to be white, zero-mean Gaussian, and independent of each other and of the process noises w_{1,k}, w_{2,k}, w_{3,k}, and w_{4,k}. The measurement z_k is computed from (2.1) and (2.2), which is only possible in physical implementation when the bearing angles, θ₁ and θ₂, are measured by the MN at a single fixed position. The main focus of this work is how z_k can be computed when the bearing angles are captured by the mobile node at different positions due to the robot's movement. The measurement β_k is obtained from an orientation sensor such as a magnetic compass. Body orientation estimation is needed for the mobile robot to compute the required rotation for the transceiver to establish the LOS, by properly accommodating the rotation of the robot itself.

Two state vectors are used for Kalman filtering in this work. The first state vector, x̂_k, maintains the estimate of the position and velocity, whereas the second state vector, b̂_k, tracks the estimate of the body orientation angle and the angular velocity. The two state vectors are defined as

x̂_k = [p̂_{x,k}, p̂_{y,k}, v̂_{x,k}, v̂_{y,k}]^T    (4.7)
b̂_k = [b̂_k, ω̂_k]^T    (4.8)

where [p̂_{x,k}, p̂_{y,k}], [v̂_{x,k}, v̂_{y,k}], b̂_k, and ω̂_k are the estimated position, velocity, body orientation angle, and angular velocity of the mobile node at the k-th time instance, respectively. The equations for the implementation of the Kalman filter, which are standard [58], are omitted here for brevity.

4.1.3 Challenges with a Continuously Moving Robot

The traditional measurement system, (2.1) and (2.2), used in the previous chapters assumed that the bearing angles were captured when the mobile robot was at a single fixed position. Consequently, because the physical angle scanning process takes time, i.e., it is not physically possible to instantaneously capture both angles with a rotating transceiver, the mobile node's trajectory had to be executed in a stop-and-go manner in order to ensure that the robot was at the same position for both angle captures.

As beneficial as it is, the stop-and-go implementation is time-consuming and thus limits how quickly the robot can traverse its environment, making it unsuitable for time-sensitive tasks. In this work we propose an approach that allows the robot to localize while also moving continuously in its environment. That is, we propose an algorithm that can compute the robot's position despite the fact that the two consecutive measurements of bearing angles, with respect to the two base nodes, are captured at different times and positions.

4.2 Proposed Approach

The bearing angles, θ₁ and θ₂, are captured by the MN while it moves along its trajectory, where each angle is captured at a distinct position along this path. These spotting positions are labeled Pa and Pb, where Pa is the position whose x-coordinate is the smallest and not necessarily the position where the first bearing angle was spotted. Localization of the robot is achieved by determining the coordinates of these spotting positions, and treating one of these positions as the observed location z_k of the robot.
Figure 4.1: Illustration comparing the traditional approach and the proposed dynamic-prediction approach, for the case when θ_1 and θ_2 are captured at spots Pa and Pb, respectively.

Localization of the robot is achieved by determining the coordinates of these spotting positions and treating one of them as the observed location z_k of the robot. The computation of these positions is considerably more involved than the traditional approach described by (2.1) and (2.2). To contrast the two, Figure 4.1 illustrates how each approach would determine a position given the same measured bearing angles. In particular, the diagram shows that the traditional approach would use the two angles to find a converging point at Pf, which could be significantly distant from the two ground-truth locations, Pa and Pb, where the angles were actually captured by the robot. Moreover, with access to only the bearing angles, the coordinates of Pa and Pb could be any of the points along the two edges of the triangle formed by Pf, BN1, and BN2. To estimate the positions of Pa and Pb, this work exploits the MN's most recently estimated velocity to properly combine the two measured angles.

4.2.1 Measurement Equations

The locations of the mobile node, Pa and Pb, where the bearing angles are measured, can be determined by solving for the x and y distances between each spotting location and the base node of the corresponding captured angle, using these angles along with the estimated velocity of the mobile node. For instance, in Figure 4.1, BN1 and Pa are separated from each other by d_{x1} and d_{y1}. Similarly, BN2 and Pb are related by d_{x2} and d_{y2}. These distances can be expressed in generalized mathematical relationships as:

x_a = x_1 + s_A d_{x1},  (4.9)
x_b = x_2 + s_B d_{x2},  (4.10)
y_a = y_1 + s_C d_{y1},  (4.11)
y_b = y_2 + s_D d_{y2},  (4.12)

where x_a, y_a and x_b, y_b are the x and y coordinates of Pa and Pb, respectively, x_1, y_1 and x_2, y_2 are the x and y coordinates of BN1 and BN2, respectively, and s_A, s_B, s_C, and s_D are the sign values of the distances d_{x1}, d_{x2}, d_{y1}, and d_{y2}, respectively. s_A, s_B, s_C, and s_D reflect where the spotting locations are relative to the base nodes, and can be determined by inspecting the properties of the captured bearing angles. In particular, s_A and s_B take on the sign values of cos θ_1 and cos θ_2, respectively, and s_C and s_D take on the sign values of sin θ_1 and sin θ_2, respectively. From the relationships in (4.9)-(4.12), expressions for the distances d_{x1}, d_{y1}, d_{x2}, and d_{y2} can be derived as:

d_{x1} = (D − L_x + s_B s_E L_y / tan β) / (s_A − s_B tan α / tan β),  (4.13)
d_{y1} = d_{x1} tan α,  (4.14)
d_{y2} = d_{y1} + s_E L_y,  (4.15)
d_{x2} = d_{y2} / tan β,  (4.16)

where

D = x_2 − x_1,  (4.17)
L_x = L cos φ,  (4.18)
L_y = L |sin φ|,  (4.19)
m = +1 if 0° ≤ φ < 90°, and m = −1 if −90° < φ < 0°,  (4.20)
s_E = +1 if (m = +1) ∧ (s_C = +1) ∧ (s_D = +1), or (m = −1) ∧ (s_C = −1) ∧ (s_D = −1);
s_E = −1 if (m = −1) ∧ (s_C = +1) ∧ (s_D = +1), or (m = +1) ∧ (s_C = −1) ∧ (s_D = −1).  (4.21)

In these equations, D is the distance between the base nodes; L is the distance between the spotting points, i.e., the distance from Pa to Pb, obtained from the magnitude of the estimated velocity and the time elapsed between the two captures; L_x and L_y are the x- and y-distances between the spotting points, with s_E being the sign value associated with L_y, determined from a combination of the slope indicator m and the sine values of the bearing angles. The variables α and β represent the inner angles of the triangles that each base node makes with its corresponding spotting point and are computed from θ_1 and θ_2, respectively, and v and φ are the magnitude and angle of the Kalman filter-estimated velocity of the mobile node's movement, respectively.
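The following is a minimal Python sketch of the computation in (4.9)-(4.21). It assumes the base nodes share a y-coordinate (as stated below), that each bearing angle θ_i is the angle from base node BN_i to the MN measured counterclockwise from the +x axis, and that the MN does not cross the base-node line between the two captures; the function name and these conventions are illustrative, not taken from this work's implementation.

import math

def sign(u):
    return 1.0 if u >= 0.0 else -1.0

def spotting_positions(bn1, bn2, theta1, theta2, L, phi):
    """Return (Pa, Pb) from bearings captured at two points along the path.

    L and phi are the distance and heading traveled between the two
    captures, taken from the Kalman filter's velocity estimate.
    """
    x1, y1 = bn1
    x2, y2 = bn2
    sA, sC = sign(math.cos(theta1)), sign(math.sin(theta1))   # signs for Pa
    sB, sD = sign(math.cos(theta2)), sign(math.sin(theta2))   # signs for Pb
    tan_a = abs(math.tan(theta1))      # tangent of inner angle alpha at BN1
    tan_b = abs(math.tan(theta2))      # tangent of inner angle beta at BN2
    D = x2 - x1                        # (4.17)
    Lx = L * math.cos(phi)             # (4.18)
    Ly = L * abs(math.sin(phi))        # (4.19)
    m = 1.0 if phi >= 0.0 else -1.0    # (4.20)
    if sC != sD:
        raise ValueError("MN crossed the base-node line between captures")
    sE = m * sC                        # (4.21): m if sC = sD = +1, -m if both -1
    dx1 = (D - Lx + sB * sE * Ly / tan_b) / (sA - sB * tan_a / tan_b)  # (4.13)
    dy1 = dx1 * tan_a                  # (4.14)
    dy2 = dy1 + sE * Ly                # (4.15)
    dx2 = dy2 / tan_b                  # (4.16)
    return ((x1 + sA * dx1, y1 + sC * dy1),   # Pa from (4.9) and (4.11)
            (x2 + sB * dx2, y2 + sD * dy2))   # Pb from (4.10) and (4.12)

As a check, with BN1 = (−3, 0) and BN2 = (3, 0) and the MN moving from Pa = (−1, −6) to Pb = (0, −5), so that L = sqrt(2) and φ = 45°, the call spotting_positions((-3, 0), (3, 0), math.atan2(-6, 2), math.atan2(-5, -3), math.sqrt(2), math.radians(45)) recovers both spotting positions exactly.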
To simplify the discussion, it is assumed, without loss of generality, that the base nodes are separated only along the x-axis. By using the two sets of relationships, (4.9)-(4.12) and (4.13)-(4.16), the position of Pa (or Pb) can be computed and then used in the Kalman filter's state estimate update.

The above relationships, (4.9)-(4.16), are developed from the situation shown in Figure 4.1, where θ_1 and θ_2 are captured at spots Pa and Pb, respectively. In the case where θ_1 and θ_2 are instead captured at spots Pb and Pa, respectively, as illustrated in Figure 4.2, equations (4.9)-(4.16) are simply adjusted to reflect the new association between the angles' spotting positions and the base nodes.

Figure 4.2: Illustration of the moving measurement approach, showing the situation when θ_1 and θ_2 are captured at spots Pb and Pa, respectively.

4.3 Simulation

Simulation of the proposed dynamic-prediction approach was conducted, with its performance compared to that of the traditional approach of computing the measured position. In particular, the robustness of both approaches was tested against varying levels of measurement error in the body orientation, as well as varying levels of the MN's velocity. Both factors influence the position measurement: the former introduces error into the bearing angle measurements, whereas the latter affects the distance between the two consecutive measurements of bearing angles.

4.3.1 Simulation Setup

The robot was evaluated on a straight-line trajectory starting at [−7, −6] and ending at [9, −6], to mimic the trajectory of the MN in the experiments, as shown in Figures 4.3 and 4.4. The ground truth positions were determined by advancing the robot's position from the starting point to the end point, repeatedly applying the ground truth velocity in small increments of time. The time increment was chosen small enough that the ground truth position at any time instance could be determined. Base nodes BN1 and BN2 were positioned at [−3, 0] and [3, 0], respectively.

4.3.2 Simulation Measurements

The simulated robot body orientation measurement was generated by adding zero-mean Gaussian noise to the ground truth orientation value. The simulated ground truth body orientation of the mobile node was obtained by finding the angle between the 0° orientation vector and the vector pointing from the previous to the current ground truth position. The amount of error in the orientation measurement was controlled by adjusting the standard deviation of the Gaussian noise. The Kalman filter-estimated body orientation (based on the body orientation measurement) was used to adjust the mobile node's transceiver angle in order to establish LOS by properly accommodating the robot's own rotation. Error in the estimated orientation angle had the effect of skewing the scanned light intensities, thus introducing error into the measured bearing angles.

Angle measurements were generated by simulating the process of the MN scanning the light intensities shone by the base nodes. The range of the mobile node's scan was the angular distance between the predicted angles, θ̂_{1,k+1} and θ̂_{2,k+1}, computed in (2.12), plus an additional 30° in the opposite direction of each angle. The scan resolution was set to a step size of 0.225°, to mimic the rotation resolution of the stepper motor used in the hardware implementation of our previous works. The amount of time that elapsed between the steps of the scan was determined by averaging the time that elapsed between steps in hardware trials.
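The scanning simulation just described can be summarized with a short sketch. The triangular degree-of-LOS profile, the beam half-width, and the threshold-centering rule below are illustrative assumptions; the 7.3 V scaling and 0.5 V noise level are the simulator's values given in the text that follows, and the actual simulator extracts each bearing from the center point of the peaks in the scanned intensities.

import numpy as np

STEP = 0.225                   # scan step (degrees), mimicking the stepper motor
V_SCALE, V_NOISE = 7.3, 0.5    # photodiode voltage scaling and noise (volts)

def simulated_bearing(true_bearing, start, stop, half_beam=5.0, rng=None):
    """Sweep [start, stop) in STEP increments and extract one bearing (degrees)."""
    rng = rng if rng is not None else np.random.default_rng()
    angles = np.arange(start, stop, STEP)
    # Degree of LOS in [0, 1]: 1.0 at perfect alignment, 0.0 outside the beam.
    los = np.clip(1.0 - np.abs(angles - true_bearing) / half_beam, 0.0, 1.0)
    volts = V_SCALE * los + rng.normal(0.0, V_NOISE, angles.size)
    peak = volts > 0.5 * volts.max()          # samples near the intensity peak
    return 0.5 * (angles[peak].min() + angles[peak].max())

theta_1 = simulated_bearing(true_bearing=101.3, start=70.0, stop=130.0)

Here the scan window plays the role of the predicted angle plus the additional 30° margin described above.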
Figure 4.3: Comparison of the ground truth and estimated positions for one of the simulated trials using the proposed dynamic-prediction approach, for the case when the body orientation measurement angle was subjected to Gaussian noise with a standard deviation of 1.0° and the MN's velocity was 0.27 (grid units/s). The timestamp associated with every other estimated position is shown slightly above the corresponding marker to indicate the progression of time.

Figure 4.4: Comparison of the ground truth and estimated positions for one of the simulated trials using the traditional approach, for the case when the body orientation measurement angle was subjected to Gaussian noise with a standard deviation of 1.0° and the MN's velocity was 0.27 (grid units/s). The timestamp associated with every other estimated position is shown slightly above the corresponding marker to indicate the progression of time.

The strength of the artificial light intensity was based on the degree of LOS achieved between the transceivers of the mobile node and the base nodes at each step of the mobile node's transceiver rotation. This degree of LOS, which ranged over [0.0, 1.0] with a value of 1.0 representing direct LOS, was first scaled by 7.3 to mimic the range of voltages measured by the photodiode, and was then injected with zero-mean Gaussian noise with a standard deviation of 0.5 volts to represent the inherent error associated with the light measuring process. The bearing angles were extracted from the simulated light intensities by determining the angular position of the mobile node's transceiver at the center point of the two peaks in the intensity scan.

4.3.3 Simulation Results

4.3.3.1 Impact of Body Orientation Angle Measurement Error

First, the system was simulated under different levels of measurement error in the body orientation angle. In particular, the standard deviation of the Gaussian noise ranged from 1.0° to 5.0° in increments of 1.0°, where for each noise level, 100 trials were conducted. A vector of 100 random seeds, one for each trial number, was used to ensure the randomness of each trial was repeatable. The MN's velocity for all of these trials was 0.27 (grid units/s). The MN's transceiver was adjusted using the estimated orientation angle, θ̂_k, which was influenced through Kalman filtering by the noise-corrupted measured body orientation angle, z_{θ,k}.

Figures 4.3 and 4.4 show the comparison between the ground truth positions and the Kalman filtering-based estimated positions of the robot at the same timestamps for one of the simulated trials using the proposed dynamic-prediction and traditional measurement approaches, respectively, where the standard deviation of the Gaussian noise applied to the body orientation measurement angle was 1.0°. Figures 4.5 and 4.6 show the mean and standard deviation of the estimated position and velocity errors, respectively, among all of the trials for (a) the traditional measurement approach and (b) the proposed dynamic-prediction measurement approach, under each level of standard deviation for the body orientation angle measurement error.
Figure 4.5: Plot of the mean and standard deviation of the estimated position errors computed among all of the trials for varying amounts of standard deviation in the Gaussian noise added to the body orientation measurements of the mobile node. a) Average error for the traditional approach; b) Average error for the proposed dynamic-prediction approach.

Figure 4.6: Plot of the mean and standard deviation of the estimated velocity errors computed among all of the trials for varying levels of noise applied to the body orientation angle measurements. a) Average error for the traditional approach; b) Average error for the proposed dynamic-prediction approach.

The estimated position error is the magnitude of the difference between the ground truth position and the position [x̂_k, ŷ_k] extracted from the Kalman filter's state vector x̂_k after processing the observed position z_k corresponding to the same timestamp. Similarly, the estimated velocity error is the magnitude of the difference between the ground truth velocity and the velocity [v̂_{x,k}, v̂_{y,k}] extracted from the Kalman filter's state vector x̂_k after processing the observed position z_k. Figure 4.7 shows the average number of scanned measurements the MN was able to capture before either reaching the end of the trajectory or losing LOS with the base nodes.

Figure 4.7: The average number of angle measurements captured during the trials across each of the different levels of noise applied to the body orientation angle measurements, for both the dynamic-prediction and traditional approaches.

Figures 4.3 and 4.4 show that the proposed dynamic-prediction approach stays tightly aligned with the ground truth positions and velocity, whereas the traditional approach weaves about the ground truth positions and struggles to capture the true velocity of the system. Figure 4.5 shows that for each level of orientation error, the mean position error of the proposed dynamic-prediction approach is lower than that of the traditional approach, even though the mean position error of the traditional approach grows at a shallower rate, with average rates of 0.07 (grid units/°) and 0.095 (grid units/°) for the traditional and dynamic-prediction approaches, respectively. Figure 4.6 shows a trend similar to Figure 4.5, where for most levels of orientation error the proposed dynamic-prediction approach does better than the traditional approach. The exception is when the standard deviation of the Gaussian noise applied to the body orientation is 4°, in which case the dynamic-prediction approach does slightly worse than the traditional approach. Figure 4.7 shows that there is no particular trend, and that both approaches capture a similar number of angle measurements, relative to each other, across all levels of orientation error.
Collectively, the graphs indicate that the proposed dynamic-prediction measurement approach is not only capable of sufficiently localizing the mobile robot under each level of body orientation angle measurement error, but also that it overall outperforms the traditional measurement approach, despite both approaches capturing a similar number of measurements per trial.

4.3.3.2 Impact of the Robot's Velocity

The simulation next examined the system's performance under different velocity settings of the MN. In particular, the MN's velocity ranged from 0.17 (grid units/s) to 0.37 (grid units/s) in increments of 0.05 (grid units/s), with each velocity setting used for 100 trials and each trial associated with a unique random seed. The standard deviation of the Gaussian noise applied to the mobile node's body orientation angle measurement during these trials was kept at 1.0°. Figures 4.8 and 4.9 show the mean and standard deviation of the estimated position and velocity errors, respectively, among all of the trials for (a) the traditional measurement approach and (b) the proposed dynamic-prediction measurement approach, under each of the different velocity settings. Figure 4.10 shows the average number of scanned measurements the MN was able to capture before either reaching the end of the trajectory or losing LOS with the base nodes.

From these figures, it can be seen that the proposed dynamic-prediction approach maintains a relatively consistent level of estimated position and velocity accuracy as the robot's velocity increases. In comparison, the traditional measurement approach has an increasingly difficult time with both position estimation and velocity estimation as the speed increases. Moreover, these graphs show that changing the robot's velocity has a much more dramatic effect in distinguishing the two approaches than altering the body orientation angle measurement error.

Figure 4.8: Plot of the mean and standard deviation of the estimated position errors computed across each of the different levels of MN velocity. a) Average error for the traditional approach; b) Average error for the proposed dynamic-prediction approach.

Figure 4.9: Plot of the mean and standard deviation of the velocity errors computed across each of the different levels of MN velocity. a) Average error for the traditional approach; b) Average error for the proposed dynamic-prediction approach.

Figure 4.10: The average number of angle measurements captured during the trials across each of the different levels of MN velocity, for both the dynamic-prediction and traditional approaches.

4.4 Experiment

4.4.1 Setup

Most of the same hardware used for the nodes in Chapters 2 and 3 was reused for these experiments.
In Chapters 2 and 3, each base node was responsible for capturing its own respective bearing angle by scanning the light shone by the MN, which had to aim its light at each base node separately, thereby increasing the amount of time needed to establish the robot's measured position. For the experiments in this chapter, however, the setup was changed so that the MN was responsible for measuring all of the bearing angles from a single scanning sweep of the light shone by the base nodes, BN1 and BN2. Consequently, a second Intel® Edison board was installed on the robot's chassis so that its two main tasks, collecting pose data from the motion tracking system and performing the optical localization, could be processed in parallel. The two Edison boards, shown in Figure 4.11, would periodically communicate with each other so that the localization algorithm could access the measured orientation data from the motion tracking system. The communication between the two boards was achieved using a combination of GPIO pins as flags, to trigger an event on the pose-capturing board, and unidirectional serial communication to send the pertinent data back to the board processing the optical localization. Figure 4.12 shows the experiment setup with the mobile node and base nodes together on the grid floor.

Figure 4.11: The mobile node implementation with two Intel® Edison boards.

Figure 4.12: Overhead view of the grid floor used in experiments.

4.4.2 Results

Experimental trials using the proposed dynamic-prediction and traditional measurement approaches were conducted, with nine trials for each approach. Table 4.1 summarizes the performance across each set of trials. In particular, it shows the mean and standard deviation of the estimated position and estimated velocity error magnitudes for each algorithm.

Table 4.1: Summarized experimental results from the trials of each algorithm. The results include the mean and standard deviation of the estimated position error magnitude (grid units) and the estimated velocity error magnitude (grid units/s).

                        Estimated Position Error    Estimated Velocity Error
                        Mean      Std. Dev.         Mean      Std. Dev.
Dynamic Prediction      0.3601    0.0677            0.0574    0.0066
Traditional             0.7985    0.2144            0.0928    0.0364

Figures 4.13 and 4.14 show the estimated position and estimated velocity, respectively, against the corresponding ground truth over time for one of the trials using the proposed dynamic-prediction measurement approach. Similarly, Figures 4.15 and 4.16 show the estimated position and estimated velocity, respectively, against the corresponding ground truth over time for one of the trials using the traditional measurement approach. The ground truth position shown in Figures 4.13 and 4.15 was extracted from the datalog of the motion tracking system for that trial. This ground truth position was then undersampled to compute the ground truth velocity shown in Figures 4.14 and 4.16. Figure 4.19 compares the estimated position with the ground truth position for one of the trials using the dynamic-prediction approach. Figures 4.17 and 4.18 show the number of angle measurements that the system was able to scan before either reaching the end of the trajectory or losing the LOS with the base nodes. Together, the table and the graphs show that the dynamic-prediction approach is able to sufficiently localize the robot, whereas the traditional approach struggles.
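The inter-board link of Section 4.4.1 can be illustrated with a minimal sketch. The pin number, serial device path, baud rate, and message format below are hypothetical placeholders, not the experiment's actual configuration; only the pattern (a GPIO flag to trigger an event on the pose-capturing board, followed by a one-way serial reply) follows the description above.

import mraa      # Intel's GPIO library for the Edison
import serial    # pyserial, for the unidirectional data link

FLAG_PIN = 31                    # hypothetical GPIO pin used as the request flag
flag = mraa.Gpio(FLAG_PIN)
flag.dir(mraa.DIR_OUT)
link = serial.Serial('/dev/ttyMFD1', 115200, timeout=1.0)  # assumed UART device

def request_orientation():
    """Localization board: raise the flag, then read one pose record back."""
    flag.write(1)                # trigger an event on the pose-capturing board
    line = link.readline()       # e.g. b'12.34\n' (degrees), an assumed format
    flag.write(0)
    return float(line) if line else None

On the pose-capturing board, a matching loop would watch the flag pin with mraa.Gpio(...).read() and write the latest motion-tracking orientation to its serial port.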
Figure 4.13: Comparison of the estimated position against the ground truth position over time for one of the trials using the dynamic-prediction approach.

Figure 4.14: Comparison of the estimated velocity against the ground truth velocity over time for one of the trials using the dynamic-prediction approach.

Figure 4.15: Comparison of the estimated position against the ground truth position over time for one of the trials using the traditional approach.

Figure 4.16: Comparison of the estimated velocity against the ground truth velocity over time for one of the trials using the traditional approach.

Figure 4.17: Bar graph showing the number of scanned measurements for each trial using the proposed dynamic-prediction measurement approach.

Figure 4.18: Bar graph showing the number of scanned measurements for each trial using the traditional measurement approach.

Figure 4.19: Comparison of the ground truth and estimated positions for one of the experiment trials using the proposed dynamic-prediction approach. The timestamp associated with every other estimated position is shown slightly above the corresponding marker to indicate the progression of time.

4.5 Summary

This chapter has presented an approach to LED-based localization of a continuously moving robot. By utilizing the estimated velocity of the mobile robot, we were able to address the main challenge of measuring the robot's position despite the bearing angles being measured at different times and positions. It was shown in simulation and experiments that the proposed dynamic-prediction approach was capable of localizing the mobile robot under these conditions, and that it performed better than the traditional approach, which assumed the bearing angles were measured at a single location. Preliminary work on this idea was presented at the 2020 ASME Dynamic Systems and Control Conference [67].

CHAPTER 5

CONCLUSION AND FUTURE WORK

5.1 Concluding Remarks

In this work we developed optical LED-based localization schemes for tracking a mobile robot. The designed system utilizes the bearing angles, needed for maintaining the line of sight (LOS) for communication, to triangulate the position of the robot.
First, we presented a key idea in our approach, which exploits Kalman filtering to predict the position of the mobile node and thereby facilitate the establishment and maintenance of LOS. Simulations analyzing the system's performance under varying levels of bearing angle measurement error and varying initial robot locations showed that the system still offers sufficient localization accuracy. Experimentation then showed the significance of the predicted position from the Kalman filter and how it allows the system to localize dynamically.

Second, we presented an algorithm and its implementation for best localizing a robot using a network of more than two beacon nodes. To optimize data fusion from multiple beacons and improve the positioning process, a sensitivity metric was proposed that characterizes the level of uncertainty in the computed position arising from the error in the measured bearing angles. The metric was used to select the optimal pair of beacons for the measured position. This approach overcame the singularity limitation of the two-base-node approach and enabled a high level of localization accuracy. Simulation and experiments comparing the proposed approach against three alternative approaches showed that the proposed approach outperformed the alternatives.

Finally, we presented a design for LED-based optical localization that is capable of tracking a continuously moving robot. The key to this approach was utilizing the estimated velocity of the mobile robot to connect the bearing angle measurements of the robot that are captured at different times and positions. Simulation results were presented that analyzed the proposed approach's performance under varying levels of body orientation angle measurement error and different speed settings of the robot, along with experimental results. Both the simulation and the experiments showed that the proposed method is capable of localizing the robot and performs better than the traditional approach, which assumed the bearing angles were measured at a single location.

5.2 Future Work

The work presented here can be extended further in future studies. While a rigid-body model has been introduced in the current work, its full usefulness has yet to be explored. In particular, future designs could explore using this model to help enhance the localization accuracy of the robot in more complicated trajectories. Another aspect worth exploring is using the sensitivity metric of the multi-base-node data fusion approach to help localize the continuously moving robot when it comes close to collinear relationships with some of the base nodes.

The hardware design is limited by the intensity of the LED: if it is too bright, the robot cannot be localized close to the base nodes; if it is not bright enough, the robot can be localized only up to a certain distance from the base nodes. It would be desirable for future designs to improve the current hardware with a dynamically adjustable light intensity.

The main inspiration of this work is an alternative localization mechanism for underwater environments. Naturally, extending this work to an underwater environment should be explored. However, experimentation in underwater environments presents numerous additional challenges on top of the concerns associated with validating the design. These overhead concerns include waterproofing the electronics and adjusting for light refraction and general light disturbances, to name a few.
Finally, underwater implementation would also require an extension of this approach to the 3D setting, since it would be difficult as well as impractical to maintain all of the parties at a level depth. Multiple aspects of the existing methods, such as the scanning procedure and the measurement equations, would need to be addressed and changed for this extension to 3D.

BIBLIOGRAPHY

[1] M. Kim and N. Y. Chong, “RFID-based mobile robot guidance to a stationary target,” Mechatronics, vol. 17, no. 4-5, pp. 217–229, May 2007.

[2] T. R. Wanasinghe, G. K. I. Mann, and R. G. Gosine, “Decentralized cooperative localization for heterogeneous multi-robot system using split covariance intersection filter,” in Computer and Robot Vision (CRV), 2014 Canadian Conference on, May 2014, pp. 167–174.

[3] K. Wang, Y. Liu, and L. Li, “A simple and parallel algorithm for real-time robot localization by fusing monocular vision and odometry/AHRS sensors,” IEEE/ASME Transactions on Mechatronics, vol. 19, no. 4, pp. 1447–1457, August 2014.

[4] A. Kim and R. Eustice, “Pose-graph visual SLAM with geometric model selection for autonomous underwater ship hull inspection,” in Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on, October 2009, pp. 1559–1565.

[5] G. Zachár, G. Vakulya, and G. Simon, “Design of a VLC-based beaconing infrastructure for indoor localization applications,” in 2017 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), May 2017, pp. 1–6.

[6] M. H. Bergen, X. Jin, D. Guerrero, H. A. L. F. Chaves, N. V. Fredeen, and J. F. Holzman, “Design and implementation of an optical receiver for angle-of-arrival-based positioning,” Journal of Lightwave Technology, vol. 35, no. 18, pp. 3877–3885, Sept. 2017.

[7] M. H. Bergen, F. S. Schaal, R. Klukas, J. Cheng, and J. F. Holzman, “Toward the implementation of a universal angle-based optical indoor positioning system,” Frontiers of Optoelectronics, vol. 11, no. 2, pp. 116–127, Jun. 2018.

[8] A. F. Browne and S. T. Padgett, “Novel method of determining vehicle cartesian location using dual active optical beacons and a rotating photosensor,” IEEE Sensors Letters, vol. 2, no. 4, pp. 1–4, Dec. 2018.

[9] M. Betke and L. Gurvits, “Mobile robot localization using landmarks,” IEEE Transactions on Robotics and Automation, vol. 13, no. 2, pp. 251–263, April 1997.

[10] S. I. Roumeliotis and G. A. Bekey, “Distributed multirobot localization,” IEEE Transactions on Robotics and Automation, vol. 18, no. 5, pp. 781–795, October 2002.

[11] J. M. Peula, C. Urdiales, and F. Sandoval, “Explicit coordinated localization using common visual objects,” in 2010 IEEE International Conference on Robotics and Automation, May 2010, pp. 4889–4894.

[12] A. Easton and S. Cameron, “A gaussian error model for triangulation-based pose estimation using noisy landmarks,” in 2006 IEEE Conference on Robotics, Automation and Mechatronics, Bangkok, Thailand, June 2006, pp. 1–6.

[13] J. M. Font-Llagunes and J. A. Batlle, “Consistent triangulation for mobile robot localization using discontinuous angular measurements,” Robotics and Autonomous Systems, vol. 57, no. 9, pp. 931–942, June 2009.

[14] A. Easton and S. Cameron, “The resection filter: An alternative approach to filtering gaussian pose estimates with bearing observations,” in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, October 2006, pp. 5804–5809.

[15] C. McGillem and T. Rappaport, “A beacon navigation method for autonomous vehicles,” IEEE Transactions on Vehicular Technology, vol. 38, no. 3, pp. 132–139, August 1989.
[16] V. Pierlot and M. Van Droogenbroeck, “BeAMS: A beacon-based angle measurement sensor for mobile robot positioning,” IEEE Transactions on Robotics, vol. 30, no. 3, pp. 533–549, Jun. 2014.

[17] ——, “A new three object triangulation algorithm for mobile robot positioning,” IEEE Transactions on Robotics, vol. 30, no. 3, pp. 566–577, June 2014.

[18] I. Shimshoni, “On mobile robot localization from landmark bearings,” IEEE Transactions on Robotics and Automation, vol. 18, no. 6, pp. 971–976, December 2002.

[19] J. S. Esteves, A. Carvalho, and C. Couto, “Generalized geometric triangulation algorithm for mobile robot absolute self-localization,” in 2003 IEEE International Symposium on Industrial Electronics (Cat. No.03TH8692), vol. 1, June 2003, pp. 346–351.

[20] O. Sergiyenko, “Optoelectronic system for mobile robot navigation,” Optoelectronics, Instrumentation and Data Processing, vol. 46, no. 5, pp. 414–428, October 2010.

[21] J. C. Rodríguez-Quiñonez, O. Sergiyenko, F. F. Gonzalez-Navarro, L. C. Básaca-Preciado, and V. Tyrsa, “Surface recognition improvement in 3D medical laser scanner using Levenberg-Marquardt method,” Signal Processing, vol. 93, no. 2, pp. 378–386, February 2013.

[22] L. Lindner, O. Sergiyenko, M. Rivas-López, D. Hernández-Balbuena, W. Flores-Fuentes, J. C. Rodríguez-Quiñonez, F. N. Murrieta-Rico, M. Ivanov, V. Tyrsa, and L. C. Básaca-Preciado, “Exact laser beam positioning for measurement of vegetation vitality,” Industrial Robot, vol. 44, no. 4, pp. 532–541, June 2017.

[23] O. Tekdas and V. Isler, “Sensor placement for triangulation-based localization,” IEEE Transactions on Automation Science and Engineering, vol. 7, no. 3, pp. 681–685, July 2010.

[24] C. B. Madsen and C. S. Andersen, “Optimal landmark selection for triangulation of robot position,” Robotics and Autonomous Systems, vol. 23, no. 4, pp. 277–292, July 1998.

[25] J. M. Font-Llagunes and J. A. Batlle, “Mobile robot localization. revisiting the triangulation methods,” IFAC Proceedings Volumes, vol. 39, no. 15, pp. 340–345, 2006.

[26] C. F. Olsen, “Probabilistic self-localization for mobile robots,” IEEE Transactions on Robotics and Automation, vol. 16, no. 1, pp. 55–66, February 2000.

[27] F. Giuffrida, P. Morasso, G. Vercelli, and R. Zaccaria, “Active localization techniques for mobile robots in the real world,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Osaka, Japan, November 1996, pp. 1312–1318, vol. 3.

[28] K. Rahul Sharma, D. Honc, and F. Dušek, “Sensor fusion for prediction of orientation and position from obstacle using multiple IR sensors: an approach based on Kalman filter,” in 2014 International Conference on Applied Electronics, Pilsen, Czech Republic, September 2014, pp. 263–266.

[29] C. Prabha, M. H. Supriya, and P. R. S. Pillai, “Improving the localization estimates using kalman filters,” in 2009 International Symposium on Ocean Electronics (SYMPOL 2009), Cochin, India, November 2009, pp. 190–195.

[30] S. Xu, Y. Ou, and X. Wu, “Learning-based adaptive estimation for aoa target tracking with non-gaussian white noise,” in 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dali, China, December 2019, pp. 2233–2238.

[31] M. M. Rana, N. Halim, M. M. Rahamna, and A. Abdelhadi, “Position and velocity estimations of 2d-moving object using kalman filter: Literature review,” in 2020 22nd International Conference on Advanced Communication Technology (ICACT), Phoenix Park, South Korea, February 2020, pp. 541–544.
[32] H. Feng, C. Liu, Y. Shu, and O. W. Yang, “Location prediction of vehicles in VANETs using a Kalman filter,” Wireless Personal Communications, vol. 80, no. 2, pp. 543–559, January 2015.

[33] G. Rui and M. Chitre, “Cooperative multi-AUV localization using distributed extended information filter,” in 2016 IEEE/OES Autonomous Underwater Vehicles (AUV), November 2016, pp. 206–212.

[34] L. E. Emokpae, S. DiBenedetto, B. Potteiger, and M. Younis, “UREAL: Underwater reflection-enabled acoustic-based localization,” IEEE Sensors Journal, vol. 14, no. 11, pp. 3915–3925, November 2014.

[35] I. F. Akyildiz, P. Wang, and Z. Sun, “Realizing underwater communication through magnetic induction,” IEEE Communications Magazine, vol. 53, no. 11, pp. 42–48, November 2015.

[36] X. Tan, “Autonomous robotic fish as mobile sensor platforms: Challenges and potential solutions,” Marine Technology Society Journal, vol. 45, no. 4, pp. 31–40, July 2011.

[37] B. Tian, F. Zhang, and X. Tan, “Design and development of an LED-based optical communication system for autonomous underwater robots,” in Advanced Intelligent Mechatronics (AIM), 2013 IEEE/ASME International Conference on, July 2013, pp. 1558–1563.

[38] P. B. Solanki, M. Al-Rubaiai, and X. Tan, “Extended Kalman filter-aided alignment control for maintaining line of sight in optical communication,” in 2016 American Control Conference, July 2016, pp. 4520–4525.

[39] M.-A. Khalighi, C. Gabriel, T. Hamza, S. Bourennane, P. Léon, and V. Rigaud, “Underwater wireless optical communication; recent advances and remaining challenges,” in 2014 16th International Conference on Transparent Optical Networks (ICTON), Graz, Austria, July 2014, pp. 1–4.

[40] H. Brundage, “Designing a wireless underwater optical communication system,” Master’s thesis, Massachusetts Institute of Technology, 2010.

[41] M. Doniec, “Autonomous underwater data muling using wireless optical communication and agile AUV control,” Ph.D. dissertation, Massachusetts Institute of Technology, 2013.

[42] D. Anguita, D. Brizzolara, and G. Parodi, “Building an underwater wireless sensor network based on optical communication: Research challenges and current results,” in 2009 Third International Conference on Sensor Technologies and Applications, June 2009, pp. 476–479.

[43] ——, “Optical wireless communication for underwater wireless sensor networks: Hardware modules and circuits design and implementation,” in OCEANS 2010, September 2010, pp. 1–8.

[44] I. C. Rust and H. H. Asada, “A dual-use visible light approach to integrated communication and localization of underwater robots with application to non-destructive nuclear reactor inspection,” in Robotics and Automation (ICRA), 2012 IEEE International Conference on, May 2012, pp. 2445–2450.

[45] J. A. Simpson, B. L. Hughes, and J. F. Muth, “Smart transmitters and receivers for underwater free-space optical communication,” IEEE Journal on Selected Areas in Communications, vol. 30, no. 5, pp. 964–974, June 2012.

[46] M. Al-Rubaiai, “Design and development of an LED-based optical communication system,” Master’s thesis, Michigan State University, 2015.

[47] P. B. Solanki, M. Al-Rubaiai, and X. Tan, “Extended Kalman filter-based active alignment control for LED optical communication,” IEEE/ASME Transactions on Mechatronics, vol. 23, no. 4, pp. 1501–1511, August 2018.
[48] K. Qiu, F. Zhang, and M. Liu, “Let the light guide us: VLC-based localization,” IEEE Robotics & Automation Magazine, vol. 23, no. 4, pp. 174–183, December 2016.

[49] Q. Liang, J. Lin, and M. Liu, “Towards robust visible light positioning under LED shortage by visual-inertial fusion,” in 2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy, October 2019, pp. 1–8.

[50] N. T. Nguyen, N. H. Nguyen, V. H. Nguyen, K. Sripimanwat, and A. Suebsomran, “Improvement of the VLC localization method using the extended Kalman filter,” in TENCON 2014 - 2014 IEEE Region 10 Conference, October 2014, pp. 1–6.

[51] M. F. Keskin, A. D. Sezer, and S. Gezici, “Localization via visible light systems,” Proceedings of the IEEE, vol. 106, no. 6, pp. 1063–1088, June 2018.

[52] L. Bai, Y. Yang, C. Guo, C. Feng, and X. Xu, “Camera assisted received signal strength ratio algorithm for indoor visible light positioning,” IEEE Communications Letters, vol. 23, no. 11, pp. 2022–2025, November 2019.

[53] P. Giguere, I. Rekleitis, and M. Latulippe, “I see you, you see me: Cooperative localization through bearing-only mutually observing robots,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, October 2012, pp. 863–869.

[54] J. Suh, S. You, S. Choi, and S. Oh, “Vision-based coordinated localization for mobile sensor networks,” IEEE Transactions on Automation Science and Engineering, vol. 13, no. 2, pp. 611–620, April 2016.

[55] A. Concha, P. Drews-Jr, M. Campos, and J. Civera, “Real-time localization and dense mapping in underwater environments from a monocular sequence,” in OCEANS 2015 - Genova, Genova, Italy, May 2015, pp. 1–5.

[56] J. Liu, S. Gong, W. Guan, B. Li, H. Li, and J. Liu, “Tracking and localization based on multi-angle vision for underwater target,” Electronics, vol. 9, no. 11, p. 1871, November 2020.

[57] K. Reif, S. Gunther, E. Yaz, and R. Unbehauen, “Stochastic stability of the discrete-time extended Kalman filter,” IEEE Transactions on Automatic Control, vol. 44, no. 4, pp. 714–728, April 1999.

[58] R. E. Kalman, “A new approach to linear filtering and prediction problems,” Transactions of the ASME–Journal of Basic Engineering, vol. 82, no. Series D, pp. 35–45, 1960.

[59] D. Bouvet and G. Garcia, “Improving the accuracy of dynamic localization systems using RTK GPS by identifying the GPS latency,” in Robotics and Automation (ICRA) Millennium Conference, 2000 IEEE International Conference on, April 2000, pp. 2525–2530.

[60] J. Meguro, J. Takiguchi, Y. Amano, and T. Hashizume, “3D reconstruction using multibaseline omnidirectional motion stereo based on GPS/dead-reckoning compound navigation system,” The International Journal of Robotics Research, vol. 26, no. 6, pp. 625–636, June 2007.

[61] J. N. Greenberg and X. Tan, “Dynamic optical localization of a mobile robot using Kalman filtering-based position prediction,” IEEE/ASME Transactions on Mechatronics, vol. 25, no. 5, pp. 2483–2492, October 2020.

[62] ——, “Efficient optical localization for mobile robots via Kalman filtering-based location prediction,” in Proceedings of the ASME 2016 Dynamic Systems and Control Conference, Minneapolis, MN, October 2016, DSCC2016-9917.
[63] ——, “Kalman filtering-aided optical localization of mobile robots: System design and experimental validation,” in Proceedings of the ASME 2017 Dynamic Systems and Control Conference, Tysons, VA, October 2017, DSCC2017-5368.

[64] J. C. Rodríguez-Quiñonez, O. Sergiyenko, D. Hernández-Balbuena, M. Rivas-López, W. Flores-Fuentes, and L. C. Básaca-Preciado, “Improve 3D laser scanner measurements accuracy using a FFBP neural network with Widrow-Hoff weight/bias learning function,” Opto-electronics Review, vol. 22, no. 4, pp. 224–235, December 2014.

[65] J. N. Greenberg and X. Tan, “Sensitivity-based data fusion for optical localization of a mobile robot,” Mechatronics, vol. 73, p. 102488, February 2021.

[66] ——, “Optical localization of a mobile robot using sensitivity-based data fusion,” in 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Hong Kong, China, July 2019, pp. 778–783.

[67] ——, “Dynamic prediction-based optical localization of a robot during continuous movement,” in Proceedings of the ASME 2020 Dynamic Systems and Control Conference, Pittsburgh, PA, October 2020, DSCC2020-3288.