DEVELOPMENT OF A POSITION SENSITIVE DEVICE AND MULTI-POSITION ALIGNMENT CONTROL SYSTEM FOR AUTOMATED INDUSTRIAL ROBOT CALIBRATION

By

Erick Nieves-Rivera

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Electrical Engineering - Doctor of Philosophy

2013

ABSTRACT

DEVELOPMENT OF A POSITION SENSITIVE DEVICE AND MULTI-POSITION ALIGNMENT CONTROL SYSTEM FOR AUTOMATED INDUSTRIAL ROBOT CALIBRATION

By

Erick Nieves-Rivera

This dissertation proposes a novel calibration system capable of automatically calibrating industrial robots. Inaccurate assumptions about the real link parameters, or even small offsets in the individual robot joints, lead to errors in the internal kinematic model and, as a consequence, degrade the accuracy of a robotic system. To solve this problem, the proposed approach introduces a completely new technique for industrial robot calibration. The proposed system consists of an industrial robot manipulator, a camera, a laser fixture attached to the robot tool center point (TCP), a PC-based interface, and a new position sensitive calibration device (PSCD). This wireless calibration device comprises two fixed position sensitive detectors (PSDs) tilted at an angle to each other so that the laser line reflects from one PSD to the other. Such a device can feed back the movement information needed to localize the TCP frame relative to the device frame. The new calibration approach not only computes the joint offset parameters of the robot but also simultaneously calibrates the robot's workpiece relationship. It was also designed to be faster, simpler, and cheaper than existing methods. Throughout this dissertation, the newly developed calibration device, the principle of our calibration system, and the control approach needed to achieve automation of the entire system are presented and discussed.
Finally, the feasibility of the overall calibration system, including device hardware, software, and calibration algorithms, was demonstrated with experimental results.

Copyright by ERICK NIEVES-RIVERA 2013

To my entire family, who has always shown me love and support in every possible way.

ACKNOWLEDGMENTS

First and foremost, I would like to express my respect, appreciation and gratitude to my advisor Dr. Ning Xi for guiding this dissertation research from its inception to its conclusion. His guidance, feedback, and high expectations have shaped this dissertation research. In addition, I want to thank my dissertation committee members: Dr. Hassan Khalil, Dr. Guoming Zhu, and Dr. Xiaobo Tan. I greatly appreciate their valuable feedback and insights throughout the whole dissertation process. I would also like to thank Dr. Percy Pierre and Dr. Barbara O'Kelly for recruiting me to the Alfred P. Sloan program at Michigan State University and giving me the opportunity of a lifetime. Both of you took a chance on me and changed the rest of my life for the better. Many others provided help and encouragement along the way. I wish to thank current and past Robotics and Automation Lab members: Yunyi Jia and Dr. Yong Liu helped set the foundation for ideas that would evolve into this dissertation. Likewise, Jianguo Zhao, Hongzhi Chen, Ruiguo Yang, Bo Song, Chi Zhang, Liangliang Chen, Cheng Yu, John Gregory, Andres Ramirez, Jorge Cintron, and Nelson Sepulveda provided technical and motivational support during key stages of this dissertation. My heartfelt thanks go to my entire family, especially to my parents Virginio Nieves, Ramonita Rivera, and Saraí Berberena, as well as to my brother and sister, Elvin Nieves and Janise Nieves, whose love, encouragement and support have been a constant source of strength and inspiration.
Finally, I would like to thank my wife Sayra Reyes and Kyler Nieves for showing me love, understanding and support through this long, draining, and sometimes frustrating process. I would not have finished this dissertation without them, and I appreciate the sacrifices they made so I could focus and complete my research.

TABLE OF CONTENTS

LIST OF TABLES

LIST OF FIGURES

Chapter 1 Introduction
  1.1 Problem Description and Motivation
  1.2 Calibration Task
    1.2.1 Joint Offset Calibration
    1.2.2 Robot Workpiece Frame Calibration
  1.3 Thesis Statement
  1.4 Anticipated Research Contributions
  1.5 Dissertation Outline

Chapter 2 Background
  2.1 Historical Background
  2.2 Related Work
  2.3 Review of Previous Work
    2.3.1 Camera Calibration Techniques
    2.3.2 Single PSD Calibration System

Chapter 3 Calibration System Overview
  3.1 Local Coordinate System
  3.2 Introduction to the System
  3.3 Calibration System Hardware
Chapter 4 Data Acquisition System
  4.1 Position Sensitive Detector Working Principle
    4.1.1 Lateral PSDs
    4.1.2 Segmented PSDs
  4.2 PSD Processing Circuit Design
    4.2.1 PSD Experimental Performance Results
  4.3 Position Sensitive Calibration Device (PSCD)
    4.3.1 Graphical User Interface (GUI)
    4.3.2 Feedback Mapping and Testing
    4.3.3 PSCD Feedback Experimental Results

Chapter 5 Robot Control
  5.1 Controller Design Overview
    5.1.1 Linear Control Law
    5.1.2 Image-Based Visual Servo Control
    5.1.3 Laser Line Length Control
    5.1.4 PSD-based Servo Control (Translational)
    5.1.5 PSD-based Servo Control (Rotational)
  5.2 Controller Simulation & Experimental Results
    5.2.1 Simulation Results
    5.2.2 Experimental Results

Chapter 6 Calibration System & Algorithms
  6.1 Analysis of the Kinematics Error Model
  6.2 Calibration Algorithms
    6.2.1 Joint Offset Calibration
    6.2.2 Robot Workpiece Frame Calibration
  6.3 Simulation and Experimental Results
    6.3.1 Simulation of Joint Offset Calibration
    6.3.2 Simulation of Workpiece Frame Calibration
    6.3.3 Experimental Analysis

Chapter 7 Experimental Calibration Results
  7.1 Experimental Methodology
  7.2 Single PSD Calibration Results
    7.2.1 Remarks & Discussions
  7.3 Proposed Dual PSD Calibration Results
    7.3.1 Remarks & Discussions

Chapter 8 Conclusions & Remaining Investigations
  8.1 Summary of Contributions
  8.2 Remaining Investigations

REFERENCES

LIST OF TABLES

Table 5.1 Symbol definitions
Table 6.1 DH parameters of the ABB IRB120 manipulator
Table 6.2 Simulation results on joint offset calibration
Table 6.3 Simulation results on workpiece frame calibration
Table 6.4 Simulation results on joint offset calibration
Table 7.1 Factory motor offset values & gear ratios
Table 7.2 Experiment set 1
Table 7.3 Offset values found by the calibration system
Table 7.4 New motor side offset values
Table 7.5 Experiment set 2
Table 7.6 Offset values found by the calibration system
Table 7.7 New motor side offset values
Table 7.8 Experiment summary using the single PSD calibration
Table 7.9 Error percentage using single PSD calibration
Table 7.10 Offset values found by the calibration system
Table 7.11 New motor side offset values
Table 7.12 Experiment summary using the proposed dual PSD calibration
Table 7.13 Error percentage using the proposed dual PSD calibration

LIST OF FIGURES

Figure 1.1 Example of the classical calibration system
Figure 1.2 FARO laser tracker. Note: For interpretation of the references to color in this and all other figures, the reader is referred to the electronic version of this dissertation
Figure 1.3 Single PSD-based calibration system
Figure 1.4 Schematic of a traditional joint offset calibration
Figure 1.5 Schematic of a traditional robot workpiece frame calibration
Figure 2.1 Camera coordinate system
Figure 2.2 Schematic of the single PSD calibration system
Figure 2.3 Single PSD calibration method
Figure 2.4 Single PSD experimental setup
Figure 2.5 Ideal intersection point for the laser lines
Figure 2.6 Real intersection point for the laser lines
Figure 3.1 Sketch of the calibration fixture device
Figure 3.2 Proposed dual PSD calibration system schematic
Figure 3.3 ABB IRB120 calibration system implementation
Figure 3.4 Laser pointer and camera attached to the robot TCP
Figure 3.5 The position sensitive calibration device (PSCD)
Figure 4.1 Schematic of a one-dimensional PSD chip
Figure 4.2 a) Photograph of a lateral PSD. b) Schematic illustration
Figure 4.3 Schematic illustration along with the photo of a segmented PSD
Figure 4.4 Schematic of the circuit board used with segmented PSDs
Figure 4.5 Segmented PSD physical surface layout
Figure 4.6 Plot of the signal recorded during sweep
Figure 4.7 Sweeps at identical positions but with opposite directions
Figure 4.8 PSCD top geometrical design
Figure 4.9 PSCD internal components representation and DAS interaction
Figure 4.10 Graphical user interface. Note: The text in this image is irrelevant
Figure 4.11 Plot of the recorded sweep data
Figure 4.12 Trace of beam movements along x and y directions
Figure 4.13 Trace of beam moving in a square pattern
Figure 4.14 PSCD experimental setup
Figure 4.15 PSCD experimental performance results
Figure 5.1 Schematic of laser line length control
Figure 5.2 Robot control system block diagram
Figure 5.3 Computed-torque approach schematic
Figure 5.4 IBVS control system block diagram
Figure 5.5 PSD1 control system block diagram
Figure 5.6 PSD2 control system block diagram
Figure 5.7 Robot control simulation results
Figure 5.8 Image features before IBVS control
Figure 5.9 Image features after IBVS control
Figure 5.10 Robot controller results after all stages were completed
Figure 6.1 The D-H model used for the ABB IRB120 kinematics
Figure 6.2 Robot pose and location at positions 1, 2, 3 and 4
Figure 7.1 PSCD arbitrary placement
Figure 7.2 Single PSD calibration results
Figure 7.3 Proposed dual PSD calibration results

Chapter 1

Introduction

In the modern era, the complexity of industrialization has played an important role in developing a strong economy, typically through technological innovation in manufacturing. Manufacturing generally involves large-scale production using industrial robots on assembly lines. Industrial robots reach high repeatability levels and, for repetitive applications, are able to perform such tasks successfully. Repeatability demonstrates the quality of modern robots and their precise positioning capabilities. However, it is also well known in the robot industry that industrial robots have high repeatability but low accuracy [1]. Nevertheless, the recent demand for high accuracy applications such as welding tasks, micro assembly operations, and surgery has increased the importance of and interest in robot calibration among researchers over the last few decades.
Although there have been significant improvements in the accuracy of newly designed industrial robot models, for such high accuracy applications the accuracy of the robot alone is not enough. While there are several sources of inaccuracy (e.g. thermal expansion, gear errors, structural deformations, or even incorrect knowledge of link and joint parameters), the main source lies in kinematic model parameter errors. The majority of kinematic parameters (e.g. arm length, link offset, and link twist angles) are related to the structural mechanics of the manipulator. Typically, those parameters will not change much once the robot leaves the factory and is installed in a manufacturing area. However, some kinematic parameters (e.g. joint offset) might be affected by the assembly or replacement of motors and encoders. According to [2] and [3], around 90% of the inaccuracy in robot positioning is due to errors in the assumed initial joint values of the robot. Without appropriate robot calibration, any robotic system will experience accuracy degradation over time. Because of this, robot calibration has been used to improve the position and orientation accuracy of industrial robots by identifying inaccuracies in the kinematic model parameters in order to create a more accurate model that better fits the real robot. Numerous robot kinematic calibration approaches and complete systems have been designed in both industry and academia, with promising methodologies to calibrate the external parameters of industrial robots. Some of them collect accurate position data of the robot tool center point (TCP) using highly precise equipment such as Computer Numerical Controlled (CNC) machines [4], inclinometers [5], theodolites [6], Coordinate Measurement Machines (CMMs) [7], and laser tracking systems [8], [9]. Other methods impose physical limitations on the TCP to form a closed kinematic chain.
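To illustrate why small joint offsets dominate positioning error, the sketch below propagates a hypothetical 0.5-degree zero-position error through standard Denavit-Hartenberg forward kinematics. The three-link DH table and joint angles are made up for illustration; they are not the parameters of any robot discussed in this dissertation.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joints, dh_table, offsets=None):
    """TCP pose from joint angles; `offsets` models unknown joint zero errors."""
    if offsets is None:
        offsets = np.zeros(len(joints))
    T = np.eye(4)
    for (d, a, alpha), q, delta in zip(dh_table, joints, offsets):
        T = T @ dh_transform(q + delta, d, a, alpha)
    return T

# Illustrative 3-link arm (metres); not an actual robot model.
dh = [(0.3, 0.1, -np.pi / 2), (0.0, 0.4, 0.0), (0.0, 0.3, 0.0)]
q = np.array([0.2, -0.5, 0.8])

nominal = forward_kinematics(q, dh)
# The same pose with a 0.5-degree zero-position error on every joint.
perturbed = forward_kinematics(q, dh, offsets=np.full(3, np.deg2rad(0.5)))
tcp_error = np.linalg.norm(nominal[:3, 3] - perturbed[:3, 3])
print(f"TCP position error: {tcp_error * 1000:.2f} mm")
```

Even this sub-degree offset moves the TCP by several millimetres, an error orders of magnitude larger than typical industrial repeatability, which is why joint offset identification is the focus of the calibration task.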
Such methods require fixing one or more position and orientation constraints on the TCP. This allows the system to generate equations capable of determining a set of parameters; such approaches are also known as self-calibration systems. Due to this particular advantage, self-calibration systems are investigated and analyzed more widely than any other method. Furthermore, expensive measuring devices are not required. As discussed in [10] and [11], calibration is achieved by imposing plane constraints on the TCP's positions. In [12], the authors measured the positions and poses of a robot by matching the pin of the TCP to an aperture on a dime. Related results can also be found in [13], where the authors used a single end point contact constraint equivalent to a ball joint. The robot then moved to different positions that satisfied such constraints. However, those methods are still problematic due to the need for external physical contact as well as the dependence on their individual manufacturing accuracies. Additionally, vision-based approaches and systems have been developed and used to perform the calibration process. However, those systems suffer from a lack of resolution under wide fields of view and low servoing speed due to the low frame rates cameras possess [14], [15]. Because such devices are so expensive or their procedures are time consuming, they are difficult to use extensively in manufacturing plants. For instance, a laser tracker system can cost more than $100,000. Therefore, it is particularly important to develop and design a method which is both cost-effective and easy to implement, while still being able to achieve a high level of position accuracy. Among the existing robot calibration methods, our optical approach using Position Sensitive Detectors (PSDs) is one of the best choices since it promises high precision, fast response, and low computational load.
A line-based method to calibrate the robot's external parameters [16] as well as an approach to calibrate the robot's joint offsets [17] were previously proposed and developed in our lab. Such methods mainly depend on a PSD device and a single laser pointer attached to the TCP of the robot. Based on recorded joint angles and the available forward kinematic information of the robot, the system is able to calibrate the robot. Both simulation and experimental results verified the feasibility of these methods and demonstrated that the developed systems could perform robot calibrations not only during assembly but also in user environments. However, these methods are still time consuming and have not yet been integrated; therefore, they must be performed separately to achieve both workpiece frame and joint offset calibrations. Moreover, two different PSD devices were needed, one for each method, one of which was not portable. Furthermore, it has been shown that results depend heavily on robot configurations, and at least three PSDs and two directions of laser beams are needed to calibrate the external parameters of the robot. This dissertation presents a new dual PSD calibration system for industrial robot calibration. This system can be used to calibrate not only the workpiece frame relationship but also the joint offsets of the robot simultaneously, thus improving speed during the whole calibration process. The design of a new portable position sensitive calibration device (PSCD) is presented. This device requires only two PSDs to perform both calibration methods. The method also employs a focusable laser pointer attached to the TCP. During the calibration task, the beam from the laser pointer is aimed at the center of the first PSD, i.e. PSD1, with a reflection orientation towards the second PSD, i.e. PSD2. Once these two centers are found, the line that passes between them can also be found.
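Geometrically, the constraint defined by the two PSD centers is just the line through them: the laser spot, and hence the TCP motion, must keep a point on that line. A minimal sketch of this geometry, with hypothetical center coordinates in the device frame:

```python
import numpy as np

def line_direction(p1, p2):
    """Unit direction of the line through two PSD centers."""
    v = np.asarray(p2, float) - np.asarray(p1, float)
    return v / np.linalg.norm(v)

def distance_to_line(point, p1, p2):
    """Perpendicular distance of a point from the line through p1 and p2."""
    u = line_direction(p1, p2)
    w = np.asarray(point, float) - np.asarray(p1, float)
    # Subtract the component of w along the line; what remains is the offset.
    return np.linalg.norm(w - np.dot(w, u) * u)

# Hypothetical PSD center positions in the device frame (mm).
psd1_center = [0.0, 0.0, 0.0]
psd2_center = [40.0, 0.0, 30.0]

on_line = [20.0, 0.0, 15.0]   # the midpoint satisfies the constraint
off_line = [20.0, 2.0, 15.0]  # a point 2 mm off the reflected line

print(distance_to_line(on_line, psd1_center, psd2_center))   # 0.0
print(distance_to_line(off_line, psd1_center, psd2_center))  # 2.0
```

A pose satisfies the virtual linear constraint exactly when this distance is zero, which is what the aiming procedure enforces in hardware.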
The reflected line is a virtual linear constraint, meaning that the TCP and the laser beam are constrained to move along that reflected line. The PSCD is located at an unknown position with respect to the robot. During the whole calibration process, the procedure of aiming the laser beam at the center of each PSD repeats only twice, so the approach is simpler and less time consuming than previous methods. Once the aiming process is completed, the joint angles for that particular pose of the robot are recorded. Based on this recorded data along with the kinematic model of the robot, a calibration algorithm is employed. Mathematically, the proposed method uses a non-linear iterative optimization technique to identify the robot parameters from the recorded data. Using different positions, poses and orientations, the errors are minimized. As a final step of the proposed calibration system, after all parameter errors are found, the robotic system is compensated internally to make the system more precise and accurate. Both simulations and experiments were implemented on an ABB industrial robot (IRB120) to verify the efficiency of the proposed method as well as the feasibility of the newly developed calibration system.

1.1 Problem Description and Motivation

Robot calibration is the process of enhancing or improving the accuracy of a robotic system by modifying the robot's model and control software. In general, a calibration system consists of four different but important steps:

1. Kinematic Modeling: First, a mathematical model describing the robot's motion and geometry has to be determined.

2. Pose Localization: At this point, an accurate measurement of the position and orientation of the TCP has to be performed in world coordinates.

3. Relationship Identification: In this step, the relationships between individual joint angles as well as individual user-defined coordinate frames are found.

4.
Compensation: Finally, the kinematic model of the robotic system is compensated through the internal control software, making the overall system more accurate and ready to perform extremely delicate tasks.

The pose localization step is the most crucial phase toward a successful robot calibration. However, it is at this point where most state-of-the-art systems fail or are far from being user-friendly calibration tools. The topic of robot calibration has become a research field of great importance over the last decades, especially in the field of industrial robotics. The main reason for this is that the field of application was significantly broadened due to an increasing number of fully automated or robot-assisted tasks to be performed. Those applications require a significantly higher level of accuracy due to the more delicate tasks that need to be fulfilled (e.g. assembly in the semiconductor industry or robot-assisted medical surgery). In the past, (industrial) robot calibration had to be performed manually for every single robot under lab conditions in a long and cost-intensive process. Expensive and complex measurement systems had to be operated by highly trained personnel. The result of this process is a set of measurements representing the robot pose in the task space (i.e. world coordinate system) and as joint encoder values. To determine the deviation, the robot pose indicated by the internal joint encoder values has to be compared to the physical pose (i.e. external measurement data). Hence, the errors in the kinematic model of the robot can be computed and therefore compensated. These errors are inevitable, arising from varying manufacturing tolerances and other sources of error (e.g. friction and deflection). They have to be compensated in order to achieve sufficient accuracy for the given tasks.
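The identification idea described above, comparing the encoder-implied pose with the measured pose and minimizing the deviation, can be sketched on a toy two-link planar arm whose joint offsets are recovered by Gauss-Newton least squares. All numbers here are illustrative; the dissertation's actual algorithms (Chapter 6) operate on the full robot model and real measurement data.

```python
import numpy as np

def tcp_position(q, delta, l1=0.4, l2=0.3):
    """Planar 2-link forward kinematics with unknown joint offsets delta."""
    t1 = q[0] + delta[0]
    t12 = t1 + q[1] + delta[1]
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t12),
                     l1 * np.sin(t1) + l2 * np.sin(t12)])

# Simulated measurements: the "real" robot has offsets unknown to the model.
true_delta = np.array([0.01, -0.02])  # rad, made up for illustration
poses = [np.array([0.3, 0.5]), np.array([1.0, -0.4]),
         np.array([-0.7, 0.9]), np.array([0.2, 1.2])]
measured = [tcp_position(q, true_delta) for q in poses]

def residuals(delta):
    """Deviation between model-predicted and measured TCP positions."""
    return np.concatenate([tcp_position(q, delta) - m
                           for q, m in zip(poses, measured)])

# Gauss-Newton iteration with a numeric Jacobian.
delta = np.zeros(2)
for _ in range(10):
    r = residuals(delta)
    J = np.empty((len(r), 2))
    eps = 1e-7
    for j in range(2):
        step = np.zeros(2); step[j] = eps
        J[:, j] = (residuals(delta + step) - r) / eps
    delta -= np.linalg.lstsq(J, r, rcond=None)[0]

print(delta)  # ~ [0.01, -0.02]
```

With several poses the residual system is overdetermined, so the recovered offsets are the ones that best explain all measurements at once, which is exactly the role of the nonlinear iterative optimization step mentioned above.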
Furthermore, for performance, maintenance, or quality assurance reasons, the robots may have to undergo the calibration process at constant time intervals to monitor and compensate aging effects such as wear and tear. In modern production processes, old-fashioned procedures like the one mentioned above are no longer suitable. Therefore, a new method has to be found that is less time consuming, more cost effective, and involves less or even no human interaction in the calibration process.

1.2 Calibration Task

There are many traditional calibration systems, from the old-fashioned but reliable systems that use physical contact to the most state-of-the-art systems. Figure 1.1 shows an example (image taken from [18]) of the classical calibration device still in use today. The robot arm has to be moved manually into a position that allows the metal pin attached to the TCP to touch a second metal pin fixed in the robot's task space. This has to be performed multiple times using different joint configurations to achieve the required precision. This method is time consuming and involves a lot of manual interaction by a trained operator. Other approaches are based on non-physical contact, like the ones described in [19] and [20]. These methods used a laser line as a virtual linear constraint on the TCP. Gatla et al. [21] proposed a method based on a virtual closed kinematic chain. In this method, a laser beam aims at a constant but unknown location on a fixed distant object, creating a virtual closed kinematic chain. However, the point of intersection of the TCP position with the line is judged by the operator.

Figure 1.1: Example of the classical calibration system.

Among the most popular state-of-the-art calibration systems is the FARO laser tracker, shown in Figure 1.2 below. Such a device allows robot calibration to be performed accurately and is one of the most reliable systems available, able to overcome most of the problems we are currently trying to solve.
However, as mentioned before, we would like to take the calibration task to the next level, creating a system that is simpler, more cost-effective, and more accurate than any system created before, including our own previous system designs.

Figure 1.2: FARO laser tracker. Note: For interpretation of the references to color in this and all other figures, the reader is referred to the electronic version of this dissertation.

A single PSD device was developed in our labs in the past to overcome those problems and was successfully implemented; this system is shown in Figure 1.3 below. However, it still lacks speed, and its accuracy is not sufficient. Such a system consists of a single PSD mounted on a portable device. The device is arbitrarily located in the robot workspace. The center point of the PSD serves as the single-point constraint. The interface circuit was carefully designed, and the signal tuning board can process the raw output of the laser spot on the PSD surface into two-dimensional position feedback. A PC-based controller collects PSD output data without the need for any attached cables; PSD-based positioning servo and calibration algorithms are applied to complete the calibration process. Through the network-based communication between the robot controller and the PC-based controller, the latter can obtain the current robot position information (task space and joint space) from the robot controller, send control commands to the robot controller, and update the target position in real time.

Figure 1.3: Single PSD-based calibration system

In this dissertation, an optical approach based on two PSDs (i.e. Position Sensitive Detectors) was chosen to improve the calibration process with respect to the time, reliability, and cost aspects that other methods still lack. Another important facet of improvement is to reduce or even eliminate the need for human participation in the process, since human participation is both expensive and error-prone.
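As background on how a PSD yields two-dimensional position feedback, the following sketch applies the standard duo-lateral PSD relation, in which each spot coordinate is proportional to the normalized difference of the opposing electrode photocurrents. The current values and active-area length are made up for illustration; the actual processing circuit used in this work is described in Chapter 4.

```python
def psd_spot_position(ix1, ix2, iy1, iy2, length=10.0):
    """Laser spot (x, y) on a duo-lateral PSD of side `length` (mm),
    computed from the four electrode photocurrents via the standard
    current-difference ratio."""
    x = (length / 2.0) * (ix2 - ix1) / (ix2 + ix1)
    y = (length / 2.0) * (iy2 - iy1) / (iy2 + iy1)
    return x, y

# Equal currents: the spot sits at the detector center.
print(psd_spot_position(1.0, 1.0, 1.0, 1.0))  # (0.0, 0.0)
# Spot shifted toward the +x electrode.
print(psd_spot_position(0.5, 1.5, 1.0, 1.0))  # (2.5, 0.0)
```

Because the ratio cancels the common-mode intensity, the reading is largely insensitive to laser power fluctuations, which is one reason PSD feedback suits this servo task.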
As shown in [2], industrial robots in general feature good repeatability but rather poor accuracy. This derives from the fact that the major share of error contribution is of a non-dynamic nature, such as the joint zero position error. These static errors can be avoided by initially calibrating the robot. For this purpose, a system has to be designed and manufactured that can determine the robot's individual positioning inaccuracy. The idea of the proposed optical system is to obtain a robot-specific representation, such as joint encoder values, for a given set of points in the robot's task space. Since these points are already given in an external representation, i.e. the world coordinate system (WCS), the robot base frame can be related to the task space, i.e. the world frame. Hence, a higher positional accuracy can be achieved using this method. This is based on the fact that the desired pose in the task space can be directly transferred into joint variables in the robot's kinematic model. Once this task is performed, the relationship between the robot base and the task space can be established and, hence, the calibration operation is performed successfully. Further details of the proposed calibration system will be discussed in Chapter 3.

There are two main ways to calibrate an industrial robot, both equally useful in the calibration task:

1. Joint Offset Calibration

2. Robot Workpiece Frame Calibration

Ideally, one would like to have both methods performed at the same time in order to improve the reliability of the calibration system; however, the latter is not always possible, as is the case with our previous method presented in Figure 1.3.

1.2.1 Joint Offset Calibration

Joint offset calibration is the process of calculating the individual variation or error contribution of each of the robot's joints, so that they can be compensated later in the controller's internal kinematic model.
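A minimal sketch of what this compensation amounts to in software, assuming the identified offsets are simply added to the raw encoder readings before they enter the kinematic model (the offset values and sign convention here are made up for illustration):

```python
import numpy as np

# Identified joint offsets delta (rad) for a 6-axis arm; illustrative only.
identified_delta = np.array([0.004, -0.011, 0.002, 0.0, 0.006, -0.003])

def compensated_joints(raw_encoder_angles, delta=identified_delta):
    """Apply the identified zero-position corrections so that the internal
    kinematic model sees the joint angles the physical robot actually has
    (assuming true angle = encoder reading + delta)."""
    return np.asarray(raw_encoder_angles, float) + delta

# At the nominal zero pose the compensated angles equal the offsets.
print(compensated_joints(np.zeros(6)))
```

In a real controller the same correction would be applied in the opposite direction when converting desired joint angles into motor commands; the dissertation's compensation is performed inside the robot controller's own software rather than in external code like this.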
Figure 1.4 shows a schematic of a classical joint offset calibration where the values of δ are unknown. For this particular schematic, the offsets in joints one and two are computed and inserted into the original model, so that the transformation matrix between the base frame and the TCP frame is more accurate than before.

Figure 1.4: Schematic of a traditional joint offset calibration

1.2.2 Robot Workpiece Frame Calibration

Robot workpiece frame calibration is the process of calculating the relationship between the robot base frame and the robot workpiece frame, usually in the form of a transformation matrix, so that the entire kinematic model can be compensated later. Figure 1.5 shows a schematic of a classical workpiece frame calibration where the value of the matrix wTb is unknown. For this particular schematic, the unknown matrix is computed and inserted into the model so that the transformation matrix between the base frame and the TCP frame is more accurate than before.

Figure 1.5: Schematic of a traditional robot workpiece frame calibration

1.3 Thesis Statement

This research is intended to present and describe our newly developed calibration system for industrial robots. In particular, this system uses a combination of position sensitive detectors and laser technology to accurately achieve the calibration task for industrial robots.

Thesis Statement: By designing the most accurate, fastest, most portable, reliable and cost-effective system for robot calibration, we intend to revolutionize the way traditional calibration systems perform today.

1.4 Anticipated Research Contributions

This section is intended to give the reader an idea of the anticipated research contributions and to eliminate any misconceptions.
First of all, this dissertation only covers the work of the first phase of this research. It is particularly important to understand that the presented system does not claim to be a finished product ready for use. In fact, it is intended to be a technical feasibility evaluation performed by the Michigan State University Robotics and Automation Laboratory and funded by Asea Brown Boveri (ABB). The long-term motivation is to be able to replace calibration technologies used today that have some undesirable side effects (discussed in Chapter 2). In particular, an alternative to old-fashioned systems as well as other state-of-the-art systems is to be found. The expectations for this first stage of the dissertation are to design and manufacture a system consisting of PSDs (calibration fixture) and laser devices (laser fixture), and to show that calibration can be performed using this method with particular advantages over any other calibration system.

Anticipated Advantages:

• A simple low-cost design.
• Robustness to external disturbances commonly found in production lines.
• Ease of operation by the user.
• A convenient wireless portable calibration device.
• Ability to be used with any industrial robot manipulator.
• The fastest automatic robot calibration system yet designed.
• Ability to perform both joint offset calibration and workpiece frame calibration at the same time.

In order to achieve those goals, first a data acquisition system must be designed and implemented. Next, a PC-based controller strategy has to be designed, tested and implemented. Then, a calibration system has to be designed in order to identify the kinematic parameters accurately. Lastly, further experiments have to be performed in order to verify that the system achieves the anticipated advantages above.

1.5 Dissertation Outline

The remainder of this dissertation is organized as follows.
In Chapter 2 we review background material on robot calibration systems, previous work on robot calibration, camera calibration techniques, and some other related work. Although closely related work is presented in each of the subsequent chapters, Chapter 2 overviews high-level related work on calibration approaches. Chapter 3 provides an overview of the calibration system as well as the hardware needed to complete the calibration process. Next, Chapter 4 presents the data acquisition system needed to provide feedback to the controller. This chapter explains in more detail the hardware used, the working principle of the position sensitive calibration device and its internal components. Experimental performance results are also illustrated, analyzed and discussed at the end of Chapter 4. Chapter 5 proceeds to present detailed derivations of the control design used for our calibration system. Also in Chapter 5, controller simulations and experiments are presented and discussed. Chapter 6, however, focuses on the analytical and computational aspects of the actual calibration algorithms used to find our parameter errors. This chapter also presents the analysis of the kinematics error model, along with simulation results. Chapter 7 presents the experimental results of the calibration system, including a comparison between our previous approach (Single PSD Calibration System) and the proposed calibration system. Lastly, Chapter 8 presents conclusions, summarizes the contributions, and discusses the remaining investigations for this dissertation.

Chapter 2 Background

This chapter provides a selected literature review and background information on the topic of robot calibration, which is fundamental to this dissertation. First, a brief historical review is provided. Then, related work from other areas of robot calibration is discussed. Next, we overview previous work on robot calibration, specifically that related to this dissertation.
It is at this point that camera calibration techniques, as well as previous work performed in our labs, including the single PSD calibration approach, are briefly presented and discussed. Throughout this chapter, the advantages and limitations that experimental results suggest about previous work on robot calibration will also be presented and discussed, specifically those fundamentally related to this dissertation.

2.1 Historical Background

The exponential growth in robotics research registered around the late 1970's was a result of successfully applying robotic systems to assembly lines in the manufacturing industry. Specifically, the automotive industry was the one that benefited the most. At that time the computer industry was also rapidly growing, making computers more accessible than ever. However, integration between robotic systems and programmable computers did not come until the late 1980's. Therefore, the predominant method for programming robotic systems at that time was to physically move the manipulator's TCP to each task point and record the encoder information for each joint, so that it could be replayed later on. Likewise, at that time, robotic systems were designed with low accuracy but high repeatability, since the latter was the most important quality for most applications. In the 1980's, numerous publications emerged from researchers working on robotic systems. In [22] the author developed a succession of interesting ideas concerning representation, specifically the use of homogeneous matrices. His Denavit-Hartenberg (D-H) kinematic modeling formulation for robot path planning and control, as well as his systematic use of homogeneous transformations, quickly became standards for researchers. As a result, manipulator controllers were designed and built using D-H model conventions.
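As a concrete illustration of the D-H convention mentioned above, the sketch below (an illustrative implementation, not tied to any particular robot in this work) builds the classic homogeneous link transform and chains two example links into a base-to-TCP pose.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Classic Denavit-Hartenberg link transform: rotate theta about z,
    translate d along z, translate a along x, rotate alpha about x."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Chaining one transform per link yields the base-to-TCP pose; an error
# in any joint or link parameter anywhere in this chain corrupts it.
T = np.eye(4)
for params in [(0.1, 0.3, 0.0, np.pi / 2), (0.2, 0.0, 0.4, 0.0)]:
    T = T @ dh_transform(*params)
```

This chained-product structure is exactly why small per-joint offsets accumulate into a TCP pose error.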
The growth of task space robot path planning and the introduction of sophisticated sensors such as force sensors, cameras and proximity sensors, together with the personal computer revolution, fueled the idea that robot manipulators would soon be able to implement fully automated factories, as well as be key elements in many sophisticated systems involving the online interaction of large amounts of sensory data. Theoretically, those applications would require repeated use of the robot inverse kinematic model. Moreover, they introduce the need to program the robot off-line to align the TCP to a new task point without previous knowledge of such task point. It is at this point that the new problem of robot accuracy starts to arise. The accuracy of a robotic system highly depends on how accurate the robot kinematic model embedded in its own controller software is. Therefore, researchers began to study the effects of geometrical parameter errors, joint axis misalignments, joint offsets, and other possible sources of error in the robot's TCP positioning. In 1983 a major finding was reported in [23], stating that the D-H model is singular for robots with parallel joint axes. Thus, the authors introduced a modification to the so-called D-H model, which gained popularity among researchers and was eventually adopted by many. Since most industrial robots are designed to be simple and user friendly, i.e., to have perpendicular and parallel joint axes, the singularity problem represents a major issue among researchers. During the mid 1980's, robot calibration was born as a new research area in order to improve the accuracy of robotic systems. To improve the accuracy of a robot it is necessary to measure the world coordinates of the robot's TCP at different joint space configurations, or robot poses, and record each joint value at those configurations.
From a practical point of view, the robot task point at each configuration computed by the nominal robot kinematic model will always differ from the actual TCP absolute position and orientation measurements; the size of the discrepancy that matters depends on the level of accuracy required. Therefore, if those two measurements differ, the nominal robot kinematic model needs to be suitably modified. As more demanding accuracy requirements for robots in the manufacturing and surgical industries arise, robot calibration becomes a crucial and absolutely necessary element to enhance robot accuracy. In other words, the greater the demand for higher levels of accuracy, the more important robot calibration became. Many robot calibration studies during the late 1980's were done by numerous researchers. In particular, [24] presents a systematic method to compute the identification Jacobian, which is the matrix relating the TCP pose errors to the robot kinematic parameter errors. In 1988, two techniques for accuracy compensation were described in [25]. As robot calibration research expanded, a four-step problem was featured in [26], consisting of modeling, measurement, identification and compensation. In the early 1990's that work was expanded into a full-scope book [1]. The book features one of the first tutorials on the fundamental concepts and methodologies involved in robot calibration. During the 1990's many more robot calibration studies were done by researchers; however, some key problems and questions persisted that even today are not fully solved and answered, such as:

• How can an optimal robot measurement configuration be chosen?
• Is the observability of kinematic errors related to the selection of configurations?
• Should robotic systems be designed differently to accommodate online calibration capabilities?
• Is it possible to perform robot calibration cheaper and yet faster?
• What current technology can be used to develop an economically feasible calibration system?
• Can the entire process of robot calibration be entirely automated?
• Can such a system be easy to operate by the user?

In 2006 a book was released ([27]) documenting and addressing some of these research issues using a robot calibration method aided by cameras; in fact, most of our research is inspired by that book, where most of the work was done in the late 1990's. In 2009 we began to investigate and develop our new calibration system, which essentially answers the rest of those issues and questions.

2.2 Related Work

From a data collection point of view, robot performance evaluation and robot calibration are similar research areas. Robot performance evaluation assesses the repeatability as well as the accuracy of robotic systems. Many researchers focused on the evaluation of machine tools and robot manipulators at the National Bureau of Standards. An excellent survey about robot end-joint sensing techniques can be found in [28]. In fact, one of the first major reports of actual robot calibration experiments was the paper in [29], in which data collection was performed using theodolites. Many robot testing studies were performed using a wide range of measurement techniques, from expensive Coordinate Measuring Machines (CMM) and tracking interferometer systems to others employing customized fixtures. The main idea was the measurement, in world coordinates, of one point on the robot TCP. Often those coordinates are defined based on the calibration equipment. The measured point represents the TCP position, and the measurement of more than three coordinate points provides both the position and orientation of the TCP. On the other hand, camera calibration is a key element related to this dissertation. In camera calibration the biggest issue is the difficult tradeoff between camera resolution and camera field of view.
To solve this problem the camera should always remain close to the moving robot target. Therefore, it is necessary to move the cameras together with the robot TCP. Experimental results were reported in [30] demonstrating that robot calibration using moving cameras is feasible. The main idea is that the camera attached to the robot TCP will continue to be calibrated with respect to a fictitious camera calibration fixture that moves together with the moving robot, avoiding the problem of calibrating the actual camera before each calibration procedure is performed.

2.3 Review of Previous Work

The following section is focused on a review of the previous work on robot calibration that is specifically relevant to this dissertation. First, we present a review of camera calibration techniques, followed by our own previous work on robot calibration using a single position sensitive detector (PSD). The intent of this section, however, is to focus on those techniques that have practical relevance to the problem of robot calibration and to this dissertation, rather than to provide a comprehensive review of this rich area.

2.3.1 Camera Calibration Techniques

There are many different camera calibration models, which can be classified into linear or nonlinear models. For the purposes of this dissertation, an ideal distortion-free camera model is presented to provide the reader with the basic concepts. The purpose of this model is to relate the coordinates of an object visible to the camera to the coordinates of that object in a reference coordinate system. In this dissertation this object image is the laser beam spot visible on the surface of a calibration device, creating a point in the image frame. Let {x_w, y_w, z_w} be the world coordinates, and {x, y, z} denote the camera coordinates, whose origin is at the optical center point O. Here the z axis coincides with the optical axis, and {X, Y} denote the image coordinates with center at O_I, i.e.
the intersection of the optical axis z and the image plane. As shown in Figure 2.1, {X, Y} lies on a plane parallel to the x and y axes. The relationship between the world coordinates {x_w, y_w, z_w} and the camera coordinates {x, y, z} is given by

\[ \begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + t \tag{2.1} \]

where the rotation matrix R and the translation vector t are denoted as

\[ R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix} \tag{2.2} \]

and

\[ t = [\, t_x \;\; t_y \;\; t_z \,]^T \tag{2.3} \]

Figure 2.1: Camera coordinate system

The main idea of this model is that every object point is connected to its corresponding image point by a straight line through the focal point of the camera lens. Therefore, the following equations hold:

\[ u = f\,\frac{x}{z} \tag{2.4} \]

\[ v = f\,\frac{y}{z} \tag{2.5} \]

where f is the focal length of the camera and (u, v) are the coordinates of the object point in the image plane. The image coordinates (X, Y) are related to (u, v) as follows:

\[ X = s_u u \tag{2.6} \]

\[ Y = s_v v \tag{2.7} \]

where s_u and s_v are scale factors from the camera coordinates (u, v), measured in mm, to the image coordinates (X, Y), measured in pixels. f, s_u and s_v are the intrinsic parameters that carry the internal information about the camera components and the interface of the camera to the vision system. Because there are only two independent parameters in this set of intrinsic parameters, let us define

\[ f_x \equiv f s_u \tag{2.8} \]

\[ f_y \equiv f s_v \tag{2.9} \]

Combining equations (2.8) and (2.9) with equation (2.1), we have

\[ X = f_x\,\frac{r_1 x_w + r_2 y_w + r_3 z_w + t_x}{r_7 x_w + r_8 y_w + r_9 z_w + t_z} \tag{2.10} \]

\[ Y = f_y\,\frac{r_4 x_w + r_5 y_w + r_6 z_w + t_y}{r_7 x_w + r_8 y_w + r_9 z_w + t_z} \tag{2.11} \]

Note that this model relates the world coordinate system {x_w, y_w, z_w} to the image coordinate system (X, Y). Usually the image coordinates stored in the computer memory of the vision system are not equal to the image coordinates (X, Y).
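The mapping of equations (2.1) and (2.10)-(2.11) amounts to only a few lines of code; the sketch below is an illustrative implementation (variable names are ours) of the distortion-free projection.

```python
import numpy as np

def project_point(pw, R, t, fx, fy):
    """Distortion-free pinhole projection of a world point, following
    equations (2.1) and (2.10)-(2.11)."""
    # World -> camera frame, equation (2.1).
    x, y, z = R @ np.asarray(pw, float) + np.asarray(t, float)
    # Perspective division and intrinsic scaling, equations (2.10)-(2.11).
    return np.array([fx * x / z, fy * y / z])

# With an identity pose, a point on the optical axis projects to the
# image center, while off-axis points scale with fx and fy.
center = project_point([0.0, 0.0, 2.0], np.eye(3), np.zeros(3), 500.0, 400.0)
```

Note the common denominator of (2.10) and (2.11) is simply the point's depth z in the camera frame.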
Thus, let (X_f, Y_f) be the computed image coordinates of an arbitrarily selected point, and (C_x, C_y) be the computed center O_I of the image coordinates in the image plane. Then the following equations can be derived:

\[ X = X_f - C_x \tag{2.12} \]

\[ Y = Y_f - C_y \tag{2.13} \]

where the ideal values of C_x and C_y can be obtained from the image size parameters of the vision system. The camera parameters to be calibrated from the camera model defined by equations (2.10) and (2.11) are the independent extrinsic parameters of R and t, and the intrinsic parameters f_x, f_y, C_x and C_y. Camera calibration is done by taking a set of n points with known world coordinates {x_{w,i}, y_{w,i}, z_{w,i}}, i = 1, 2, ..., n, which are within the field of view of the camera. Those points can be detected in the camera image at their respective image coordinates (X_i, Y_i). The camera calibration problem is to identify the unknown coefficients of the camera model given the above known data. Identification of the parameters in (2.10) and (2.11) will provide the pose of the camera in world coordinates.

One of the most basic camera calibration methods consists of linear least squares identification of the transformation matrix. Using this method, the model in (2.10) and (2.11) can be rewritten as

\[ X = \frac{a_{11} x_w + a_{12} y_w + a_{13} z_w + a_{14}}{a_{31} x_w + a_{32} y_w + a_{33} z_w + a_{34}} \tag{2.14} \]

\[ Y = \frac{a_{21} x_w + a_{22} y_w + a_{23} z_w + a_{24}}{a_{31} x_w + a_{32} y_w + a_{33} z_w + a_{34}} \tag{2.15} \]

Here we can set a_{34} = 1, since scaling the coefficients a_{11}, ..., a_{34} does not change the values of X or Y. Equations (2.14) and (2.15) can now be combined into the model

\[ \begin{bmatrix} x_w & y_w & z_w & 1 & 0 & 0 & 0 & 0 & -X x_w & -X y_w & -X z_w \\ 0 & 0 & 0 & 0 & x_w & y_w & z_w & 1 & -Y x_w & -Y y_w & -Y z_w \end{bmatrix} \begin{bmatrix} a_{11} \\ \vdots \\ a_{33} \end{bmatrix} = \begin{bmatrix} X \\ Y \end{bmatrix} \tag{2.16} \]

The minimum number of calibration points needed to obtain the unknown coefficients a_{11}, ..., a_{33} using linear least squares is six, since each data point pair {(x_{w,i}, y_{w,i}, z_{w,i}), (X_i, Y_i)} contributes two equations for the unknown vector [a_{11}, ..., a_{33}]^T in (2.16). However, it is important to note that such points cannot all be coplanar. For instance, consider the case where z_{w,i} = C with C constant; then columns 3 and 4, as well as columns 7 and 8, of the left-hand-side matrix become linearly dependent and the matrix is therefore singular. Moreover, the solution of equation (2.16) is not a globally optimal solution of the camera calibration problem, since it was derived from a distortion-free model. However, using the same distortion-free model, if we consider the use of two cameras to obtain two sets of coefficient vectors, then it is possible to compute the unknown coordinates {x_w, y_w, z_w} of any point present in both fields of view from the point's image coordinates. For the unknown point, with corresponding image points {X^A, Y^A} and {X^B, Y^B}, we can construct the following equation:

\[ \begin{bmatrix} a_{11} - a_{31} X & a_{12} - a_{32} X & a_{13} - a_{33} X \\ a_{21} - a_{31} Y & a_{22} - a_{32} Y & a_{23} - a_{33} Y \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = \begin{bmatrix} X - a_{14} \\ Y - a_{24} \end{bmatrix} \tag{2.17} \]

Two pairs of equations from camera A and camera B provide a 3D measurement of a point from its measured image coordinates. In terms of this dissertation, this fact was important and inspired the use of two sensors instead of one, to reduce the computational load and increase the speed of the calibration process, among other advantages. Full details of these advantages will become evident in Chapter 6 and Chapter 7.

2.3.2 Single PSD Calibration System

A parameter calibration approach called virtual lines-based single-point constraint (VLBSPC) was proposed and implemented in our labs to overcome most calibration problems.
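The linear least squares identification of equation (2.16) can be sketched as follows; this is an illustrative implementation (function and variable names are ours), stacking the two rows per point exactly as in that equation.

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Stack two rows per calibration point as in equation (2.16), with
    a34 = 1, and solve for [a11, ..., a33] by linear least squares.
    Needs at least six points, not all coplanar."""
    rows, rhs = [], []
    for (xw, yw, zw), (X, Y) in zip(world_pts, image_pts):
        rows.append([xw, yw, zw, 1, 0, 0, 0, 0, -X * xw, -X * yw, -X * zw])
        rows.append([0, 0, 0, 0, xw, yw, zw, 1, -Y * xw, -Y * yw, -Y * zw])
        rhs += [X, Y]
    a, *_ = np.linalg.lstsq(np.asarray(rows, float),
                            np.asarray(rhs, float), rcond=None)
    return a  # the eleven coefficients a11 ... a33
```

Feeding the recovered coefficients back into (2.14)-(2.15) reproduces the measured image coordinates, which is a useful sanity check on the conditioning of the point set.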
A comprehensive and exhaustive review of this calibration system can be found in [17] and [31]. Unlike previous calibration methods, this approach does not need any physical contact, and the developed device is affordable and automated. The proposed method depends mainly on a laser pointer attached to the tool center point (TCP) of a robot and only one position-sensitive detector (PSD). A schematic of this developed system is depicted in Figure 2.2. The coordinates of the PSD in the workcell are unknown. The automated calibration procedure involves aiming the laser mounted on the robot towards the center of the PSD surface from various robot positions and orientations. Once the precise positioning is done by PSD-based servo, all the laser lines will shoot at the same point within a very small range of error, and a set of robot joint angles will be recorded. Based on the recorded joint angles and forward kinematics, a joint angle offset estimation method has been developed. Evidently, if the offset values of all joints are zero, the intersections of every laser-line pair computed from the recorded joint angles and forward kinematics are the same point. However, if the offset values of all joints are not zero, the intersections of every laser-line pair will be different points. In other words, the distribution of the intersections depends on the robot offsets. An optimization model and algorithm were also formulated to identify the robot's constant offsets.

Figure 2.2: Schematic of the single PSD calibration system

For the purpose of precise positioning on the same point, the segmented PSD was employed for high precision feedback, with a resolution better than 0.1 µm, and a PSD-based controller was designed and implemented.
The center point of the PSD will function as the single-point constraint. The interface circuit was well constructed, and the signal tuning board was able to process the raw output of the laser spot on the PSD surface for two-dimensional position feedback. A new parameter calibration approach, called VLBSPC, was developed to calibrate the joint offsets. The calibration procedure, as shown in Figure 2.3, is performed by pointing the laser beam at the same point from various positions and orientations. This point is the center point of the PSD, and the coordinates of the point in the robot base frame are unknown. It is guaranteed that the laser beams shoot at the same point because the robot aims the laser at the center of the PSD through PSD-based feedback and servo.

Figure 2.3: Single PSD calibration method

Sets of robot joint angles are recorded during the localization. Substituting the recorded joint angles into the forward kinematics with offset error, the homogeneous transformation of the end-effector frame with respect to the robot base frame is given by

\[ \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.18} \]

Note that the unknown parameters are the joint offsets. Therefore, combining the tool parameters and the forward kinematics with offset error, one of the laser lines, translated from the TCP frame to the robot base frame, is described by

\[ \frac{x_B - x_{iB}}{m_{iB}} = \frac{y_B - y_{iB}}{n_{iB}} = \frac{z_B - z_{iB}}{p_{iB}} \tag{2.19} \]

where (x_{iB}, y_{iB}, z_{iB}) are the coordinates of one point of the laser line in the robot base frame and (m_{iB}, n_{iB}, p_{iB}) is the unit vector of the laser line direction in the robot base frame. Suppose N sets of joint angles are recorded after calibration. From equation (2.19), N laser lines are obtained.
Let Γ_{Li} denote the i-th laser line, P_k denote the intersection, or the midpoint of the shortest segment, between Γ_{Li} and Γ_{Lj} (i ≠ j; i, j ∈ N; k ∈ M), and P_{Ave} denote the mean point of all the intersections P_k (k = 1, ..., M). The coordinate errors between the points P_k and P_{Ave} are denoted {}_{x}Ψ_k, {}_{y}Ψ_k, {}_{z}Ψ_k in the x, y, z directions, respectively. The parameters δ of the joint offsets are identified by minimizing the total sum of the squares of the coordinate errors:

\[ \delta^{*} = \arg\min_{\delta} \sum_{k=1}^{M} \left( {}_{x}\Psi_k^2 + {}_{y}\Psi_k^2 + {}_{z}\Psi_k^2 \right) \tag{2.20} \]

where M is the number of intersections between laser lines. Note that P_{Ave} is updated during the minimization iteration process, and P_k is the midpoint of the shortest segment between Γ_{Li} and Γ_{Lj} if the two lines do not have a real intersection. The method for the non-linear optimization is iterative. For this non-linear least squares problem, the Levenberg-Marquardt algorithm (LMA) is used, implemented in C++. The algorithm finds the minimum quickly, mostly in fewer than 10 iterations. The optimization algorithm is a damped Gauss-Newton method based on the Jacobian J and a damping parameter µ. The step h_{lm} is defined by

\[ \left( J^T J + \mu I \right) h_{lm} = -g \tag{2.21} \]

where g = J^T Ψ and µ ≥ 0. Figure 2.4 shows the experimental setup of the system using an ABB IRB 120 robot arm. Several experiments, including those reported in [31] using an ABB industrial robot (IRB1600), verified the effectiveness of both the proposed method and the developed system. They also suggest the system fits the need for an easy-to-set-up, totally automated, low-cost, and high-precision robot offset calibration.

Figure 2.4: Single PSD experimental setup

While this developed calibration system was a promising solution, further experiments revealed several problems concerning the reliability of the system. Experiments were performed using error tolerances of 0.1 µm, 0.5 µm, and 0.01 µm.
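The quantities P_k and the cost of equation (2.20) can be sketched in a few lines; this is an illustrative sketch (names are ours), from which the residual vector for the Levenberg-Marquardt iteration would be built.

```python
import numpy as np

def closest_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two lines, each given by
    a point p and a direction d; this plays the role of P_k and equals
    the true intersection when the lines actually cross."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    den = a * c - b * b               # zero only for parallel lines
    s = (b * e - c * d) / den
    t = (a * e - b * d) / den
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

def dispersion(points):
    """Sum of squared coordinate errors of the P_k about their mean,
    i.e. the scalar cost minimized in equation (2.20)."""
    pts = np.asarray(points, float)
    return float(((pts - pts.mean(axis=0)) ** 2).sum())
```

With zero joint offsets all midpoints coincide and the dispersion vanishes; non-zero offsets spread the P_k apart, which is exactly the signal the optimizer drives to a minimum.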
Those experiments suggest that there is a correlation between the error tolerance and the accuracy and reliability of the solution. However, as we improved the calibration solutions by decreasing the tolerance, the real speed of the system had to be compromised, and in the case of a 0.01 µm tolerance, the system was not able to finish the calibration process. Figure 2.5 shows a magnified view of the center of the PSD at the nanometer scale. In theory, we expect to have at least seven lines intersecting each other at one common point. If this were true, then our calibration system would guarantee the reliability of the solution for the calibration algorithms. However, from a practical point of view, this is nearly impossible to achieve.

Figure 2.5: Ideal intersection point for the laser lines.

Instead, Figure 2.6 shows a magnified view of the center of the PSD in reality. From this figure it can be seen that the lines will not intersect each other. They will deviate slightly from one another, by amounts determined by the error tolerance allowed by the system before moving from one pose to another. This error tolerance is represented by a 3-D sphere whose radius equals the tolerance provided. The calibration system proposed by this dissertation will essentially eliminate this problem by guaranteeing a perfect intersection point from the reflection between the sensors. This advantage will become evident when we discuss the experimental results in Chapter 7.

Figure 2.6: Real intersection point for the laser lines.

Chapter 3 Calibration System Overview

This chapter presents an overview of the proposed calibration system and is intended to give the reader an idea of the hardware needed, as well as a brief summary of the calibration working principle.
As described in Chapter 1, the basic idea of calibration is the comparison of the robot external (world coordinates) and robot internal (joint encoder values) representations of a certain location in the task space to determine the error. To perform the calibration task, the external representation has to be known. One of the guiding aspects of the design process was that the system has to be portable. This implies, however, that it is not feasible to fix the calibration fixture to the workcell floor and measure its location (which is also error-prone). Therefore, an alternative way to determine the points relative to the world coordinate system, without a fixed location in the workcell, has to be found, and it is necessary to put some extra effort into the design process. A local coordinate system has to be found that best suits our requirements, the way the robot will move according to our virtual linear constraint must be defined, and finally the hardware needed to accomplish the entire calibration process has to be developed.

3.1 Local Coordinate System

Let us assume a local coordinate system is established on the lower right edge of the calibration fixture, as shown in Figure 3.1. Since, as discussed in the overview, a fixed connection of the device to any external world coordinate system cannot be established, another option must be considered. The world coordinate system (WCS) can basically be placed anywhere, since neither the origin nor the orientation are restricted in any way. In this case, the best and most convenient location for the WCS is the origin of the calibration fixture coordinate system (CFCS). This way, further complex transformations from CFCS to WCS are not required. Another welcome side effect of this WCS position is that this source of positional error is eliminated in advance.
The tradeoff for this, however, is that one has to come up with a way to determine the PSD center positions relative to the calibration fixture coordinate system (CFCS) origin O_f and its coordinate axes x, y, z (see Figure 3.1); in other words, the device itself has to be calibrated before use. Furthermore, a way to relate this to the robot has to be found, too. Both can be simultaneously achieved using the calibration system designed for this dissertation.

Figure 3.1: Sketch of the calibration fixture device.

3.2 Introduction to the System

Figure 3.2 shows the schematic model of the calibration system, implemented and verified on an ABB robot under lab testing, as shown in Figure 3.3. This robotic system comprises an ABB robot controller (IRC5 Compact) and a six degree of freedom (6-DOF) robot manipulator (IRB120).

Figure 3.2: Proposed dual PSD calibration system schematic.

In Figure 3.2, the model of the proposed robot calibration system mainly consists of a laser and adapter, a portable dual PSD calibration device, and the robotic system to be calibrated. A laser pointer is mounted on its fixture and rigidly attached to the robot TCP. A magnification of this fixture is shown in Figure 3.4. The laser beam is tuned to align its orientation with the X-axis of the TCP frame.

Figure 3.3: ABB IRB120 calibration system implementation

Two PSD sensors are mounted on a portable, custom-built, high-precision fixture. The location of the fixture with respect to the workpiece frame {D} is known, while its location with respect to the robot base frame {B} is unknown. For the portable PSD device we adopt the segmented PSD for its high-precision feedback, as shown in Figure 3.5. The segmented PSD has a resolution better than 0.1 µm in theory.
Even under experimental conditions, its resolution may reach approximately 0.2 µm. The protective glass of each sensor was carefully removed in order to create a purer, more nearly perfect reflection from one sensor to the other. The calibration process, shown in Figure 3.2, is completed by locating the TCP and the laser pointer at four different positions (positions 1-4). While the laser pointer is located at positions 1 and 2, the laser beam should be aimed at the center of PSD1 and reflected off the PSD1 surface in a direction toward the center of PSD2. Similarly, while the laser pointer is located at positions 3 and 4, the laser beam should be aligned with the center of PSD2 and reflected off the PSD2 surface in a direction toward the center of PSD1. Hence, four sets of robot joint angles can be recorded by the robot controller. The idea behind this method is to overcome the problems we faced with the previous calibration system. By definition, the reflection from one sensor to another creates a perfect intersection point between two lines. By aiming the two laser spots at the center of each sensor, the ability to find a unique line that passes through these points is guaranteed, improving, as a consequence, the consistency and reliability of the calibration solutions. Based on the recorded joint angles at these four positions and the robot's forward kinematics, robot calibration algorithms are developed to identify the unknown parameters.

3.3 Calibration System Hardware

This section presents the system hardware needed for our calibration approach. Figure 3.4 shows the laser fixture used to attach both the laser pointer and a camera.
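The geometric core of the dual-PSD idea above is the mirror reflection law; a minimal sketch (assuming an ideal specular surface with a known unit normal) is:

```python
import numpy as np

def reflect(d, n):
    """Specular reflection of an incident direction d off a surface
    with unit normal n: r = d - 2 (d . n) n."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    return d - 2.0 * (d @ n) * n

# A beam hitting a horizontal surface at 45 degrees leaves at 45
# degrees on the other side of the normal.
r = reflect([1.0, 0.0, -1.0], [0.0, 0.0, 1.0])  # -> [1.0, 0.0, 1.0]
```

Because the incident and reflected rays share the point of incidence on the sensor surface, the two lines intersect exactly, which is what removes the near-miss problem of the single-PSD approach.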
The use of a camera is crucial in the effort to make the system autonomous: it guides the laser spot to be roughly aligned with the active area of the PSD, and therefore avoids the need for an operator to manually set the robot position and orientation so that the laser beam initially hits the surface of the PSD. Using so-called camera servoing, we are able to solve this problem and guide the laser beam to the initial target. Complete coverage of the role and working principle of this camera will be given in Chapter 5.

Figure 3.4: Laser pointer and camera attached to the robot TCP.

Another piece of hardware needed, and perhaps the most important of all, is the portable dual PSD calibration device, also known as the position sensitive calibration device (PSCD). The PSCD used in this dissertation is shown in Figure 3.5. The working principle of this device will be explained in more detail in Chapter 4 as well. Note that this is a prototype; a better, lighter and smaller device might be developed in the future. Therefore, by using only a camera, a laser pointer and our PSCD, the complex task of robot calibration can be performed in a simple and cost-effective manner.

Figure 3.5: The position sensitive calibration device (PSCD).

Chapter 4 Data Acquisition System

This chapter presents the data acquisition system (DAS) used to gather the feedback information needed for the robot controller. It is perhaps the most important component in ensuring system accuracy and reliability. First, the fundamentals and working principle of the sensors chosen for this dissertation, i.e. position sensitive detectors (PSDs), will be presented and discussed. Then, a processing board circuit design is analyzed and tested. The position sensitive calibration device (PSCD) prototype is introduced along with the graphical user interface (GUI) used to display the feedback data.
Finally, experiments on the PSCD performance were conducted and analyzed to ensure positioning quality that meets our requirements. 4.1 Position Sensitive Detector Working Principle In general, a position sensitive detector is a sensor capable of tracking the location of a light beam on its surface at the nanoscopic level. It essentially consists of either one or two resistive layers placed on the surface of a highly resistive substrate, as shown in Figure 4.1. Such a device consists of three semiconductor layers, of which only the top one is used to determine the position in the case of a one-dimensional PSD. In two-dimensional devices, the bottom layer is used in a similar manner to collect positional information. Figure 4.1: Schematic of a one-dimensional PSD chip. The working principle of a PSD is quite simple. If the top P-layer is stimulated with a beam emitted from a light source, an electric charge is generated that is proportional to the light intensity. The potential formed in the resistive layer causes photocurrents to flow between the spot of stimulation and the two electrodes on either end of the layer. Due to the uniformity of the resistive layer, I1 and I2 are inversely proportional to the distance between the location of the potential, i.e., the laser beam spot, and the respective electrode. The above statement can be summarized in the form of an equation as follows:

U = R · I  →  I = U/R    (4.1)

Assuming uniformity of the resistive layer, i.e., ρ = const and A = const, the following equation holds:

R = ρ · l/A  →  R ∼ l    (4.2)

Combining equations 4.1 and 4.2 we have:

I ∼ 1/l    (4.3)

where I is the occurring current (A), U is the generated potential (V), R is the resistance of the photo-active area (Ω), l is the distance between the light spot and the respective electrode (mm), A is the cross-sectional area of the resistive layer (mm²), and ρ is the specific electrical resistance of the resistive layer material (Ω·m).
Therefore, the relation between the location of the laser beam spot and the occurring photocurrents can be expressed as [32]:

I1 = ((Lx/2 − Δx)/Lx) · (I1 + I2) = ((Lx/2 − Δx)/Lx) · I0    (4.4)

I2 = ((Lx/2 + Δx)/Lx) · (I1 + I2) = ((Lx/2 + Δx)/Lx) · I0    (4.5)

Hence:

(I1 − I2)/(I1 + I2) = −2Δx/Lx  ⇔  I1/I2 = (Lx − 2Δx)/(Lx + 2Δx)    (4.6)

where I1 and I2 are the output currents of the resistive layer (A), I0 is the total photocurrent I1 + I2 (A), Lx is the total length of the active area (resistive layer) between the electrodes (mm), and Δx is the distance of the laser beam spot from the center of the PSD (mm). There are two main types of PSDs produced by various manufacturers worldwide: lateral PSDs and segmented PSDs. Both types are produced in one-dimensional and two-dimensional versions. However, they have some things in common that are unique and superior compared to other optical tracking devices. They offer outstanding positional resolution for a wide spectral range of light used to stimulate the PSD. They also respond to changes almost without delay, even without any additional biasing efforts that can be made to reduce the delay. In this dissertation, the two types are analyzed to compare the advantages and disadvantages that could lead to a better PSCD design. 4.1.1 Lateral PSDs The first type of PSD is usually referred to as a lateral PSD. This type is available in one-dimensional and two-dimensional realizations. The one-dimensional version is shown in the previous section (Figure 4.1). The chip used for 2-D position detection is manufactured in a similar way; the only difference is that the bottom layer is also equipped with two electrodes. This second layer works exactly the same way as the top layer, except that the electrodes mounted on the bottom layer are aligned at a 90° angle relative to the top layer to represent the Y-axis, as shown in Figure 4.2. Also in this figure, the sensor is compared with a dime in terms of size.
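Equations 4.4-4.6 can be inverted to recover the beam offset from the two measured currents: Δx = Lx·(I2 − I1)/(2·I0). A minimal sketch, with hypothetical current values:

```python
def psd_1d_position(i1, i2, lx):
    """Beam offset from the PSD center, inverting eqs. 4.4-4.6:
    dx = Lx * (I2 - I1) / (2 * I0), where I0 = I1 + I2."""
    i0 = i1 + i2
    return lx * (i2 - i1) / (2.0 * i0)

# Illustrative: a beam 1 mm off-center on a 10 mm chip splits the total
# photocurrent 40/60 between the two electrodes (eq. 4.4 with dx = 1 mm).
dx = psd_1d_position(0.4e-3, 0.6e-3, 10.0)  # currents in A, length in mm
```

With these numbers the function returns 1.0 mm, matching the forward model in equation 4.4.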
In lateral PSDs, the relative two-dimensional position on the active surface of the chip can be expressed as:

X = (Ix1 − Ix2)/(Ix1 + Ix2)    (4.7)

Y = (Iy1 − Iy2)/(Iy1 + Iy2)    (4.8)

Figure 4.2: a) Photograph of a lateral PSD. b) Schematic illustration. where X is the relative laser beam position on the X-axis (mm), Y is the relative beam position on the Y-axis (mm), and Ix1, Ix2, Iy1, Iy2 are the measured photocurrents (A) (Figure 4.2). The top resistive layer is used to determine the beam location in the X-axis direction; therefore, two electrodes are mounted on the left and right ends of the active area to gather the occurring photocurrents. The bottom layer is equipped in a similar manner to determine the location in the Y-axis direction. The major advantage of this type is that the accuracy of the output is not affected by the spot profile of the beam or its intensity distribution. The positional resolution, however, is lower than that offered by the segmented type; the achieved resolution is approximately 0.5 µm. Another outstanding property of this type is the position linearity over the whole active surface of the chip. This is important for our task since it allows us to keep the error at a low level during the mapping and compensation process. Furthermore, the area of the chip that offers high resolution is considerably larger than that of the segmented type. 4.1.2 Segmented PSDs Segmented PSDs are common-substrate photodiodes that are divided into segments separated by a gap, as shown in Figure 4.3. This gap, also referred to as the dead region, is a section of the chip that is not affected by any light stimulating it. The gaps are necessary to electrically isolate the PSD segments. Segmented PSDs are produced with either two (one-dimensional) or four (two-dimensional) segments. When light hits the active surface, photocurrents occur in each segment. Those currents can be measured at the respective electrodes attached to each segment to determine the position.
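The lateral-PSD formulas (eqs. 4.7-4.8) yield dimensionless ratios in [−1, 1]; scaling by half the active-area length converts them to millimeters. A minimal sketch with hypothetical currents:

```python
def lateral_psd_position(ix1, ix2, iy1, iy2):
    """Relative beam position on a lateral 2-D PSD per eqs. 4.7-4.8.

    Returns dimensionless values in [-1, 1]; multiply by half the active
    length to obtain an offset in mm.
    """
    x = (ix1 - ix2) / (ix1 + ix2)
    y = (iy1 - iy2) / (iy1 + iy2)
    return x, y

# Illustrative: beam halfway toward the x1 electrode, centered in y.
x, y = lateral_psd_position(0.75e-3, 0.25e-3, 0.5e-3, 0.5e-3)
```

With these numbers the function returns (0.5, 0.0), i.e., halfway between center and the x1 edge.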
Figure 4.3: Schematic illustration along with a photo of a segmented PSD. The operation principle is also simple. When a beam of light hits the surface of the chip, photocurrents are generated as explained in the previous section. Again, the currents are proportional to the intensity of the light each segment is exposed to. Therefore, the relative position of the beam can be expressed as:

X = ((IB + ID) − (IA + IC))/(IA + IB + IC + ID)

Y = ((IA + IB) − (IC + ID))/(IA + IB + IC + ID)    (4.9)

where X is the relative laser beam position on the X-axis (mm), Y is the relative beam position on the Y-axis (mm), and IA, IB, IC, ID are the photocurrents (A) measured in the PSD segments noted in the indices, as shown in Figure 4.3. The position in the X and Y directions can easily be calculated based on the photocurrents measured in each segment. The achievable resolution, i.e., the minimal detectable change in beam position, with this type of PSD is approximately 64 nm. However, there are several restrictions that need to be fulfilled in order to get correct results: • The beam has to overlap all segments at all times. • The diameter of the focused beam has to be larger than the gap in order to reach the active area and generate an output. • The beam's intensity distribution must be uniform, since the photocurrents in the respective segments are proportional to intensity. After carefully analyzing and comparing both PSD types in terms of accuracy, delay and signal properties, the segmented type was chosen for this dissertation. 4.2 PSD Processing Circuit Design Before the actual calibration task could be approached, a signal processing circuit had to be designed and set up in order to test the properties of the chosen PSD chips (i.e., position linearity over the active chip area, positioning sensitivity, etc.). Since the active chip area of the PSD is comparatively small, the measurements had to be performed using high-precision tools to position the laser beam over the surface.
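Equation 4.9 for the quad-cell (segmented) PSD can be sketched directly. The segment labels follow Figure 4.3; the example currents are hypothetical:

```python
def segmented_psd_position(ia, ib, ic, id_):
    """Relative beam position on a four-segment (quad-cell) PSD, eq. 4.9."""
    total = ia + ib + ic + id_
    x = ((ib + id_) - (ia + ic)) / total
    y = ((ia + ib) - (ic + id_)) / total
    return x, y

# Illustrative: beam shifted toward segments B and D (positive X), centered in Y.
x, y = segmented_psd_position(0.2e-3, 0.3e-3, 0.2e-3, 0.3e-3)
```

Equal currents in all four segments would give (0, 0), i.e., a centered beam; here the function returns (0.2, 0.0).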
Therefore, a Signatone CAP-945 high-precision probe was used to achieve sufficient accuracy to calibrate the chip. The purpose of this board is simply to process the raw PSD sensor output so that the relative 2-D position of the beam on the chip's surface can be determined. The design can be structured into three functional stages. In the first stage, the output signals of the four PSD electrodes are amplified using operational amplifiers. The second and third stages perform the computations needed to determine the spot position, essentially implementing equation 4.9. In the second stage, summing amplifiers and differential amplifiers are employed to generate the input signals for the divider (third stage). A sample of a recommended circuit can be found in [32], as shown in Figure 4.4. Figure 4.4: Schematic of the circuit board used with segmented PSDs. 4.2.1 PSD Experimental Performance Results After carefully analyzing the circuit schematic, the processing board was designed and the component values were set. Therefore, the first experiments concerning the PSD sensor could be performed at this point. For this purpose, the same type of laser pointer used in the laser fixture on the actual robot was fixed on a Signatone CAP-945 computer aided probe (CAP), perpendicular to the PSD surface. This device allows the laser to be positioned in 3 DOF (x, y, z) over the PSD chip surface with high precision (accuracy 2.5 µm, repeatability 1 µm). The PSD chip was fixed together with the probe on a floating, shock-absorbing workbench to eliminate external influences such as operator movement and vibrations. After the setup was prepared, two different experiments were conducted with segmented PSDs. The first experiment was carried out to determine whether the positional output of the board depends on the direction from which the position is reached.
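The three analog stages of the processing board (transimpedance amplification, summing/differencing, division) can be mimicked digitally as a sanity check of the signal chain. This is a sketch of the computation only, not the actual circuit; the gain value is an assumption:

```python
def process_quadrant_signals(photocurrents, gain=1e5):
    """Digital model of the three-stage board for a quad-cell PSD.

    photocurrents: (IA, IB, IC, ID) in amperes, ordered as in Figure 4.3.
    """
    # Stage 1: op-amp amplification, current (A) -> voltage (V)
    va, vb, vc, vd = (gain * i for i in photocurrents)
    # Stage 2: summing and differential amplifiers (numerators and denominator)
    num_x = (vb + vd) - (va + vc)
    num_y = (va + vb) - (vc + vd)
    denom = va + vb + vc + vd
    # Stage 3: analog divider -> normalized position signals (eq. 4.9)
    return num_x / denom, num_y / denom

# A centered beam produces equal segment currents, hence a (0, 0) output.
x, y = process_quadrant_signals((1e-6, 1e-6, 1e-6, 1e-6))
```

Because the gain cancels in the division stage, the normalized output is independent of the amplification, which is one reason the ratio-based readout is robust against intensity fluctuations.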
Therefore, the laser was moved in a straight line over the PSD surface in both directions: first from bottom to top and then vice versa. The second experiment deals with the linearity of the output signals of the two positional channels (X, Y). In order to achieve results with sufficient positional resolution, the laser was moved over the chip surface in two cycles. In the first cycle, the probe was moved in straight lines parallel to the y-axis with a line spacing of approximately 0.2 mm; therefore, pseudo-continuous y-positions with discrete x-locations were recorded. In the second cycle, the y-position remained discrete in 0.2 mm steps while the x-position was altered continuously. Figure 4.5, along with Figure 4.6, shows the correlation between the segmented PSD's physical surface layout and the respective signal recorded during a sweep. The almost linear inclining sections at the beginning (0 to 800 µm) and end (6800 to 7600 µm) of the sweep are caused by the laser beam moving in and out of the photo-active area. Figure 4.5: Segmented PSD's physical surface layout. Figure 4.6: Plot of the signal recorded during a sweep. Except for the section close to the middle gap (3700 to 4200 µm), the sensor shows a rather linear output with respect to the change of beam position. As presented in Figure 4.6, the changes in signal can again be traced back to the design features of the sensor. The data sets presented were gathered while performing a y-axis sweep with an x-location slightly to the right of the middle gap. In the first part, a steady and almost linear incline in output voltage can be observed (position 0 to 800 µm). This can be associated with the beam approaching the photo-active area from the outside and slowly moving in the positive y-direction towards the middle. Once the peak is reached, the signal remains at nearly the same level while the location of the laser beam is steadily altered.
Therefore, it can be determined that the relationship between position and output is strongly non-linear in this section. As the beam traverses the middle gap section (3700 to 4200 µm), the output starts to decline. In this section, a very linear change of output with position can be observed. Since the change of amplitude in this region is considerably large, the positional resolution can be assumed to be at its best in this section. Once the beam has passed this section, the same pattern as before occurs: the output remains at almost the same level before declining as the laser beam reaches the edge of the active area (6800 to 7600 µm). Now that the general signal properties of the segmented PSD output have been discussed, the signal characteristics can be analyzed in more detail. A very important one is the occurrence of hysteresis effects. The results are presented in Figure 4.7, which shows the plot of two sweeps at identical positions on the segmented PSD but in opposite directions. Figure 4.7: Sweeps at identical positions but in opposite directions. Minor hysteresis effects can be found, especially in the first section (y-position 0 to 2000 µm). In the high-precision area around the middle gap, no significant deviation between the two recorded outputs (one moving in the negative, the other in the positive y-direction over the same part of the surface) is noticeable. Once this region is left, however, the signal recorded moving in the negative y-direction (red) shows a steady offset in the section close to the edge of the active area (2000 to 0 µm). Although similar experiments were performed for the lateral PSD, in this dissertation only the segmented PSDs are presented in detail, since the latter type was the one chosen. In general, we chose the segmented PSD because of its superior resolution and good repeatability.
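The hysteresis comparison described above can be quantified with a small helper. This is a sketch under the assumption that the reverse sweep is recorded at the same positions but in the opposite travel order; the sample values are hypothetical:

```python
def hysteresis(forward, reverse):
    """Maximum output deviation between a forward sweep and a reverse sweep
    taken at the same positions. The reverse sweep is recorded in opposite
    travel order, so it is flipped before the pointwise comparison."""
    return max(abs(f - r) for f, r in zip(forward, reversed(reverse)))

# Hypothetical sweep outputs (V) at three matching y-positions.
forward_sweep = [0.10, 0.50, 0.90]
reverse_sweep = [0.90, 0.52, 0.12]   # recorded top-to-bottom
h = hysteresis(forward_sweep, reverse_sweep)
```

With these numbers the largest pointwise deviation is 0.02 V, the kind of steady offset observed near the edge of the active area.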
4.3 Position Sensitive Calibration Device (PSCD) To implement the design of the controller, we first need accurate feedback so that the controller can drive the robot movements in the way required by the calibration method. After adequate feedback is provided and tested, the controller can be designed in Chapter 5. Figure 4.8: PSCD top geometrical design. Our proposed calibration system requires a laser beam to be shot into one PSD in such a way that the reflection lands on the other PSD. The idea behind this is to find a unique line based on these two accurately located points. Therefore, we employed two PSDs fixed with an angle between them to find such a unique line. As shown in Figure 4.8, the angle between them was chosen to be 120.00°. We found that more than 120.00° makes it difficult to obtain a good reflection from one PSD to the other, and less than 120.00° makes the system less accurate. The distances between the sensors, as well as the distances between the LEDs, were carefully designed and crafted for computational purposes. Internally, our device must be able to carry the signals from the PSDs to the computer in order to control the robot to the desired state. Figure 4.9 highlights the internal components of the device as well as the interaction between the device and the robot controller. Figure 4.9: PSCD internal components representation and DAS interaction. After the processing circuit board gets the raw data from the two sensors, the signals are taken from the data acquisition card by a wireless USB hub. The feedback produced by the device has around 47 ms of delay, which is acceptable for our application.
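The PSD-to-PSD reflection that the 120° geometry above relies on follows the standard specular-reflection law, r = d − 2(d·n)n. The following sketch is illustrative only (the normal orientation and incoming direction are assumptions, not the device's actual frames):

```python
import numpy as np

def reflect(d, n):
    """Specular reflection of direction d off a surface with unit normal n:
    r = d - 2 (d . n) n."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * (d @ n) * n

# Assume PSD1's surface normal is tilted 30 deg from vertical (one way the
# two sensor planes could enclose 120 deg) and the beam arrives along -z.
n1 = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
incoming = np.array([0.0, 0.0, -1.0])
outgoing = reflect(incoming, n1)
```

For this configuration the reflected ray leaves at 60° from vertical, i.e., along (sin 60°, 0, cos 60°), showing how the tilt redirects the beam toward the second sensor.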
Once the data reach the computer, we implement our PC-based controller through the graphical user interface so that the robot TCP moves to the desired position relative to the sensors. 4.3.1 Graphical User Interface (GUI) The graphical user interface was designed to provide a user-friendly representation of the calibration process. It shows the user option buttons and the real-time progress of the PSD-based localization servo, as shown in Figure 4.10. The computer-based controller and calibration algorithms (explained in detail in Chapters 5 and 6) are embedded in the designed GUI. Figure 4.10: Graphical user interface. Note: The text in this image is irrelevant. 4.3.2 Feedback Mapping and Testing Figure 4.11 shows the plot of the recorded sweep data along the x-axis (blue) and the estimated matching polynomial. The middle region shows an almost perfect match, but significant deviation occurs close to the edges on both sides. A change in x-location will change the y-signal even if the y-position is held constant. This is because the lines of equal potential have a cushion shape, i.e., the signal output for a constant y-location while changing the x-location is represented by a parabola. Figure 4.11: Plot of the recorded sweep data. Hence, the signals need to be decoupled first to achieve satisfactory mapping results. Therefore, a decoupling matrix DA is employed as follows:

[x′, y′]ᵀ = DA · [x, y]ᵀ    (4.10)

After the signals are decoupled, the mapping result has to be rotated around the origin (0, 0) to map the PSD axes onto the coordinate axes. Hence, a 2-D rotation matrix R is used as follows:

[x″, y″]ᵀ = [[cos θ, −sin θ], [sin θ, cos θ]] · [x′, y′]ᵀ = R · [x′, y′]ᵀ    (4.11)

This results in the final mapped position (x″, y″). Figure 4.12 shows a screenshot of the PC-interface software showing the trace of beam movement along a straight line in the x-direction (left) and y-direction (right).
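The two-step mapping of equations 4.10-4.11 can be sketched in a few lines. The decoupling matrix and rotation angle used below are placeholders (the actual values come from the sensor calibration):

```python
import numpy as np

def map_psd_reading(raw_xy, D_A, theta):
    """Apply eqs. 4.10-4.11: decouple the raw PSD signals with D_A,
    then rotate them by theta onto the device coordinate axes."""
    decoupled = D_A @ raw_xy                       # eq. 4.10
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])                        # eq. 4.11
    return R @ decoupled

# Illustrative values: identity decoupling and a 90-degree axis rotation.
p = map_psd_reading(np.array([1.0, 0.0]), np.eye(2), np.pi / 2)
```

With the identity decoupling matrix, a reading of (1, 0) rotated by 90° maps to (0, 1), confirming the axis reassignment.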
The trace shows the expected deviation at both ends of the signal, caused by the use of just a single mapping polynomial for the whole active PSD surface. Nevertheless, the trace follows the beam movement precisely in the middle of the chip. Figure 4.12: Trace of beam movements along the x and y directions. Figure 4.13 shows a trace of the beam moving in a square pattern. The small square trace (left), covering only the middle section of the active area, shows almost no deviation from the movement (straight lines, 90° angles), while the trace of the bigger square, covering the outside areas of the chip, shows the expected curvature and deviation. Figure 4.13: Trace of beam moving in a square pattern. Both tests also confirm once more how critical it is to use the center of the sensor in our approach to achieve high-accuracy positioning. 4.3.3 PSCD Feedback Experimental Results During the PSCD performance experiments, we essentially tested the ability of the developed device (after all components were assembled and mapped) to provide feedback to the PC-based controller. After proper calibration and mapping of the sensors were performed, we were able to determine the location of the laser beam with high accuracy at the nanoscopic scale. For the first stage of the experiment, the same type of laser pointer used in the laser fixture on the actual robot was fixed on a Signatone CAP-945 computer aided probe (CAP), perpendicular to the PSD surface, as shown in Figure 4.14. This device allows the laser beam to be placed in 3 DOF (x, y, z) over the PSD active surface with high precision (accuracy 2.5 µm, repeatability 1 µm). The second stage of the experiment deals with changing the orientation of the beam while simultaneously keeping the beam at the center of the first sensor. In this case, we used a Newport 481-A to control the orientation of the laser about this point. Figure 4.14: PSCD experimental setup. Figure 4.15 shows the results of the experiments.
The lower plane shows the feedback from PSD1, while the upper plane shows the feedback obtained from PSD2. For the first stage, we drew linear movements to show how accurately we can determine and control the laser spot over the sensor and eventually reach its center, which is ultimately the point we are looking for. After the beam reached the center of PSD1, we proceeded to control the orientation in order to find the center of PSD2. The ability of the PSCD to accurately carry the feedback information needed by the controlled system is verified by the results obtained from the GUI and presented in Figure 4.15. Figure 4.15: PSCD experimental performance results. Chapter 5 Robot Control The subject discussed in this chapter is among the most crucial elements of the proposed calibration system: the robot controller. During the calibration process, the controller is required to automatically move the robot TCP to a centered position over one of the two PSD sensors. A very similar problem is known in the field of robotics as visual servoing; a good introduction to this topic can be found in [14]. Afterwards, the reflection from the PSD sensor must also be controlled to simultaneously reach a centered position on the other sensor. This is a challenging problem that must be overcome in order to make the entire calibration system an automated and faster procedure. After adequate feedback is provided and tested, a higher level of control can be designed that satisfies the calibration process goals. First, a breakdown of the controller design will be discussed and presented. Simulation results will also be presented at the end of Chapter 5 to test the feasibility of the designed controller. 5.1 Controller Design Overview In order to address the problem, we used a blend of visual and PSD-based servo controllers divided into three stages.
All stages are meant to control the same robotic system; the difference lies in the type of errors determined by the different feedback sources. In the first stage, the robot TCP will be controlled by image-based visual servo control; a good piece of work related to this topic can be found in [33]. At this point, the control task is limited to finding a rough approximation of one of the two sensors as well as to determining the length of the laser line using the method described in [34], illustrated in Figure 5.1. Figure 5.1: Laser and CCD camera triangulation geometry for the cases h = hs, h > hs, and h < hs.
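The stage-one idea of driving an image-plane error to zero can be sketched as a minimal proportional servo loop. This is not the dissertation's actual controller (which maps image errors to TCP motion through camera calibration); the gain, pixel coordinates, and iteration count are assumptions:

```python
import numpy as np

def visual_servo_step(feature_px, target_px, gain=0.5):
    """One iteration of a proportional image-based servo: return a command
    proportional to the pixel error between the tracked feature (the laser
    spot) and the target (the PSD's location in the image)."""
    error = np.asarray(target_px, dtype=float) - np.asarray(feature_px, dtype=float)
    return gain * error

# Simulated loop: the spot converges toward the target pixel location.
spot = np.array([120.0, 40.0])
target = np.array([64.0, 64.0])
for _ in range(50):
    spot = spot + visual_servo_step(spot, target)
```

Because each step removes half of the remaining error, the spot converges geometrically to the target; in the real system this rough convergence only needs to place the beam within the PSD's active area, after which the PSD-based servo takes over.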