Michigan State University

This is to certify that the thesis entitled

HYBRID FORCE/POSITION CONTROL IN THE ROBOT TOOL SPACE

presented by Amit Yogesh Goradia has been accepted towards fulfillment of the requirements for the Master's degree in Electrical Engineering.
HYBRID FORCE/POSITION CONTROL IN THE ROBOT TOOL SPACE

By

Amit Yogesh Goradia

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE

Department of Electrical and Computer Engineering

2001

ABSTRACT

HYBRID FORCE/POSITION CONTROL IN THE ROBOT TOOL SPACE

By Amit Yogesh Goradia

The capability of performing constrained motion tasks is key to many robotic applications that involve material handling, milling, deburring, polishing and surface tracking. Many strategies for performing constrained motion control have been proposed. The major drawback of these strategies is that they do not allow the manipulator to interact with poorly structured or unknown environments, because under these strategies the interaction forces between the end-effector and the workpiece cannot be controlled in arbitrary directions.

This thesis presents the design of a new controller, implemented in the robot tool space, that can control the interaction forces in arbitrary directions. Using the proposed controller, the manipulator can interact with workpieces whose shapes are not pre-specified. Thus tasks like milling, deburring and polishing, which involve constrained motion control of the manipulator, can be performed on a wide variety of workpieces without explicit knowledge of the shape of the workpiece. The design and proof of stability for this tool space hybrid force/position controller are presented, together with a comparison of this new tool space hybrid force/position controller with the existing task space hybrid force/position controller. Simulation studies for the task of tracking unknown surfaces are also presented.

Copyright by Amit Yogesh Goradia 2001

To my family

ACKNOWLEDGEMENTS

Foremost, I would like to express my deep respect and sincere gratitude to my advisor Dr. Ning Xi, without whom this work would not have been possible.
This work is a result of his keen insight, knowledge, guidance, perseverance, encouragement and support. I would like to thank Jindong Tan from the Robotics and Automation Laboratory for his invaluable help. The discussions with him contributed substantially to my work and broadened my perspective. I would also like to thank my committee members Dr. H. Khalil and Dr. R. L. Tummala for their time and valuable comments. I would like to thank Aluel Go from the Department of Agricultural Engineering, MSU, for his continued support during the period of this research. I am grateful to my friends and colleagues from the Robotics and Automation Laboratory.

I dedicate this work to my family for their continuous emotional support: to my parents, for believing in me; to my sister-in-law, for everything she has done; and finally to my brother, who set the standards for me, shared all my successes and failures and stood by me through everything. Without their love and support this work would not have been possible.
Amit Yogesh Goradia
Michigan State University, East Lansing
August 2001

TABLE OF CONTENTS

List of Figures

Chapter 1
1.1 Introduction
1.2 Organization of this Thesis

Chapter 2
2.1 Understanding the Problem
2.2 Review of Related Works

Chapter 3
3.1 The Manipulator Dynamics Model  17
3.2 Computed Torque Control  18
3.3 Task Space Control  22
3.4 Tool Space Control  26
3.5 Relationship between the Task Space and the Tool Space  33
3.5.1 Position error in the tool frame  34
3.5.2 Velocity error in the tool frame  36

Chapter 4
4.1 Hybrid Force/Position Control  39
4.2 Hybrid Force/Position Control in the Task Space  42
4.3 Hybrid Force/Position Control in the Tool Space  46
4.4 Stability Analysis of the Tool Space Controller  52

Chapter 5
5.1 Simulation Studies - Setup  59
5.2 Position Control Experiments  60
5.3 Hybrid Force/Position Control Experiments  67
5.4 Tracking a Curve in Two Dimensions  79
5.5 Tracking a Curve in Three Dimensions  89

Chapter 6
6.1 Summary  95
6.2 Conclusion and Future Work  96

Appendix A  99
Appendix B  110
References  115

LIST OF FIGURES

Figure 1.1: A robot in contact with a surface of arbitrary shape
Figure 2.1: The robot in contact with the environment
Figure 3.1: The robot arm, controller and planner  18
Figure 3.2: Joint space computed torque controller  20
Figure 3.3: The task space computed torque controller for position control  24
Figure 3.4: The robot task frame and the tool frame  27
Figure 3.5: The tool space computed torque controller  31
Figure 3.6: The relationship between the task space and tool space frames of reference  34
Figure 4.1: Hybrid force/position controller  41
Figure 4.2: Task space hybrid force/position controller  45
Figure 4.3: The tool space hybrid force/position controller  51
Figure 5.1: The actual and desired position and orientation for task level position control  63
Figure 5.2: The actual and desired velocities for task level position control  63
Figure 5.3: The error in position for task level position control
Figure 5.4: The error in velocity for task level position control
Figure 5.5: The actual and desired positions for the tool space controller expressed in the task space reference frame  65
Figure 5.6: The actual and desired velocities for the tool space controller expressed in the task space reference frame  65
Figure 5.7: The errors in position for the tool space position controller expressed in the task coordinate frame  66
Figure 5.8: The errors in velocity for the tool space position controller expressed in the task coordinate frame  66
Figure 5.9: Actual and desired positions for task space hybrid force/position control  69
Figure 5.10: Actual and desired velocities for task space hybrid force/position control  69
Figure 5.11: Error in position for task space hybrid force/position control  70
Figure 5.12: Error in velocity for task space hybrid force/position control  70
Figure 5.13: The actual and desired force profiles for the task space hybrid force/position controller  71
Figure 5.14: The error in the force profiles for the task space hybrid force/position controller  71
Figure 5.15: The actual and desired positions, expressed in the task space, for the tool space hybrid force/position controller  74
Figure 5.16: The actual and desired velocities, expressed in the task space, for the tool space hybrid force/position controller  74
Figure 5.17: The position errors expressed in the task space for the tool space hybrid force/position controller  75
Figure 5.18: The errors in velocity, expressed in the task space, for the tool space hybrid force/position controller  75
Figure 5.19: Position errors expressed in the Cartesian tool space for the tool space hybrid force/position controller  76
Figure 5.20: Velocity errors expressed in the Cartesian tool space for the tool space hybrid force/position controller  76
Figure 5.21: The actual and desired forces expressed in the task space for the tool space hybrid force/position controller  77
Figure 5.22: The actual and desired forces expressed in the tool space frame of reference for the tool space hybrid force/position controller  77
Figure 5.23: The force error expressed in the tool space frame of reference for the tool space hybrid force/position controller  78
Figure 5.24: The cylindrical parabolic surface to be tracked  82
Figure 5.25: The actual positions and orientations of the manipulator  83
Figure 5.26: The interaction force expressed in the task space frame of reference  83
Figure 5.27: The interaction force expressed in the tool space frame of reference  84
Figure 5.28: The surface profile and the actual profile of the end-effector location in the x-z plane in the task space  84
Figure 5.29: The task space x, y and z components of the surface normal and the z-axis of the end-effector frame  85
Figure 5.30: The actual position of the end-effector expressed in the task space frame of reference  87
Figure 5.31: The interaction force expressed in the task space frame of reference  87
Figure 5.32: The interaction force expressed in the tool space frame of reference  88
Figure 5.33: The spherical surface that is to be tracked  91
Figure 5.34: The actual and desired positions and orientations expressed in the task space frame of reference  92
Figure 5.35: The actual interaction force expressed in the task space frame of reference  92
Figure 5.36: The actual and desired interaction forces expressed in the tool space frame of reference  93
Figure 5.37: The metric m = x² + y² + z²  93
Figure 5.38: The task space x, y and z components of the surface normal and the z-axis of the end-effector frame  94

Chapter 1

1.1 Introduction

Robotics is an exciting area that encourages many engineers to study the synthesis of some aspects of human function by the use of mechanisms, sensors, actuators and computers. There are many fictional representations of futuristic robots that we see in movies, on TV and in books. These robots are attributed excessive power and exceptional intelligence, and they even look like human beings.
They are expected to work for human society in order to relieve humans from low-level and heavy-duty jobs. After many years of investigation, people began to realize that human beings are so perfectly created that it is almost impossible to exactly simulate them. The evolution of robotics is closely tied to the development of the technology of controls, sensors and computers. Robot control strategies have evolved from joint level control to task level control and from a non-model-based approach to a model-based approach. Recently there have been many advances in the field of robot force control and in the development of computation and sensing technology. Given these technological advances, robot force control may eventually replace classical joint level control in order to increase productivity by increasing operational speed and system flexibility.

The use of robots for performing industrial tasks has emerged as a major means for implementing industrial automation to achieve the ultimate goal of enhancing productivity and product quality. The industrial robots of the early 1980s were mostly used in relatively simple operations like machine tending, material handling, painting and welding. These robots were basically position-controlled mechanisms utilizing error-driven, simple independent joint control laws and did not use a model-based component at all. However, to utilize the robots for more labor-intensive applications, e.g., milling, deburring, polishing and surface tracking, more sophisticated control capabilities with a variety of sensors are required. Recent applications of robots require the robot to come in contact with its environment. The environment can be either static (i.e., fixed) or dynamic (i.e., moving) or a combination of both.
The capability of the robot to perform industrial tasks, such as milling, deburring and polishing, that involve contact with the workpiece depends on the ability of the controller to control the interaction forces between the robot and the environment. The ability to control the interaction forces could be achieved by means of extremely precise position control in a pre-specified environment with very low tolerance. Such a scenario could only be true for either laboratory experiments or hard automation tasks where the robot is dedicated to a specific task and jigs with extremely high precision have been designed for the task. Also, for such a control strategy the component part tolerances must be maintained at sufficiently low levels, which most likely adds to the cost of the final product. For soft automation tasks performed in uncertain environments, position control alone cannot guarantee the ability to control the interaction forces. Also, the description of assembly tasks in terms of position-controlled motion is unnatural. Force control is a natural control strategy for the assembly of a large class of products with low part tolerances.

The ability to control the force exerted by the robot in any arbitrary direction is very important when dealing with tasks that require contact of the robot with poorly structured or unknown workpieces (environments). Force controllers implemented in the literature do not provide the freedom to control the force applied by the robot in any arbitrary direction. Also, for these controllers an accurate knowledge of the environment is needed for specifying the constraint frame. Thus, tasks like normal surface tracking, milling and deburring cannot be performed on objects that are not pre-specified. For example, Figure 1.1 shows a robot in contact with a surface whose exact shape is not known a priori.

Figure 1.1: A robot in contact with a surface of arbitrary shape. (Courtesy Dr. N.
Xi, http://www.egr.msu.edu/~xin)

This thesis presents a new design of a controller that is implemented in a non-inertial (i.e., moving) frame. This controller provides for controlling the force exerted by the robot in any direction while simultaneously controlling the position of the robot in the remaining directions. The design also accounts for the lack of knowledge about the exact shape of the object. Thus, the robot can interact with unknown environments and with workpieces whose shape is not exactly known. A potential application of this controller would be in determining the exact shape of an unknown surface. It can also be used in milling, deburring, painting and welding of surfaces whose exact shape is not known a priori. This would save a lot of time in finding out the exact nature of the surface and in planning the robot path along the surface. Also, the workpiece can be changed instantly, without much downtime needed to plan for a new workpiece. This controller can also eliminate the need for the expensive, object-specific jigs required for hard automation. Another advantage is that larger part tolerances can be accepted, which will reduce the cost of the final product.

1.2 Organization of this Thesis

The scope of this thesis is to investigate the implementation of a force control strategy by means of which the force applied by the robot can be controlled in any arbitrary direction. This work discusses the development of a robot controller, implemented in the robot tool space, which can satisfy the above requirement. The discussion is divided into six chapters. In Chapter 2 the mechanism of the problem is explained in detail and a review of related works is presented. Chapter 3 and Chapter 4 present the major contributions of this research. In Chapter 3, the concept of pure position control in the robot tool space is discussed. First the robot dynamics model is given.
Then joint space and task space computed torque controllers are developed. Next a tool space controller is introduced and the relationships between the positions and velocities in the task and tool spaces are explained in detail. Chapter 4 introduces the concept of hybrid force/position control in the task space. This controller is not robust and has limited applicability. Thus, in order to overcome these drawbacks, the concept of hybrid force/position control in the robot tool space is developed. Stability analyses of a position controller in the tool space and of the proposed tool space hybrid force/position controller are presented. Simulation studies are presented in Chapter 5. First, position control experiments for the task space controller and the tool space controller are presented. For the task of moving along a horizontal constrained surface while applying a constant normal force to the surface, the task space hybrid force/position controller is compared with the tool space hybrid force/position controller. Next, the tool space hybrid force/position controller is used to track an unknown two-dimensional trajectory while simultaneously regulating the force normal to the trajectory. Lastly, the tool space hybrid force/position controller is used to track an unknown three-dimensional trajectory. Chapter 6 first presents a summary of the work presented and then presents the conclusions and future work. Appendix A presents the definition of Euler angles and the avoidance of singularities associated with the definition of the orientation representation. Appendix B presents the derivation of the tool Jacobian and its derivative, which are used to decouple the tool space axes.

Chapter 2

2.1 Understanding the Problem

Before the manipulator comes in contact with the environment, the environment and the manipulator are two separate systems with no coupling between them.
As soon as the manipulator comes in contact with the environment, the end-effector and the environment are pushed together and their surfaces deform. As a result, reaction forces are generated. The reaction forces, depending upon the control force and the dynamics of the system, now couple the originally separated systems. These reaction forces arise from the stiffness or admittance of the environment and the manipulator, which in most cases is uncertain and nonlinear. Thus the coupled system, comprising the manipulator and the environment, has uncertain and nonlinear dynamics. Figure 2.1 shows the robot in contact with the environment. It also identifies the task and the tool frames of reference.

Previous research studies on force control mostly deal with tasks where the shape of the environment is exactly known. Uncertainty in the shape of the environment is not taken into account. In these cases particular strategies for task formulation can be developed prior to the execution of the task. Thus, the direction in which force is to be controlled can be aligned to the task coordinate system. However, this sort of strategy formulation is precluded in the case where the shape of the environment is not exactly known.

Figure 2.1: The robot in contact with the environment.

Many industrial tasks require the manipulator to handle various sizes and shapes of workpieces (objects), which, in many cases, are not known during the design of the controller. For these reasons a generalized control algorithm must be developed which can deal with workpieces whose shape is not known a priori. One solution to the aforementioned problem is to develop a controller that can control the interaction force in any arbitrary direction. An approach to constrained robot motion control is the hybrid force/position control strategy proposed by Raibert and Craig [4]. This strategy identifies the degrees of freedom in which the force and position are controlled individually.
The directions in which the force and position are to be controlled must be orthogonal to each other. This method is conceptually much clearer than many other methods, as it allows for the design of the control law in the same reference frame used to describe the manipulator. Thus, its implementation is more practically feasible. However, the force can be controlled only along the directions of the reference frame used to describe the manipulator. Therefore this strategy does not allow for the implementation of a general control algorithm by means of which the manipulator can interact with unknown objects. The objective of this research work is to expand on the concept of hybrid force/position control in such a manner that the manipulator can interact with objects of unknown shape and perform contour tracking tasks on such objects. In order to attain this objective the manipulator must be able to control the applied force in any arbitrary direction that is not necessarily aligned with the reference frame used to describe the manipulator.

2.2 Review of Related Works

Traditional academic study of robot arm control deals with motion in free space with no contact with the environment. As the application area of robot manipulators expanded to include such sophisticated tasks as assembly, polishing, deburring, etc., the necessity of controlling the interaction forces as well as the position simultaneously was recognized more strongly. Historically, robot force control first began with remote manipulator and artificial arm control in the 1950s and 1960s. These systems were controlled in "natural" ways and lacked task understanding and strategy. With the advent of computer controls, various approaches to the creation of strategies for application to force feedback emerged. However, all the approaches depended on people to formulate the details, and the systems that were tested experimentally encountered the stability problem.
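The orthogonal partitioning that underlies the Raibert-Craig scheme discussed above can be illustrated with a small numerical sketch. The selection matrix S, the gain values and the function name below are illustrative assumptions, not taken from the thesis or from any particular implementation:

```python
import numpy as np

# Hypothetical sketch of the hybrid force/position partitioning idea: a
# diagonal selection matrix S picks the position-controlled directions and
# (I - S) the complementary, force-controlled ones, so the two corrections
# act in orthogonal subspaces.
def hybrid_command(S, pos_error, force_error, Kp=1.0, Kf=0.5):
    """Combine position and force corrections along complementary axes."""
    S = np.diag(S)                 # 1 -> position-controlled axis, 0 -> force
    I = np.eye(S.shape[0])
    return S @ (Kp * pos_error) + (I - S) @ (Kf * force_error)

# Example: x and y are position-controlled, z (the surface normal in this
# illustrative setup) is force-controlled.
u = hybrid_command(np.array([1, 1, 0]),
                   pos_error=np.array([0.02, -0.01, 0.5]),
                   force_error=np.array([3.0, 3.0, -1.0]))
# The z component of u comes only from the force loop; x and y come only
# from the position loop.
```

Note that the selection matrix is fixed to the axes of the chosen reference frame, which is exactly the limitation discussed above: force can only be commanded along those predefined directions.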
The problem of stability was resolved by using sophisticated control algorithms, faster computation and flexible sensors. In the 1950s, Goertz [2] developed mechanical master-slave manipulators with force reflection for radioactive hot lab work. The operator guided the master with his hand and felt the contact forces experienced by the slave, which were reflected through the joints of both devices. In the 1960s, Rothchild and Mann [3] led the development of a force feedback artificial arm for amputees. In the 1960s and early 1970s, the human operator was replaced by a computer. However, in implementing this transition two major problems emerged: (1) how to structure multi-axis arm systems that sense forces and use them to modify position commands, and (2) how to relate the requirements of a task to the motions required and the forces anticipated so that force motion strategies could be formulated. Since then the main approaches to force control have been [1]:

Logical branching feedback method (Ernst 1961): The essence of this method is a set of discrete moves terminated by a discrete event such as contact. IF-THEN logic is used in the control strategy.

Continuous feedback method (Groome 1972): It was applied to assembly and edge following tasks to maintain continuous contact. It is an early effort in continuous force feedback.

Damping method (Whitney 1977, Paul and Shimano 1976): It is an integrating controller in which sensed forces give rise to velocity modification.

Position methods: Active compliance (Salisbury 1980): A desired force is calculated based on the difference between the desired and actual hand position. Passive compliance (Watson 1976): The end-effector is equipped with a passive mechanical device composed of springs and dampers. It is capable of quick responses and is relatively inexpensive, but the application of such devices is limited to very specific tasks.
Impedance control (Hogan 1980) [5] [6]: An attempt to control the dynamic interaction between the manipulator and the environment. The dynamic relationship between force and position is controlled instead of pure force or position. The control structure is simple and the performance is robust. The drawback is the limitation of its dynamic performance: the output force cannot be regulated unless an accurate environmental model can be obtained. A. Hace et al. [7] developed a robust impedance controller design based on a chattering-free continuous sliding mode.

Explicit force control (Nevins and Whitney 1973): This method employs a desired force input instead of a position or velocity input.

Implicit force control (Borrel 1979): There is no force sensor used for sensing the force. Instead, the joint servo gains are adjusted to impart a particular stiffness matrix to the robot arm.

Hybrid force/position control (Raibert and Craig 1981 [4]; Mason 1981 [8]): It is one of the most popular force control schemes nowadays. It partitions the force control problem using a set of position and force constraints depending on the mechanical and geometric characteristics of the performed task. Position constraints exist along the normal to the surface, where the presence of the surface constrains the range of motion. Force constraints exist along surface tangents, where it is impossible to apply arbitrary levels of force. A compliance selection matrix is used to determine the force-controlled directions and the position-controlled directions. Various force control methods can be used in the force-controlled direction; the one most frequently used is explicit force control. Hybrid force/position control has given rise to many research works, which build on the same original concept of partitioning the space into two orthogonal, force-controlled and position-controlled subspaces [9] [10] [11]. In [12] R. Anderson and M.
Spong proposed a hybrid impedance controller, which combines the hybrid control approach with the impedance control approach. [13] presents a general first-order kinematic model of frictionless rigid-body contact for use in hybrid force/position control. These more general kinematics can be used to model tasks that cannot be described using the Raibert-Craig model, such as maintaining contact at two points with skew, non-intersecting contact normals.

In recent years force control has been studied intensively. Various control methods have been proposed for constrained motion control. These include dynamic hybrid position/force control [14], the operational space approach [15], compliant motion control [16] [17], object impedance control [18], hybrid control considering motor dynamics [19] and adaptive force control [20] [21] [22]. In the dynamic hybrid position/force control approach proposed by Tsuneo Yoshikawa [14] the dynamics of the manipulator are taken into consideration. The constraints on the end-effector of the manipulator are described by a set of constraint hypersurfaces. It takes into account the end-effector coming into contact and moving along an arbitrary but known surface, though it does not deal with movement along an unknown surface. In the operational space approach proposed by Khatib [15], generalized task specification matrices Ω and Ω̄ are defined, which act on vectors described in the base frame of reference. The matrix Ω specifies, with respect to the base frame of reference, the directions of motion (displacements and rotations) of the end-effector. Forces and moments are to be applied in or about directions that are orthogonal to these motion directions; the matrix Ω̄ specifies these directions. The end-effector equations of motion are developed in the operational space and a control algorithm for the free and constrained motion modes is given. This formulation proposed a unified approach for control of constrained manipulators.
But it did not address the issue of the manipulator coming into contact with an unknown environment. In order to adopt this approach, the exact nature of the task at hand must be known in advance and the task specification matrices must be defined prior to the execution phase. In [29] Bona and Indri extended the operational space approach in order to achieve exact decoupling of the force and position control along the task subspaces.

[50], [56] and [51] point out the fallacy of the theoretical formulation of hybrid force/position control that is based on the concept of the position- and force-controlled subspaces being orthogonal. Duffy [50] showed that the orthogonality is not invariant under the choice of coordinate frames and that the conventional formulation therefore lacks physical meaning. [56] shows that the end-effector velocities and forces are not vectors but screws represented in axis and ray coordinates; an approach to the decomposition of the manipulator dynamics based on screw reciprocity is then presented. In [51] Vukobratovic presents the concept of a new hybrid controller based on the dynamic nature of the environment and non-holonomic constraints.

Few research works have focused on performing simultaneous position and force control on unknown environments while identifying their geometry at the same time [30]. In [31] a theoretical framework for the problem of exploring the environment and updating a nominal model is formulated. The problem of improving, by repeated experiments, the performance of a force control scheme over an unknown object is investigated in [32]. In [33] a hybrid velocity/force control scheme is described that employs local knowledge of the environment shape, obtained by sensing. [55] discusses the use of the physical concepts of reciprocity and consistency to identify the motion constraints. A simple yet interesting approach to surface tracking tasks on unknown objects using the C-surface concept is presented in [49].
It employs a task space hybrid force/position controller. In order to control the force along arbitrary directions, the task space axis definitions are changed rapidly in order to align them with the C-surface axes. The drawback of this method is that it cannot be used for surfaces with large curvature. Also, stability of the manipulator system during the switching is not guaranteed.

Many researchers have addressed the inherent stability and other problems associated with force control. Kinematic and dynamic stability issues are discussed in [34] and [35]. [34] discussed the effect of the kinematic coordinate transformations on stability. It also showed that the hybrid force/position control method of Raibert and Craig induced instabilities for revolute manipulators, whereas other force control methods, such as stiffness control, resolved acceleration control and operational space methods, do not induce such instabilities. In [36] it was shown that the J^{-1}SJ term was the cause of the kinematic singularity pointed out in [34], and a sufficient stability condition was derived for the hybrid force/position controller. In [35] the dynamic stability issues related to force control are discussed. It is shown that the robot system tends to become unstable when in contact with stiff environments, due mainly to the high-gain nature of the wrist force sensor feedback. In [16] a nonlinear approach for the stability analysis of compliant motion control was discussed and a bound for stable manipulation was derived. [37] provides an in-depth analysis of position and force control of robot manipulators, and a stability analysis for robot manipulators under the influence of external forces is presented. The effects of friction and stiction on force control of manipulators are discussed in [38]. In [39] asymptotic stability issues associated with hybrid force/position control are addressed.
[40] discusses the stability of impedance control and [41] addresses the problem of Cartesian compliance. Dynamic problems are discussed in [42]. [43] [54] and [44] discuss the important considerations in implementation and the bandwidth problems. Research work based on the aforementioned formulations were presented in [29], [45], [46], [47] and [48]. Many researchers have studied the control of phase transition (from free motion mode to constrained motion mode) and various schemes have been proposed. Approaches to impact control include generalized dynamical systems [23], discontinuous 15 control [25], [26], [11], [27], adaptive force control [24] and jump impact control [28] and positive acceleration feedback control [52] and [53]. The existing schemes of force control can be basically divided into two main categories. The first category is impedance control. It provides a stable uniform control structure for free as well as constrained motion of the manipulator. The problem with impedance control is that in order to regulate the output force an exact model of the environment must be known and integrated into the motion plan. The second category is hybrid control, which is a discontinuous scheme. In this scheme the output force can be controlled without an accurate knowledge of the environment but kinematic and dynamic stability issues must be addressed. 16 Chapter 3 3.1 The Manipulator Dynamics Model The mathematical formulation of the equations of motion of an n degree of. freedom robot arm, i.e., the robot arm (manipulator) dynamics, can be expressed as a set of ordinary differential equations ([57], [58] and [59]). These ordinary differential equations are derived using the Lagrange-Euler formulation for the equations of motion of a rigid body. The manipulator dynamics, in general can be written as: D(q)éj+C(q.q)+G(q)+J’f=r. 
(3.1.1)

y = H(q), (3.1.2)

where \tau is the n \times 1 vector of the torques applied to the individual joints, q \in \mathbb{R}^n is the n \times 1 vector of the joint angles, and \dot{q}, \ddot{q} \in \mathbb{R}^n are the n \times 1 joint space velocity and acceleration vectors, respectively. D(q) is the n \times n inertia matrix, C(q,\dot{q}) is the n \times 1 vector of the centripetal and Coriolis forces and G(q) is the n \times 1 vector of gravity terms. y is the 6 \times 1 vector of task space positions. H(q) is the forward kinematics, i.e., it expresses the task space positions, y, in terms of the joint angles, q. J(q) is the 6 \times n Jacobian relating the task space velocities to the joint space velocities, i.e., \dot{y} = J\dot{q}. f is the 6 \times 1 output force and moment vector in the task space.

3.2 Computed Torque Control

Figure 3.1: The robot arm, controller and planner.

Figure 3.1 shows the basic diagram of the robot and the controller. The input to the robot is the torque, \tau, and the outputs are the joint position and velocity, the Cartesian position and velocity and the output force. The controller maps the error signal input to the output torque. The planner generates the desired trajectory that is used to calculate the error signal, which in turn drives the controller. The computed torque technique is basically a feedforward control scheme with feedback components. The feedforward control components compensate for the interaction forces between all the joints of the manipulator, and the feedback component computes the correction torques to compensate for any deviations from the desired trajectory. Given the Lagrange-Euler formulation for the equations of motion of a manipulator, the control problem can be formulated as: "Find the appropriate motor torques to servo all the joints of the manipulator in real time in order to track a time-based trajectory as closely as possible."
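To make the structure of the dynamics model in Equation 3.1.1 concrete, the following sketch (not from the thesis) evaluates each term for a hypothetical single-link arm; the mass m, link length l and the assumption that the tip force acts normal to the link are all illustrative choices.

```python
import numpy as np

# Hypothetical single-link arm, used only to illustrate the structure of
# Equation 3.1.1: tau = D(q)*qdd + C(q, qd) + G(q) + J^T f.
m, l, g = 1.0, 0.5, 9.81   # assumed mass [kg], length [m], gravity [m/s^2]

def inverse_dynamics(q, qd, qdd, f=0.0):
    D = m * l**2                 # inertia term (scalar for one joint)
    C = 0.0                      # no centripetal/Coriolis terms with one joint
    G = m * g * l * np.cos(q)    # gravity torque
    Jt_f = l * f                 # J^T f for a tip force normal to the link (assumed)
    return D * qdd + C + G + Jt_f

# With the link horizontal (q = 0) and no motion or contact force,
# the motor torque must exactly cancel gravity.
tau = inverse_dynamics(q=0.0, qd=0.0, qdd=0.0)
print(tau)  # m*g*l = 4.905
```

The same additive structure (inertia, Coriolis/centripetal, gravity, contact) carries over to the n-joint case with matrix-valued terms.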
The individual motor torques are computed with the dynamic model of the manipulator using an estimate of the parameters in the model. Errors in position and velocity (the time derivative of position) are fed back to the controller in order to compensate for modeling errors and parameter variations in the model. The structure of the control law has the form (as given in [53], [60]):

\tau = \hat{D}(q)v + \hat{C}(q,\dot{q}) + \hat{G}(q) + J^T f, (3.2.1)

v = \ddot{q}_{des} + K_v(\dot{q}_{des} - \dot{q}) + K_p(q_{des} - q). (3.2.2)

Here v is the vector of auxiliary inputs, and q_{des}, \dot{q}_{des}, \ddot{q}_{des} collectively represent the robot task expressed in the joint space, i.e., the desired trajectory in the joint space. K_p and K_v represent the feedback position and velocity gain matrices, respectively. Figure 3.2 shows the diagram of the joint space computed torque controller. The torque \tau, which is output by the controller, is the input to the manipulator. The planner generates the desired trajectory, which is then used to calculate the error signal that drives the controller.

Figure 3.2: Joint space computed torque controller.

Substituting the computed torque, \tau, in the dynamics equation of the manipulator (Equation 3.1.1),

D(q)\ddot{q} + C(q,\dot{q}) + G(q) = \hat{D}(q)(\ddot{q}_{des} + K_v(\dot{q}_{des} - \dot{q}) + K_p(q_{des} - q)) + \hat{C}(q,\dot{q}) + \hat{G}(q).

Assuming that the parameters of the manipulator can be accurately estimated, i.e., \hat{D}(q), \hat{C}(q,\dot{q}) and \hat{G}(q) are equal to D(q), C(q,\dot{q}) and G(q), respectively, the above equation simplifies to:

D(q)((\ddot{q}_{des} - \ddot{q}) + K_v(\dot{q}_{des} - \dot{q}) + K_p(q_{des} - q)) = 0. (3.2.3)

Now D(q) is the inertia matrix and is, by definition, nonsingular. The error dynamics of the combined system comprised of the manipulator and the controller can be formed as:

\ddot{e} + K_v\dot{e} + K_p e = 0, (3.2.4)

where the joint space error e is expressed as e = q_{des} - q.
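The computed torque law of Equations 3.2.1 and 3.2.2 can be sketched as follows; this is a minimal illustration, not the thesis implementation, and the model estimates D_hat, C_hat, G_hat are hypothetical callables supplied by the user.

```python
import numpy as np

# Sketch of the computed torque law (Equations 3.2.1-3.2.2) for an
# n-joint arm.  D_hat, C_hat, G_hat are model estimates (assumed names).
def computed_torque(q, qd, q_des, qd_des, qdd_des, Kp, Kv,
                    D_hat, C_hat, G_hat, Jt_f):
    # auxiliary input: desired acceleration plus PD correction (3.2.2)
    v = qdd_des + Kv @ (qd_des - qd) + Kp @ (q_des - q)
    # feedforward dynamics compensation plus contact-force term (3.2.1)
    return D_hat(q) @ v + C_hat(q, qd) + G_hat(q) + Jt_f

# Example with a trivial 2-joint "arm" whose inertia is the identity and
# which feels no gravity, Coriolis or contact forces.
I2 = np.eye(2)
tau = computed_torque(
    q=np.zeros(2), qd=np.zeros(2),
    q_des=np.array([0.1, -0.2]), qd_des=np.zeros(2), qdd_des=np.zeros(2),
    Kp=100 * I2, Kv=20 * I2,
    D_hat=lambda q: I2, C_hat=lambda q, qd: np.zeros(2),
    G_hat=lambda q: np.zeros(2), Jt_f=np.zeros(2))
print(tau)  # only the Kp term is active: [10., -20.]
```

With perfect model estimates the closed loop reduces to the linear error dynamics of Equation 3.2.4, which is what makes the gain selection below straightforward.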
Expressing the above equation in state space representation and taking the state vector to be x = [e\ \dot{e}]^T, Equation 3.2.4 can be written as:

\dot{x} = \begin{bmatrix} 0 & I \\ -K_p & -K_v \end{bmatrix} x. (3.2.5)

The above Equation 3.2.5 can be written as \dot{x} = Ax. By selecting appropriate values of the position gain, K_p, and the velocity gain, K_v, it can be ensured that the eigenvalues of the matrix A are located in the left half plane. This would in turn ensure that A is stable. Therefore the above error system is locally asymptotically stable for an appropriate choice of the gain matrices K_p and K_v, i.e., the error e asymptotically tends to zero. This means that, irrespective of the manipulator configuration, the controller system (Equations 3.2.1 and 3.2.2) can track any reasonable trajectories specified as q_{des}, \dot{q}_{des} and \ddot{q}_{des}.

3.3 Task Space Control

In the computed torque joint space controller described in the previous section, the task to be performed (the manipulator trajectories) has to be specified in the joint space (the generalized coordinate space). However, a robot task is generally represented by desired end-effector positions and orientations, and their time derivatives, in the Cartesian coordinate frame. Thus, in order to use the joint space controller developed in the preceding section, inverse kinematics of the desired world space (Cartesian space) trajectories would have to be computed in order to get the desired trajectories in the joint space. This would be quite tedious. Instead, the manipulator dynamics and the desired trajectories could be expressed directly in the task space, and the error feedback terms for the control law thereby computed in the world space (task space) coordinates (as is done in [61]). Let y \in \mathbb{R}^6 be a task space vector defined by y = (x, y, z, O, A, T)^T, where (x, y, z)^T denotes the position of the end-effector in the task space and (O, A, T)^T denotes an orientation representation (Orientation, Attitude, Tool angles) [62].
The relationship between the joint space position variable q and the task space position variable y can be represented by y = H(q). The velocity in the joint space can be related to the velocity in the task space with the help of a Jacobian J_O as:

\dot{y} = J_O(q)\dot{q}, (3.3.1)

where J_O is called the OAT Jacobian matrix. Using the above equation, the relationship between the accelerations in the two spaces can be expressed as:

\ddot{y} = J_O\ddot{q} + \dot{J}_O\dot{q}. (3.3.2)

The dynamics model of the manipulator expressed in terms of the task space variables can now be described as:

D(q)J_O^{-1}(\ddot{y} - \dot{J}_O\dot{q}) + C(q,\dot{q}) + G(q) + J^T f = \tau. (3.3.3)

The nonlinear feedback control law can now be formulated in the task space as (from [53]):

\tau = D(q)J_O^{+}(v - \dot{J}_O\dot{q}) + C(q,\dot{q}) + G(q) + J^T f, (3.3.4)

where J_O^{+} is the pseudo-inverse of the OAT Jacobian matrix J_O. If n = 6, J_O is a square matrix, and J_O^{+} can be replaced by J_O^{-1}. Applying this to the dynamics model of the manipulator, the manipulator system can be linearized and decoupled as:

\ddot{y} = v, (3.3.5)

where v is the vector of auxiliary inputs. A proportional-derivative (PD) control law for the controller can now be formulated as in [53]:

v = \ddot{y}_{des} + K_v(\dot{y}_{des} - \dot{y}) + K_p(y_{des} - y). (3.3.6)

Figure 3.3 shows the diagram of the task space position controller. The trajectory planner generates a desired trajectory in the task space that is used to generate the task space position and velocity errors. The controller uses these errors to generate the joint torques \tau. Note that the inverse of the OAT Jacobian matrix, J_O^{-1}, is used by the controller to decouple the task space directions.

Figure 3.3: The task space computed torque controller for position control.

Using the dynamics equation of the manipulator (Equation 3.1.1), the combined linearized and decoupled system of the controller and manipulator can now be expressed as:

(\ddot{y}_{des} - \ddot{y}) + K_v(\dot{y}_{des} - \dot{y}) + K_p(y_{des} - y) = 0. (3.3.7)

Now, if the error in the task space is denoted as:

e_y = y_{des} - y, (3.3.8)

the above Equation 3.3.7 can be expressed as:

\ddot{e}_y + K_v\dot{e}_y + K_p e_y = 0. (3.3.9)

It is easy to show that the error system above is locally asymptotically stable for appropriate choices of the gain matrices K_p and K_v, as is done with the error dynamics of the joint level controller in the previous section. However, it should be noted that the existence of the task level controller depends on the existence of J_O^{-1}, while the joint level controller is independent of the inverse OAT Jacobian, J_O^{-1}. If the determinant of J_O is very small, then the determinant of J_O^{-1} could be very large, which would result in extremely high joint torques. Robot configurations for which |J_O| = 0 are called singular configurations, and task level control cannot be successfully applied at and around such configurations. However, there are several methods with which singularity robustness can be achieved. One of them is the singular robust inverse ([63], [64], [65]). In this method the inverse Jacobian is calculated using the damped least-squares technique for calculating the matrix inverse. This method ensures that stable control can be achieved in the vicinity of the singularity, although it cannot prevent the instability caused exactly at the singular configuration. Various other methods have been proposed which incorporate singularity avoidance in the path planning [66], [67].

3.4 Tool Space Control

In the previous section, using feedforward control, the manipulator system was linearized and decoupled as \ddot{y} = v, i.e., the error system applied as the feedback part of the control law was converted to the task space from the space of the generalized coordinate system by using the transform Equations 3.3.1 and 3.3.2. It is important to note that both the above spaces are inertial spaces, i.e., stationary spaces.
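The damped least-squares ("singular robust") inverse mentioned in Section 3.3 can be sketched as follows; the damping factor k and the example Jacobian are illustrative choices, not values from the thesis.

```python
import numpy as np

# Damped least-squares inverse: J* = J^T (J J^T + k^2 I)^-1.
# Near a singularity the damping factor k bounds the joint velocities
# at the cost of a small tracking error.
def damped_pinv(J, k=0.05):
    m = J.shape[0]
    return J.T @ np.linalg.inv(J @ J.T + (k**2) * np.eye(m))

# Nearly singular 2x2 Jacobian (second row almost parallel to the first).
J = np.array([[1.0, 0.0],
              [1.0, 1e-6]])
target = np.array([0.0, 1.0])
qd_exact = np.linalg.inv(J) @ target      # exact inverse: joint rates ~1e6
qd_damped = damped_pinv(J) @ target       # damped inverse: stays bounded
print(np.linalg.norm(qd_exact), np.linalg.norm(qd_damped))
```

The trade-off is explicit: at the exact singularity the damped inverse still returns finite joint rates, but the commanded task velocity is only approximated.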
There are certain tasks, like online surface estimation, surface tracking, milling and deburring, that can be specified more easily, and sometimes only, in a space located at the end effector of the manipulator, called the tool space. Consider h \in \mathbb{R}^6 to be a tool space vector. It is defined as h = (x, y, z, O, A, T)^T, where (x, y, z)^T denotes a position vector expressed in the tool space and (O, A, T)^T denotes an orientation representation (Orientation, Attitude, Tool angles) [62] expressed in the tool space. Figure 3.4 shows the Cartesian coordinate frame and the tool coordinate frame with the current and desired positions of the end effector. The actual and desired positions can be expressed in both coordinate frames, i.e., the task frame and the tool frame. Also, the coordinates of a particular point in space can be converted from one frame of reference to another using linear homogeneous transformations.

Figure 3.4: The robot task frame and the tool frame.

It is interesting to note that the actual position and orientation of the end effector always remain constant when expressed in the tool frame of reference. This is because the tool space is a non-inertial space. Its frame of reference is attached to the end effector, and so the frame of reference moves with the end effector; thus the coordinates of the end-effector expressed in this frame are always constant, as is shown by Figure 3.4. In order to convert the joint space controller described in Section 3.2 to a tool space controller, the principle used in the previous section can be applied. There are some difficulties associated with this process. Due to the non-inertial properties of the tool space reference frame, it is not feasible to define reference trajectories in this space.
The infeasibility arises from the fact that the reference frame is moving and its exact position and orientation with respect to some inertial frame of reference is not known for every instant of time. Thus, without a priori knowledge of the position and orientation of the tool frame during the planning of the trajectory, one cannot specify the reference trajectory in the tool space. To overcome this lack of a priori knowledge of the position and orientation of the tool frame during the planning phase, the desired trajectories can be specified in an inertial reference frame like the task frame. When the robot arm is in motion, the exact position and orientation of the tool frame can be calculated by measuring the joint angles q. The velocity in the tool frame can also be determined by measuring the joint angle velocities \dot{q}. Using the joint angle and velocity measurements, the exact location of the end-effector frame can be determined online. Hence, the homogeneous transformation relating the task space and the tool space can now be calculated. This homogeneous transformation can then be used to convert the desired trajectory, already specified in the task space, to a desired trajectory in the tool space. This conversion of the desired trajectory from the task space to the tool space needs to be done online. By specifying the desired trajectories in the task space, the advantage of the robot task being represented as end-effector positions and orientations in the world frame can be retained, as is done in the task space controller. Let h_a and y_a be the homogeneous coordinates of a point represented in the tool space and the task space, respectively; h_a and y_a are both 4 \times 4 matrices. Let T be a 4 \times 4 homogeneous transformation that relates the homogeneous task space coordinates to the homogeneous tool space coordinates. Thus,

h_a = T y_a. (3.4.1)

Using the above equation, the desired coordinates expressed in the task space can be converted to the tool space.
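The online conversion of Equation 3.4.1 amounts to one 4×4 matrix product per sample. The sketch below illustrates this with a hypothetical fixed transform T (in the controller, T(q) would come from the forward kinematics at each sample time); the specific rotation and offset are assumptions of the example.

```python
import numpy as np

# Sketch of Equation 3.4.1, h_a = T(q) y_a: a pose given as a 4x4
# homogeneous matrix in the task frame is re-expressed in the tool frame.
def make_T(R, p):
    T = np.eye(4)
    T[:3, :3] = R       # 3x3 rotation block
    T[:3, 3] = p        # 3x1 position block
    return T

# Hypothetical tool-frame transform: 90 deg about z plus a 0.1 m offset.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
T = make_T(Rz90, np.array([0.0, 0.0, 0.1]))

# Desired pose in the task frame: identity orientation at (1, 0, 0).
y_a = make_T(np.eye(3), np.array([1.0, 0.0, 0.0]))
h_a = T @ y_a           # the same pose seen from the tool frame
print(h_a[:3, 3])       # [0., 1., 0.1]
```

Extracting the (O, A, T) angles from the rotation block of h_a then follows the convention of [68], which is not reproduced here.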
Let S_O be the tool OAT Jacobian relating the joint velocities to the velocity in the tool frame. Thus,

\dot{h} = S_O\dot{q}. (3.4.2)

The acceleration in the tool space can be derived by differentiating the above Equation 3.4.2 once with respect to time:

\ddot{h} = S_O\ddot{q} + \dot{S}_O\dot{q}. (3.4.3)

Using the above equation, the robot dynamics in the tool space can be derived from the joint space dynamics as:

D(q)S_O^{-1}(\ddot{h} - \dot{S}_O\dot{q}) + C(q,\dot{q}) + G(q) + J^T f = \tau. (3.4.4)

From the desired task space trajectories, the desired position, velocity and acceleration in the task frame (y_{des}, \dot{y}_{des} and \ddot{y}_{des}) are known for every time instant. Also, the actual positions and velocities of the individual joints (q, \dot{q}) can be measured for every time instant. Note that the actual positions and velocities of the joints are not known during the planning phase. Using Equations 3.4.1, 3.4.2 and 3.4.3, the actual as well as the desired position, velocity and acceleration of the robot in the tool frame can be calculated online. With the knowledge of the actual and desired values of position, velocity and acceleration, the control law in the tool space can be formulated as:

\tau = D(q)S_O^{-1}(v - \dot{S}_O\dot{q}) + C(q,\dot{q}) + G(q) + J^T f, (3.4.5)

where v is the vector of auxiliary inputs and it is defined as:

v = \ddot{h}_{des} + K_v(\dot{h}_{des} - \dot{h}) + K_p(h_{des} - h). (3.4.6)

Figure 3.5 shows the block diagram of the tool space controller. Notice that the desired trajectories are specified in the task frame and then converted to the tool frame using coordinate transformations. The actual position and velocity of the robot are measured in the joint space and then converted to the tool space using the forward kinematics and coordinate transformations. The inverse of the tool space OAT Jacobian, S_O^{-1}, is used to decouple the tool space axes.

Figure 3.5: The tool space computed torque controller.

The combined system of the manipulator and the controller can now be expressed as:

(\ddot{h}_{des} - \ddot{h}) + K_v(\dot{h}_{des} - \dot{h}) + K_p(h_{des} - h) = 0. (3.4.7)

Let the error in the tool space be expressed as:

e_h = h_{des} - h. (3.4.8)

Thus, the error dynamics in the tool space given by Equation 3.4.7 can be formulated as:

\ddot{e}_h + K_v\dot{e}_h + K_p e_h = 0. (3.4.9)

Using the state space representation with x = [e_h\ \dot{e}_h]^T as the state vector, the above Equation 3.4.9 can be written as:

\dot{x} = \begin{bmatrix} 0 & I \\ -K_p & -K_v \end{bmatrix} x. (3.4.10)

It is easy to prove that the error e_h in the tool space asymptotically tends to zero for appropriate choices of the gains K_p and K_v. Note that for the above error dynamics in Equation 3.4.10 to be stable, the gain matrices K_p and K_v must be positive definite. In this section, the control law for position control in the tool space has been formulated. In the next section the relationships between the joint space, the task space and the tool space are explained. These relationships are essential for the implementation of the tool space controller, as the actual and desired positions and velocities that are specified in the task space need to be converted to the tool space at every time step.

3.5 Relationship between the Task Space and Tool Space

For the tool space controller described in the previous section, there is a need to transform the desired positions and velocities specified in the task space to the tool space. Also, the joint positions and velocities measured in the joint space need to be transformed to the tool space. The need to transform these variables to the tool space arises from the fact that the error in position and the error in velocity in the tool space are required to drive the controller. Also, the desired positions cannot be specified in the tool coordinate system, as its location is not known a priori during the planning phase.
These transformations need to be carried out at the sampling frequency, i.e., they need to be calculated for each sample time. Figure 3.6 shows the positions of the task frame and the tool frame. It also depicts the actual and the desired positions of the end effector of the robot arm. The coordinates (position and orientation) of a point in the task space are converted to its tool space coordinates with the help of the homogeneous transformation ^{task}T_{tool}. This section describes the equations and procedures that are used to calculate the position and velocity errors and the desired acceleration in the tool space.

Figure 3.6: The relationship between the task space and tool space frames of reference.

3.5.1 Position error in the tool frame

The positions of the robot in the joint space, the task space and the tool space are denoted by q = (q_1, q_2, q_3, q_4, q_5, q_6)^T, y = (x, y, z, O, A, T)^T and h = (x, y, z, O, A, T)^T, respectively. The homogeneous position coordinates in the task space and the tool space are denoted by 4 \times 4 matrices (y_a and h_a), which have the form

\begin{bmatrix} R & p \\ 0 & 1 \end{bmatrix}_{4 \times 4},

where R is the 3 \times 3 rotation matrix and p is the 3 \times 1 position vector. Let T(q) be the 4 \times 4 homogeneous transformation that transforms positions in the task space to the tool space. Thus

h_a = T(q)y_a. (3.5.1)

Given the rotation matrix, R, of the homogeneous coordinates of a point in a particular frame of reference, the O, A, T angles (coordinates) of that point in that frame of reference can be found using the equations described in [68]. Also, the rotation matrix, R, can be derived given the O, A, T angles of a point. To convert the desired positions specified in the task frame to the tool frame, first the homogeneous coordinates of the point in the task frame, i.e., y_a, are derived from the task space coordinates y using the equations given in [68].
From Equation 3.5.1, the homogeneous coordinates in the tool frame, i.e., h_a, can be found using the transformation T(q), which is calculated online. The coordinates in the tool frame, h, can now be found by converting the homogeneous tool frame coordinates, h_a, to the tool frame coordinates using the equations described in [68]. To find the actual position of the robot in the tool frame, the same procedure used above can be applied. In this case, first the homogeneous coordinates in the task frame are derived from the joint angles using the forward kinematics equations. With the knowledge of the actual and desired positions in the tool frame, the error in position in the tool frame can be defined as e = h_{des} - h.

3.5.2 Velocity error in the tool frame

A general 6 \times 1 velocity vector, i.e., \dot{y} or \dot{h}, in a three-dimensional Euclidean space such as the task space or the tool space is comprised of two parts. The first is a 3 \times 1 linear component of the velocity, denoted as v, which can be written as v = (\dot{x}, \dot{y}, \dot{z})^T. The second part is a 3 \times 1 vector and can be comprised of any general representation of the orientation velocity. The orientation representation chosen in this thesis is the OAT representation described in [68]. Thus the second part of the velocity vector is formed as (\dot{O}, \dot{A}, \dot{T})^T. Now the velocity vectors in the task and tool spaces can be expressed as \dot{y} = (\dot{x}, \dot{y}, \dot{z}, \dot{O}, \dot{A}, \dot{T})^T and \dot{h} = (\dot{x}, \dot{y}, \dot{z}, \dot{O}, \dot{A}, \dot{T})^T, respectively. Let \omega = (\omega_x, \omega_y, \omega_z)^T be a 3 \times 1 angular velocity vector which represents the angular velocity in the x, y and z directions in a three-dimensional Euclidean space. Depending on the choice of the orientation representation, a linear transformation, \Phi, which is a function of O, A and T, can be defined which transforms the angular velocity vector, \omega = [\omega_x\ \omega_y\ \omega_z]^T, in a three-dimensional Euclidean space to the orientation velocity, i.e., (\dot{O}, \dot{A}, \dot{T})^T. The derivation of \Phi is discussed in [68] and [58].
Thus \Phi is a 3 \times 3 matrix and it defines the relation between \omega and (\dot{O}, \dot{A}, \dot{T})^T as:

\begin{bmatrix} \dot{O} \\ \dot{A} \\ \dot{T} \end{bmatrix} = \Phi \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}, (3.5.2)

and

\begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix} = \Phi^{-1} \begin{bmatrix} \dot{O} \\ \dot{A} \\ \dot{T} \end{bmatrix}, (3.5.3)

if \Phi^{-1} exists. The velocity error in the tool frame is defined as \dot{e} = \dot{h}_{des} - \dot{h}. The desired velocity, \dot{y}_{des}, is specified in the task frame. To find the desired velocity, \dot{h}_{des}, in the tool frame, one needs to find the transformation that relates the velocity in the task frame to the velocity in the tool frame. The rotational velocity in the task frame is specified as a 3 \times 1 velocity vector in the OAT orientation representation as \dot{O}_{des}, \dot{A}_{des} and \dot{T}_{des}. Let the transformation T(q), which relates a position in the task frame to a position in the tool frame, be defined as:

T = \begin{bmatrix} n_x & s_x & a_x & p_x \\ n_y & s_y & a_y & p_y \\ n_z & s_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}.

Here [n\ s\ a] represents the 3 \times 3 rotation matrix and p represents the 3 \times 1 position vector. Let \Phi_{task} and \Phi_{tool} be the linear transformations that relate the OAT velocities (\dot{O}, \dot{A}, \dot{T})^T to the angular velocities \omega = (\omega_x, \omega_y, \omega_z)^T in their respective three-dimensional Euclidean spaces, i.e., the task space and the tool space. The relation between the task space velocity and the tool space velocity vectors can now be formulated as (from [57]):

\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}_{tool} = \begin{bmatrix} n_x & n_y & n_z & (p \times n)_x & (p \times n)_y & (p \times n)_z \\ s_x & s_y & s_z & (p \times s)_x & (p \times s)_y & (p \times s)_z \\ a_x & a_y & a_z & (p \times a)_x & (p \times a)_y & (p \times a)_z \\ 0 & 0 & 0 & n_x & n_y & n_z \\ 0 & 0 & 0 & s_x & s_y & s_z \\ 0 & 0 & 0 & a_x & a_y & a_z \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}_{task}.

Using the above equation, the tool space desired velocity, \dot{h}_{des}, can be derived from \dot{y}_{des}, the desired velocity specified in the task space. The actual velocity in the tool space, \dot{h}, can similarly be derived from \dot{y}, the actual velocity in the task space. The above equation can be written as:

\dot{h}_{des} = A\dot{y}_{des}. (3.5.4)

Differentiating the above equation once with respect to time, the relation between the accelerations in the task space and the tool space can now be derived as:

\ddot{h}_{des} = A\ddot{y}_{des} + \dot{A}\dot{y}_{des}. (3.5.6)

Using the above equation, the desired acceleration in the tool space can be calculated.
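The 6×6 Euclidean velocity transformation above can be sketched as follows. This is a minimal illustration under assumed sign and ordering conventions: the rows implement v_tool = R^T (v + ω × p) and ω_tool = R^T ω with R = [n s a] and p taken from T; the Φ factors for the OAT rates are omitted here.

```python
import numpy as np

def skew(p):
    # cross-product matrix: skew(p) @ w == np.cross(p, w)
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def velocity_transform(T):
    # 6x6 map from a task-frame twist (v, w) to the tool frame, following
    # the row pattern above (rows n, s, a with the (p x n) etc. columns).
    R, p = T[:3, :3], T[:3, 3]
    F = np.zeros((6, 6))
    F[:3, :3] = R.T
    F[:3, 3:] = -R.T @ skew(p)   # R^T (w x p), since w x p = -skew(p) @ w
    F[3:, 3:] = R.T
    return F

# Pure rotation of the frame about z by 90 deg, no offset:
Rz90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T = np.eye(4)
T[:3, :3] = Rz90
F = velocity_transform(T)
v_task = np.array([1.0, 0, 0, 0, 0, 0])   # unit x velocity, no rotation
print(F @ v_task)   # task x-velocity appears along -y in the tool frame
```

Pre- and post-multiplying F by the Φ blocks, as described next, then yields the full OAT-rate transformation A of Equation 3.5.4.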
The linear controller output, v, can now be calculated from h, \dot{h}, h_{des}, \dot{h}_{des}, \ddot{h}_{des}, K_p and K_v using Equation 3.4.6. It should be noted that the transformation A is composed of three separate matrices as:

A = \begin{bmatrix} I & 0 \\ 0 & \Phi_{tool} \end{bmatrix} F \begin{bmatrix} I & 0 \\ 0 & \Phi_{task}^{-1} \end{bmatrix}.

From Equation 3.5.4, it can be seen that F is non-singular. This is because the 3 \times 3 rotation matrix R = [n\ s\ a] is non-singular by definition.

Chapter 4

4.1 Hybrid Force/Position Control

The contact forces generated between the robot and the environment, when the robot comes in contact with the environment, need to be controlled. The idea of hybrid force/position control, proposed by Raibert and Craig [4], attempts to control the interaction forces between the manipulator and the environment in certain directions, while simultaneously controlling the position and orientation of the manipulator in the other directions. It is based on Mason's concept of using a set of position and force constraints depending on the mechanical and geometric characteristics of the performed task [8]. The task to be performed is broken down into elemental components that are defined by a particular set of contacting surfaces. A set of 'Natural Constraints' is associated with each elemental component that results from the particular mechanical and geometric characteristics of the task configuration. Additional constraints, called 'Artificial Constraints', are introduced in accordance with these criteria to specify desired motions or force patterns in the task configuration. Thus for each desired trajectory in either force or position, an artificial constraint is defined. For each task configuration a generalized surface can be defined in a constraint space having n degrees of freedom, with position constraints along the normal to this surface and force constraints along the tangents to this surface.
These two types of constraints, force and position, partition the degrees of freedom of possible hand motions into two orthogonal sets that must be controlled according to different criteria. The natural constraints are used to partition the degrees of freedom into a position-controlled subset and a force-controlled subset. The artificial constraints are used to specify the desired position and force trajectories. A compliance selection vector, s, which is a binary N-tuple, specifies the degrees of freedom in the constraint frame that are under force control (indicated by s_j = 1). Setting s_j = 0 specifies the degrees of freedom under position control. Also, a compliance selection matrix S is defined as S = diag(s). A general conceptual organization of the hybrid controller is given in Figure 4.1. In the figure there are two complementary sets of feedback loops, position and force, each with its own sensory system and control law, that control the same plant, i.e., the manipulator. The force and position controlled directions are selected with the help of the selection matrix S. In this control strategy the actuator drive signal for each joint corresponds to that particular joint's instantaneous contribution to satisfying each position and force constraint. Thus the actuator control signal for the i-th joint has N components, one for each position-controlled direction and one for each force-controlled direction in the constraint frame. The control law is given as:

\tau_i = \sum_{j=1}^{N} \{ \Gamma_{ij}[s_j \Delta f_j] + \Psi_{ij}[(1 - s_j)\Delta x_j] \},

where \tau_i is the torque applied by the i-th actuator, \Delta f_j is the force error in the j-th degree of freedom of the constraint frame, \Delta x_j is the position error in the j-th degree of freedom of the constraint frame, \Gamma_{ij} and \Psi_{ij} are the force and position compensation functions, respectively, for the j-th input and the i-th output, and s_j is the j-th component of the compliance selection vector.

Figure 4.1: Hybrid force/position controller.

Using this approach, the interaction forces between the manipulator and the environment can be controlled simultaneously with the position of the manipulator in free space. However, it is extremely tedious and impractical to determine the force and position compensation functions, \Gamma_{ij} and \Psi_{ij}, for all possible configurations of the manipulator, and a large number of inverse kinematics calculations is also required in this process. Further, as shown in [34], this formulation of the hybrid force/position control problem gives rise to inherent kinematic stability issues. The instability occurs due to the interaction of the Jacobian matrix J and the selection matrix S. A conceptually simpler control strategy for hybrid force/position control of the manipulator, implemented directly in the task space, is discussed in the following section.

4.2 Hybrid Force/Position Controller in the Task Space

Using the nonlinear feedback linearization control law from Equation 3.3.4, the dynamics of the robot system can be linearized and decoupled as:

\ddot{y} = v, (4.2.1)

where v is the vector of auxiliary inputs (commanded acceleration). The above system is decoupled with respect to the task space axes. Thus the commanded acceleration in any of the six directions of the task space can be controlled independently of the rest of the task space directions. This decoupling is very important for the realization of simultaneous control of forces in the constrained directions and position in the unconstrained directions after contact is established. Let the superscript u denote a quantity in the unconstrained directions and the superscript c denote a quantity in the constrained directions.
Equation 4.2.1 can be written as \ddot{y}^u = v^u and \ddot{y}^c = v^c [52]. Before impact, position control is required in both the unconstrained as well as the constrained directions. Applying the PD position control law from Equation 3.3.6,

v^u = \ddot{y}^u_{des} + K_v(\dot{y}^u_{des} - \dot{y}^u) + K_p(y^u_{des} - y^u), \quad t \le t_{sw}, (4.2.2)

v^c = \ddot{y}^c_{des} + K_v(\dot{y}^c_{des} - \dot{y}^c) + K_p(y^c_{des} - y^c), \quad t \le t_{sw}, (4.2.3)

where t_{sw} is the instant of the detection of impact. By applying the above control schemes, the error dynamics in the unconstrained as well as the constrained directions before impact can be expressed as:

\ddot{e}^u + K_v\dot{e}^u + K_p e^u = 0, (4.2.4)

\ddot{e}^c + K_v\dot{e}^c + K_p e^c = 0, (4.2.5)

where e^u = y^u_{des} - y^u and e^c = y^c_{des} - y^c. Using conventional state space pole placement methods, the desired transient and steady state responses can be achieved. After detection of impact, position control is still needed in the unconstrained directions. Applying a PD position control law similar to Equation 4.2.2, the desired transient and steady state responses can be achieved:

v^u = \ddot{y}^u_{des} + K_{vu}(\dot{y}^u_{des} - \dot{y}^u) + K_{pu}(y^u_{des} - y^u), \quad t \ge t_{sw}. (4.2.6)

It is clear from the above equations that the error dynamics of the position tracking of the closed-loop system are asymptotically stable. For the constrained directions, explicit force control is used. This makes knowledge of the exact force model unnecessary for contact force control. The objective of the force control scheme is to track a desired force trajectory in the constrained directions. A force control scheme of the following form is considered:

v^c = K_{pf}(f^c_{des} - f^c) + K_{vf}(\dot{f}^c_{des} - \dot{f}^c) + K_{if}\int (f^c_{des} - f^c)\,dt, \quad t \ge t_{sw}, (4.2.7)

where K_{pf}, K_{vf} and K_{if} are the proportional, derivative and integral gains for achieving the desired performance, and f^c_{des} and f^c are the desired and the actual forces in the constrained directions expressed in the task space coordinates.
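The post-impact auxiliary input, combining the position law (Equation 4.2.6) and the force law (Equation 4.2.7) through the selection matrix S, can be sketched as follows; the gains, the choice of S and the integrator handling are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Sketch of the hybrid auxiliary input after impact: the selection
# matrix S routes each task direction either to the explicit force law
# (4.2.7) or to the PD position law (4.2.6).
S = np.diag([0.0, 0, 1, 0, 0, 0])        # force control along z only (assumed)
I6 = np.eye(6)
Kp, Kv = 100 * I6, 20 * I6               # position gains (illustrative)
Kpf, Kvf, Kif = 5 * I6, 0.5 * I6, 1.0 * I6  # force PID gains (illustrative)

def hybrid_v(y, yd, y_des, yd_des, ydd_des, f, fd, f_des, fd_des, f_int):
    v_pos = ydd_des + Kv @ (yd_des - yd) + Kp @ (y_des - y)
    v_frc = Kpf @ (f_des - f) + Kvf @ (fd_des - fd) + Kif @ f_int
    return (I6 - S) @ v_pos + S @ v_frc

v = hybrid_v(y=np.zeros(6), yd=np.zeros(6),
             y_des=np.array([0.1, 0, 0, 0, 0, 0]), yd_des=np.zeros(6),
             ydd_des=np.zeros(6),
             f=np.array([0, 0, 8.0, 0, 0, 0]), fd=np.zeros(6),
             f_des=np.array([0, 0, 10.0, 0, 0, 0]), fd_des=np.zeros(6),
             f_int=np.zeros(6))
print(v)  # x: position term 100*0.1 = 10; z: force term 5*(10-8) = 10
```

Because the plant is decoupled (Equation 4.2.1), the two laws never act on the same direction, which is the orthogonal-complements idea carried into the tool space in Section 4.3.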
Using this task space hybrid controller, one can control the force applied by the robot in any of the task space directions independently of the rest of the directions. Figure 4.2 shows the diagram of the task space hybrid force/position controller. The matrix [S] is the selection matrix that is used to select the directions along which force is controlled. The matrix [I - S] selects the directions along which position is to be controlled. The desired position trajectory generator generates the desired position trajectories and the desired force trajectory generator generates the desired force trajectories in the task space. The sufficient conditions for stability of this controller are provided in [36].

Figure 4.2: Task space hybrid force/position controller.

A major drawback of this strategy is the restriction on the directions in which the force can be controlled. This means that for this strategy to work, the constraint surface must be aligned along the task space directions in order to completely control the forces of interaction. As pointed out by Duffy [50], this is a big restriction when dealing with objects that are not aligned with the task space axes. For a constraint surface that is not aligned along the task space axes, the interaction force cannot be controlled unless the controller axes are aligned with the object surface. This means that the controller equation needs to be changed online to guarantee the alignment of the object surface and the task space axes, which may lead to instability of the system.
Also, this hybrid force/position control scheme is not robust to the occurrence of impact in a direction in which motion control has been planned [69]. In the case of such an impact, the end-effector is not compliant along that direction. This gives rise to large interaction forces between the end-effector and the environment, which cannot be controlled and may damage the manipulator or the environment. Further, this scheme requires detailed knowledge of the environment geometry, and it is therefore unsuitable for use in poorly structured environments and in the presence of unplanned objects. For these reasons, it is not practical to use task space hybrid force/position control to interact with environments whose shape is not known a priori. The next section presents the design of a new controller that overcomes the abovementioned drawbacks of the task space controller.

4.3 Hybrid Force/Position Controller in the Tool Space

It is important to note that the above idea of hybrid force/position control is based on the notion of 'orthogonal complements'. This means that force and position are controlled in orthogonal (non-overlapping) subspaces of the task space. Carrying this idea of orthogonal complements further, hybrid force/position control can be implemented in the tool space. The advantage of this method is that the tool space is not a fixed (inertial) space; it moves constantly with the end-effector. Thus, if the tool frame can be dynamically positioned in such a manner that its axes align with the direction of the force to be controlled, then the force can be controlled in any arbitrary direction.

However, this method does pose some implementation challenges, as described in Section 3.4. This is because the frame in which the control is implemented is a non-inertial frame, and the exact location of this frame is not known a priori during the planning phase.
Thus, specifying desired positions, orientations, forces and torques in the tool reference frame poses a challenge. However, taking into account the fact that the exact location of the tool frame is known during the execution phase, the desired positions and orientations can be specified in an inertial frame, e.g. the task frame, and then converted to the tool frame online using coordinate transformations. This conversion is abetted by the knowledge of the exact location of the tool frame with respect to the task frame during the execution phase. The desired force input is specified directly in the tool reference frame, because the force to be applied by the manipulator is constant in the z direction of the tool reference frame, irrespective of the position and orientation of that frame. The interaction force is measured using a wrist-mounted force/torque sensor and can be transformed directly into the tool reference frame with the help of a linear transformation.

From Section 3.4, the dynamics of the robot manipulator expressed in the tool space can be written as:

\tau = D(q) J_t^{-1} (\ddot{h} - \dot{J}_t \dot{q}) + C(q, \dot{q}) + G(q) + J^T f.

Using the control law formulated in Equation 3.4.5,

\tau = D(q) J_t^{-1} (v - \dot{J}_t \dot{q}) + C(q, \dot{q}) + G(q) + J^T f,

the manipulator system can now be linearized and decoupled as:

\ddot{h} = v,   (4.3.1)

where v is the vector of auxiliary inputs (commanded accelerations). The above system is decoupled with respect to the tool frame axes; thus the commanded acceleration in any of the tool space directions can be controlled independently of the rest of the tool space directions. Let the superscript u denote a quantity in the unconstrained directions of the tool space and the superscript c denote a quantity in the constrained directions of the tool space. Then Equation 4.3.1 can be written as two separate equations: \ddot{h}^u = v^u and \ddot{h}^c = v^c.

For the unconstrained directions, position control is required before and after impact.
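The linearizing control law above can be sketched as a single function. This is a sketch under assumptions: the matrices D, C, G, Jt and Jt_dot are presumed to come from a dynamics model of the arm, and the function name and signature are illustrative, not the thesis's implementation.

```python
# Sketch of the computed-torque law of Eq. (3.4.5) that yields hdd = v:
# tau = D(q) Jt^{-1} (v - Jt_dot qd) + C(q, qd) + G(q) + Jt^T f.
import numpy as np

def tool_space_torque(D, C, G, Jt, Jt_dot, qd, f, v):
    # Solve Jt x = (v - Jt_dot qd) instead of forming the inverse explicitly.
    return D @ np.linalg.solve(Jt, v - Jt_dot @ qd) + C + G + Jt.T @ f
```

Substituting this torque into the tool space dynamics cancels the nonlinear terms and leaves the double integrator \ddot{h} = v.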
Applying the tool space PD position control law from Equation 3.4.6 in the unconstrained directions:

v^u = \ddot{h}_{des}^u + K_v^t(\dot{h}_{des}^u - \dot{h}^u) + K_p^t(h_{des}^u - h^u).   (4.3.2)

Using Equation 4.3.2, the position error dynamics in the unconstrained space can be written as:

\ddot{e}^u + K_v^t \dot{e}^u + K_p^t e^u = 0,   (4.3.3)

where e^u = h_{des}^u - h^u. Thus, the desired transient and steady-state responses can be achieved using conventional pole placement methods.

For the constrained space, position control needs to be applied before impact; after the detection of impact, the interaction force needs to be controlled. PD position control in the tool space is used in the constrained directions before the detection of impact, and explicit force control is used after it. Thus the constrained space auxiliary input can be expressed as:

v^c = \ddot{h}_{des}^c + K_v^t(\dot{h}_{des}^c - \dot{h}^c) + K_p^t(h_{des}^c - h^c),  t \le t_{sw},   (4.3.4)

v^c = K_{vf}(\dot{f}_{des}^c - \dot{f}^c) + K_{pf}(f_{des}^c - f^c) + K_{if} \int (f_{des}^c - f^c) dt,  t \ge t_{sw}.   (4.3.5)

Combining the position and force control laws through the selection matrix, the complete auxiliary input is:

v = [I - S](\ddot{h}_{des}^u + K_v^t(\dot{h}_{des}^u - \dot{h}^u) + K_p^t(h_{des}^u - h^u)) + [S](K_{vf}(\dot{f}_{des}^c - \dot{f}^c) + K_{pf}(f_{des}^c - f^c) + K_{if} \int (f_{des}^c - f^c) dt).   (4.4.8)

Here [S] is the selection matrix that decides which tool space axes are under position control and which are under force control. Combining the equations of the manipulator and the controller, the linearized and decoupled equations of the complete system are given by:

\ddot{h} = v.

The above equation can be expanded as:

[I - S]\ddot{h}^u + [S]\ddot{h}^c = [I - S](\ddot{h}_{des}^u + K_v^t(\dot{h}_{des}^u - \dot{h}^u) + K_p^t(h_{des}^u - h^u)) + [S](K_{vf}(\dot{f}_{des}^c - \dot{f}^c) + K_{pf}(f_{des}^c - f^c) + K_{if} \int (f_{des}^c - f^c) dt).   (4.4.9)

Equating the quantities in [S] and [I - S], we get two separate equations for the constrained and unconstrained directions respectively:

\ddot{h}^c = K_{vf}(\dot{f}_{des}^c - \dot{f}^c) + K_{pf}(f_{des}^c - f^c) + K_{if} \int (f_{des}^c - f^c) dt   (4.4.10)

for the constrained directions and

\ddot{h}^u = \ddot{h}_{des}^u + K_v^t(\dot{h}_{des}^u - \dot{h}^u) + K_p^t(h_{des}^u - h^u)   (4.4.11)

for the unconstrained directions.

For the unconstrained directions, the error dynamics can be expressed as:

\ddot{e}^u + K_v^t \dot{e}^u + K_p^t e^u = 0.   (4.4.12)

For appropriate choices of K_p^t and K_v^t, the above error dynamics can be shown to be stable.
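The blending of the two loops through the selection matrix can be sketched directly. The particular choice of S below (force control on the tool z axis only, component 2 of the 6-vector [x, y, z, O, A, T]) is illustrative; it matches the surface-tracking tasks discussed later but is not mandated by the formulation.

```python
# Sketch of the blended auxiliary input of Eq. (4.4.8).
import numpy as np

S = np.diag([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # force control on tool z only
I6 = np.eye(6)

def hybrid_aux_input(v_pos, v_force):
    """v = [I - S] v_pos + [S] v_force: each axis obeys exactly one loop."""
    return (I6 - S) @ v_pos + S @ v_force
```

Because S and I - S project onto complementary sets of axes, each tool space direction is driven by exactly one of the two control laws, which is the orthogonal-complements idea in matrix form.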
For the constrained directions, let us assume that the actual acceleration \ddot{h}^c in these directions is zero. Then Equation 4.4.10 can be written as:

K_{vf}(\dot{f}_{des}^c - \dot{f}^c) + K_{pf}(f_{des}^c - f^c) + K_{if} \int (f_{des}^c - f^c) dt = 0.   (4.4.13)

It can readily be shown that Equation 4.4.13 is asymptotically stable for a proper selection of K_{pf}, K_{vf} and K_{if}. Thus the combined system of the manipulator and the controller is asymptotically stable.

Chapter 5

5.1 Simulation Studies - Setup

Intensive simulations were conducted at the Robotics and Automation Laboratory at Michigan State University in order to verify the theoretical results. A 6-DOF Puma 560 robot arm [68] was used for the simulation studies. The end-effector was mounted with a JR3 force/torque sensor, which senses the forces and torques directly in the hand coordinate frame. The sampling period for the control software was set at 1 ms.

Experimental comparisons between the existing Task Space Controller and the proposed Tool Space Controller for pure position control were conducted. Simulation studies for the proposed Tool Space Hybrid Force/Position Controller in contact with a compliant surface aligned along the manipulator reference frame were carried out, and the results were compared with the existing Task Space Hybrid Force/Position Controller performing a similar task. Finally, simulations were carried out on the proposed Tool Space Hybrid Force/Position Controller in contact with a constraint surface of unknown shape.

The constraint surfaces were simulated as linear spring-dampers with large stiffness coefficients [70]. Thus the environment is modeled as:

F(t) = -K_e (x(t) - x_0),

where F(t) is the force generated by the environment, x(t) is the actual position of the robot, x_0 is the position of the wall and K_e is the stiffness coefficient. For the experiments, K_e was taken to be 1000 N/m. The moments about the x, y and z axes were assumed to be zero for simplification purposes.
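The environment model above amounts to a one-sided linear spring. A minimal sketch, taking the wall along a single coordinate for illustration:

```python
# Linear-spring environment model used in the simulations; K_e is the
# stiffness value stated in the text.
K_e = 1000.0  # stiffness coefficient, N/m

def wall_force(x, x0=0.0):
    """F(t) = -K_e (x(t) - x0) once the end-effector penetrates the wall
    at x0; zero in free space (no pulling force before contact)."""
    return -K_e * (x - x0) if x > x0 else 0.0
```

With K_e = 1000 N/m, a penetration of 1 cm produces the 10 N reaction force used as the set point in the tracking experiments.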
A frictionless contact was assumed between the end-effector and the environment. This scenario, though idealized, can be approximated in practice using a roller-ball contact assembly.

5.2 Position Control Experiments

Simulations were carried out to compare the task space controller and the tool space controller for pure position control. To make the simulation more demanding dynamically, instead of straight lines or movement on a horizontal plane, the desired path of the end-effector was chosen as a circle traversed with a constant angular velocity ω on a plane that is at an angle to the XOY plane. The desired orientations were taken to be continuous sinusoids. The desired paths were:

x_{des} = 0.707 R cos(ωt) + 0.4
y_{des} = R sin(ωt) + 0.3
z_{des} = 0.707 R cos(ωt) + 0.1
O_{des} = -0.5 sin(ωt) + 0.5
A_{des} = -0.5 sin(ωt) + 0.5
T_{des} = -0.5 sin(ωt) + 0.5

where R = 0.25 m and ω = 0.318 rad/s. The same task was simulated for both the task level and the tool level controllers. The desired and actual position/velocity and the errors in position/velocity of the robot in the Cartesian task frame were plotted against time for both controllers. The results of these experiments are shown in Figures 5.1 to 5.4.

Figures 5.1 - 5.4 show the plots of the experiment carried out for task level position control. It can be noted that the error in position for these experiments is of the order of 10^{-3} to 10^{-4} meters. Better results can be achieved by proper tuning of the controller gains K_p and K_v for the desired dynamic performance. Figure 5.1 (a) - (f) shows the actual and desired positions and orientations for the task level position controller. Figure 5.2 (a) - (f) depicts the actual and desired velocities in the six independent directions of the task space. From these two figures we see that the desired and actual position and velocity are nearly the same, which indicates that the system has very good tracking properties.
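The desired tilted-circle path above can be generated as, for example, the following sketch. The value of ω follows the reading of the text; the function name is illustrative.

```python
# Desired tilted-circle position and sinusoidal OAT orientation from the
# position-control experiment.
import math

R, w = 0.25, 0.318  # radius (m) and angular velocity (rad/s)

def desired_pose(t):
    x = 0.707 * R * math.cos(w * t) + 0.4
    y = R * math.sin(w * t) + 0.3
    z = 0.707 * R * math.cos(w * t) + 0.1
    O = A = T = -0.5 * math.sin(w * t) + 0.5
    return x, y, z, O, A, T
```

Since x and z share the same 0.707 R cos(ωt) term, the circle lies on a plane tilted 45 degrees out of the XOY plane, which is what makes the task dynamically demanding in all six directions at once.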
Figure 5.3 (a) - (f) plots the position errors for task level position control. The errors in velocity for task level position control are shown in Figure 5.4 (a) - (f). The errors in position and velocity are very small, indicating good dynamic response of the controller.

The same experiment was carried out for the tool space position controller. For comparison with the task space controller, the Cartesian task space positions and velocities for the tool space controller are plotted in Figures 5.5 and 5.6 respectively. The desired positions and velocities for the tool space controller are specified in the task space frame of reference, which is an inertial frame. Hence, the error plots for position and velocity of the tool space controller are also plotted in the Cartesian task space, in Figures 5.7 and 5.8 respectively.

Figure 5.5 (a) - (f) shows the actual and desired positions in the task space frame of reference. The actual and desired velocities for the tool space position controller are plotted in the task frame in Figure 5.6 (a) - (f). Figure 5.7 (a) - (f) depicts the position errors for the tool level controller expressed in the task frame. The velocity errors for the tool level controller expressed in the task frame of reference are shown in Figure 5.8 (a) - (f).

The gains for orientation control are kept moderately high to achieve a faster response to changes in orientation. This prevents excessively large interaction forces in the directions that are not under force control, by quickly aligning the tool space z axis with the direction of the sensed force. The error plots in the task space and the tool space are nearly of the same magnitude and have a similar profile, because the task space and the tool space are related by a linear transformation.
Numerous such experiments for position tracking were conducted, and they gave similar results. Thus, from these experiments it can be concluded that position control can be implemented in the tool space.

Figure 5.1: The actual and desired position and orientation for task level position control.

Figure 5.2: The actual and desired velocities for task level position control.

Figure 5.3: The error in position for task level position control.
Figure 5.4: The error in velocity for task level position control.

Figure 5.5: The actual and desired positions for the tool space controller expressed in the task space reference frame.

Figure 5.6: The actual and desired velocities for the tool space controller expressed in the task space reference frame.
Figure 5.7: The errors in position for the tool space position controller expressed in the task coordinate frame.

Figure 5.8: The errors in velocity for the tool space position controller expressed in the task coordinate frame.

5.3 Hybrid Force/Position Control Experiments

A hybrid task space force/position controller and a hybrid tool space force/position controller were simulated for the task of surface tracking. The task involved moving along the surface of a horizontal table while applying a constant force of 10 N perpendicular to the surface of the table. At the end of 12 seconds a change is effected in the magnitude of the desired force, and it is ramped to 13 N within 1 second.

The robot is initialized to the position x = 0.4 m, y = 0.5 m and z = 0.2 m. The initial orientation of the robot is such that it is aligned with the normal of the table and pointing towards the table; in this position the task space OAT angles are O = 0 rad, A = 0 rad and T = 0 rad. Thus the robot is positioned exactly at the surface of the table and is pointing directly at the table surface. The robot moves towards the table under pure position control and contact is established.
As soon as impact is detected, the force control loop takes over and the interaction force with the table is regulated to the pre-specified desired value of 10 N. At the end of 10 seconds the robot is made to move along the surface of the table; to achieve this motion, the desired position of the robot in the x and y directions is changed from 0.4 m to 0.5 m and from 0.5 m to 0.35 m respectively. The same experimental setup is implemented for both the task level and the tool level hybrid controllers, and the outputs of the two controllers are compared.

Figures 5.9 - 5.14 are the plots for the hybrid task space force/position controller performing the aforementioned surface tracking task. Figure 5.9 (a) - (f) shows the actual and desired positions/orientations of the manipulator in the task space. Figure 5.10 (a) - (f) depicts the actual and desired velocities in the task space. Figure 5.11 (a) - (f) shows the Cartesian position errors in the task space and Figure 5.12 (a) - (f) depicts the Cartesian velocity errors in the task space for the task space hybrid force/position controller. Note that the position and velocity errors in the z direction are not applied to the controller, because explicit force control is used in the z direction. Figure 5.13 (a) - (c) depicts the actual and desired force profiles for the Cartesian task space x, y and z directions for the task space hybrid controller. The errors in the Cartesian space force in the x, y and z directions for the task space hybrid controller are shown in Figure 5.14 (a) - (c).

Initially the manipulator is under position control. When the interaction force between the manipulator and the environment exceeds a threshold value, the z direction of the task space is switched to force control. This occurs at t = 2.5 s. The force in the z direction is then regulated to a steady state value of 10 N.
When t = 10 s, the manipulator is commanded to move along the surface of the table while maintaining contact with the table surface at a constant force of 10 N. At the end of 12 s the desired force is ramped to 13 N within an interval of 1 s; note that the manipulator is still moving along the table surface at that time. The position and velocity errors in the z direction are quite large after impact, but the z direction is under force control, so these errors are filtered out by the selection matrix. The force error in the z direction is also very small for steady state values of the desired force. The dynamic response of the force control loop is not very good, mainly due to the high-gain nature of the feedback system, and it needs proper tuning.

Figure 5.9: Actual and desired positions for task space hybrid force/position control.

Figure 5.10: Actual and desired velocities for task space hybrid force/position control.
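The desired normal-force profile just described (zero until impact, regulation at 10 N, ramp to 13 N between t = 12 s and 13 s) can be sketched as a simple reference generator. The impact-time argument here is illustrative; in the simulations the switch is triggered by the impact-detection threshold, not by a fixed time.

```python
# Sketch of the desired force reference for the surface-tracking runs.
def desired_force(t, t_impact=2.5):
    if t < t_impact:
        return 0.0              # pure position control before contact
    if t < 12.0:
        return 10.0             # regulate to the 10 N set point
    if t < 13.0:
        return 10.0 + 3.0 * (t - 12.0)  # linear 1 s ramp to 13 N
    return 13.0
```
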
Figure 5.11: Error in position for task space hybrid force/position control.

Figure 5.12: Error in velocity for task space hybrid force/position control.

Figure 5.13: The actual and desired force profiles for the task space hybrid force/position controller.

Figure 5.14: The error in the force profiles for the task space hybrid force/position controller.

The same experiment was carried out for the tool space controller, but in this case the robot was initialized such that the end-effector was at an angle with the table surface. The initial location of the end-effector in the task space was set to x = 0.4 m, y = 0.5 m and z = 0.2 m. The initial task space orientations were set to O = -1.6 rad, A = 0.1 rad and T = -0.1 rad. The robot was then commanded to move along the z direction under position control till impact was detected.
After the detection of impact, the z-axis of the tool frame was switched to force control. Similar to the task space case, at the end of 10 s the robot was commanded to move to the task space location x = 0.6 m and y = 0.35 m, and at the end of 12 s the desired interaction force was ramped to 13 N within 1 second. The experiment was run for 20 s.

Figure 5.15 (a) - (f) shows the actual and desired Cartesian task space positions and orientations of the manipulator under tool space hybrid force/position control. Figure 5.16 (a) - (f) shows the actual and desired velocities in the task space. Figure 5.17 (a) - (f) depicts the errors in the task space position and Figure 5.18 (a) - (f) shows the task space velocity errors. In this simulation, since the constraint surface is aligned along the task space axes, the errors in the task space position and velocity for the tool space hybrid force/position controller tend to zero. These plots are given in order to compare and contrast the performance of the task space and tool space hybrid force/position controllers. Further, the errors in position and velocity expressed in the tool space, which are actually used to drive the tool space controller, are shown in Figures 5.19 and 5.20. Note in these figures that the position and velocity errors in the x, y, O, A and T directions asymptotically tend to zero.

The interaction forces expressed in the Cartesian task space are shown in Figure 5.21 (a) - (c). Note that the interaction force is directed along only the task space z direction, which is perpendicular to the table surface. Figure 5.22 (a) - (c) shows the interaction forces measured in the tool space frame of reference. The errors in the interaction forces expressed in the tool space reference frame are shown in Figure 5.23 (a) - (c); the force error in the z direction of the tool space reference frame is used to drive the force controller.
Note that the measured interaction force is along the negative z-axis in the task reference frame, while it is along the positive z-axis when expressed in the tool reference frame. This is due to the orientation of the tool reference frame: for this simulation, the z-axis of the tool reference frame is directed along the negative z direction of the task frame.

For this simulation the end-effector was initialized so that it came into contact with the table surface at an angle, not perpendicular to it. So, initially, the end-effector is not aligned with the direction of the interaction force, and some force transients are measured along the tool space x and y axes. But the orientation control on the manipulator is structured in such a fashion that the z-axis of the end-effector aligns with the direction of the sensed interaction force. Hence the force measured along the tool space x and y axes tends to zero as the end-effector z-axis aligns with the direction of the interaction force.

Figure 5.15: The actual and desired positions, expressed in the task space, for the tool space hybrid force/position controller.
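The alignment behavior described above can be illustrated by computing the rotation that carries the current tool z-axis onto the sensed force direction. This is a sketch of the alignment objective only, not the thesis's exact orientation law; the axis-angle representation and the function name are assumptions.

```python
# Sketch: axis-angle orientation error that would align the tool z-axis
# with the direction of the sensed interaction force.
import numpy as np

def align_error(z_axis, f_sensed):
    f_dir = f_sensed / np.linalg.norm(f_sensed)
    axis = np.cross(z_axis, f_dir)                       # rotation axis
    angle = np.arccos(np.clip(np.dot(z_axis, f_dir), -1.0, 1.0))
    n = np.linalg.norm(axis)
    return (axis / n) * angle if n > 1e-9 else np.zeros(3)
```

When the sensed force already lies along the tool z-axis the error is zero, so the orientation loop is inactive once alignment is achieved, and the transients along the tool x and y axes die out as described.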
Figure 5.16: The actual and desired velocities, expressed in the task space, for the tool space hybrid force/position controller.

Figure 5.17: The position errors expressed in the task space for the tool space hybrid force/position controller.

Figure 5.18: The errors in velocity, expressed in the task space, for the tool space hybrid force/position controller.
Figure 5.19: Position errors expressed in the Cartesian tool space for the tool space hybrid force/position controller.

Figure 5.20: Velocity errors expressed in the Cartesian tool space for the tool space hybrid force/position controller.

Figure 5.21: The actual and desired forces expressed in the task space for the tool space hybrid force/position controller.

Figure 5.22: The actual and desired forces expressed in the tool space frame of reference for the tool space hybrid force/position controller.

Figure 5.23: The force error expressed in the tool space frame of reference for the tool space hybrid force/position controller.
5.4 Tracking a Curve in Two Dimensions

When an object is completely unknown, no task can be specified on it; some rough a priori knowledge is necessary. In this experiment a tool space hybrid force/position controller was simulated for the task of tracking a flat object whose contour is not known, but the plane in which the contour lies is known. This is similar to the problem defined in [30]. The surface to be tracked is simulated as a parabolic cylinder with its axis along the y-axis of the task space. In the x-z plane the profile of the cylinder is a parabola, with equation:

z = 6x^2 - 3x.

Note that the shape of the surface is not input to the controller directly; the controller uses the direction of the sensed interaction force to gauge the direction of the surface normal. Figure 5.24 shows the parabolic surface to be tracked.

The robot is initialized to the task space position x = 0.0 m, y = 0.2 m and z = 0.0 m. The orientation of the end-effector is set at O = -π/2, A = 0 and T = 0. Thus, in the initial position, the robot is pointing along the negative task space z-axis. The robot is commanded to move along the negative task space z-axis; by moving in this direction the end-effector comes into contact with the surface and interaction forces are generated. The controller uses the direction of the interaction force to gauge the surface normal and orients the robot along it. The force control loop regulates the interaction force along the tool space z-axis to the desired set point value of 10 N.

At the end of 10 seconds the robot is commanded to move, following a straight-line path, to the task space position x = 0.25 m, y = 0.2 m and z = -0.375 m. The desired force command along the z-axis of the tool space is maintained at 10 N. The robot follows the parabolic contour to the set reference point.
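For checking the simulation results, the analytic unit normal of the parabola can be computed directly; the controller itself never sees this equation and relies only on the force-derived normal estimate. The sketch below also shows why the intermediate set point lies at (0.25, -0.375): that is the nadir of the parabola, where the slope vanishes.

```python
# Upward-pointing unit normal of the profile z = 6x^2 - 3x in the x-z plane.
import math

def surface_normal(x):
    dzdx = 12.0 * x - 3.0                 # slope of the parabola
    mag = math.sqrt(dzdx * dzdx + 1.0)
    return (-dzdx / mag, 0.0, 1.0 / mag)  # normal of (-dz/dx, 0, 1), normalized
```

At x = 0.25 the slope is zero, z = 6(0.25)^2 - 3(0.25) = -0.375 m, and the normal points straight along the task space +z axis, consistent with the force plots discussed below around the nadir.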
When the robot reaches the desired set point, it is again commanded to move along a straight-line path to the task space position x = 0.5 m, y = 0.2 m and z = 0 m. The desired force command along the z-axis of the tool space is maintained at 10 N.

Figure 5.25 (a)-(i) shows the actual positions and orientations of the manipulator expressed in the task space frame of reference. Note that the position in the x direction is continuously increasing with time, while the position in the z direction first decreases with time, when the robot is following the downgrade of the parabola, and then increases with time, when the manipulator is following the upslope of the parabolic surface. The position in the y direction remains constant at 0.2 m because the robot is not commanded to move along the y direction. Also note that the orientation angles O and T maintain near constant values of -π/2 and 0 respectively. The A angle reflects the changes in the attitude of the end-effector with time. Also note the large initialization error in the A angle, due to the initialization error in the orientation of the end-effector.

Figure 5.26 (a)-(c) shows the interaction forces expressed in the task space frame of reference. Note that, from Figure 5.26 (b), the force along the y direction is constantly at 0. When the manipulator is initially pointing along the negative task space x-axis, the interaction force is regulated to approximately 10 N in that direction, while it has a very small value along the task space z-axis. As the manipulator starts moving along the surface, the orientation of the end-effector changes, and so do the magnitudes of the task space x and z components of the interaction force. When the manipulator is at the nadir of the surface, at approximately t = 40 s, the surface normal is directed along the positive task space z-axis.
Note that at this time the force component along the task space x direction is zero, while the force component along the task space z direction is regulated to 10 N.

Figure 5.27 (a)-(c) shows the actual and desired interaction forces expressed in the tool space frame of reference. From the figure, note that the z component of the interaction force is regulated at 10 N, whereas the x and y components are regulated to approximately zero.

Figure 5.28 shows the profile, in the task space x-z plane, of the location of the end-effector of the manipulator. The dashed line represents the location of the actual surface. We see that the profile traced by the end-effector is exactly parallel to the surface profile. In order to generate an interaction force the surface needs to be deformed; this deformation gives rise to the interaction forces upon contact of the end-effector with the environment [52]. Note that the two curves are nearly exactly parallel. This implies that the end-effector maintains a constant angle with the surface normal, which in this case is regulated to 0 rad.

In order to see if the end-effector is perpendicular to the surface we need to compare the surface normal vector with the z-axis of the tool space. Figure 5.29 plots the x, y and z components of the end-effector z-axis (solid line) and the surface normal vector (dashed line). The components of the end-effector z-axis can be calculated from the orientation angles O, A and T using the following formulae:

    a_x = S_O S_A C_T - C_O S_T
    a_y = -C_O S_A C_T - S_O S_T
    a_z = -C_A C_T

where S and C denote the sine and cosine of the subscripted angle. From the figure it can be seen that the orientation of the end-effector coincides nearly exactly with the surface normal. Hence, it can be concluded that the end-effector remains perpendicular to the surface.

[Figure 5.24: The cylindrical parabolic surface to be tracked.]
[Figure 5.25: The actual positions and orientations of the manipulator.]

[Figure 5.26: The interaction force expressed in the task space frame of reference.]

[Figure 5.27: The interaction force expressed in the tool space frame of reference.]

[Figure 5.28: The surface profile and the actual profile of the end-effector location in the x-z plane in the task space.]

[Figure 5.29: The task space x, y and z components of the surface normal and the z-axis of the end-effector frame.]

The same simulation was carried out with the desired force being time varying, rather than the constant value used in the previous experiment. The desired force was taken to be a time varying sinusoid whose equation is given by:

    f_z = 10 + sin(0.2t)

Figure 5.30 (a)-(f) shows the actual location of the end-effector for this experiment. Figure 5.31 (a)-(c) shows the actual interaction force expressed in the task space frame of reference. This figure is similar to Figure 5.26 except for the sinusoids superimposed on the x and z components of the force profile.
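The end-effector z-axis used for the comparison in Figure 5.29 can be computed directly from the O, A and T angles; a minimal numerical sketch, assuming the component formulas as given in Section 5.4 (S and C denote the sine and cosine of the subscripted angle):

```python
import math

def approach_vector(O, A, T):
    """End-effector z-axis ('approach' vector) computed from the OAT
    angles using the component formulas quoted in Section 5.4."""
    SO, CO = math.sin(O), math.cos(O)
    SA, CA = math.sin(A), math.cos(A)
    ST, CT = math.sin(T), math.cos(T)
    ax = SO * SA * CT - CO * ST
    ay = -CO * SA * CT - SO * ST
    az = -CA * CT
    return ax, ay, az

# At the initial configuration of Section 5.4 (O = -pi/2, A = 0, T = 0)
# the end-effector should point along the negative task-space z-axis.
ax, ay, az = approach_vector(-math.pi / 2, 0.0, 0.0)
```

For any choice of angles the result is a unit vector, and at the initial configuration it evaluates to (0, 0, -1), consistent with the initial pose described in Section 5.4.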
Figure 5.32 (a)-(c) shows the interaction force expressed in the tool space reference frame. Again, this figure is similar to Figure 5.27 except for the sinusoid superimposed along the z direction of the end-effector frame.

[Figure 5.30: The actual position of the end-effector expressed in the task space frame of reference.]

[Figure 5.31: The interaction force expressed in the task space frame of reference.]

[Figure 5.32: The interaction force expressed in the tool space frame of reference.]

5.5 Tracking a Curve in Three Dimensions

Extending the concept of two-dimensional curve tracking further, the tool space hybrid force/position controller was simulated to track a three-dimensional contour. The contour was taken as a sector of a hollow sphere having the equation:

    x^2 + y^2 + z^2 = 0.5^2

The radius of the sphere is 0.5 m and it is centered at the origin of the task space. The sphere is hollow, the robot is assumed to be inside it, and the task is to track the inner surface of the sphere from within.
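For the sphere, unlike the parabola, the surface normal at any contact point follows directly from the contact position itself; a small sketch (assuming the inner surface is tracked, so the normal points from the surface toward the center):

```python
import math

def inner_sphere_normal(p):
    """Unit normal of the sphere x^2 + y^2 + z^2 = r^2 at surface
    point p, directed into the hollow interior (toward the center),
    which is the side the robot approaches from."""
    mag = math.sqrt(sum(c * c for c in p))
    return tuple(-c / mag for c in p)

# At the contact point (0, 0.4, -0.3), which lies on the 0.5 m sphere,
# the inward normal is (0, -0.8, 0.6).
n = inner_sphere_normal((0.0, 0.4, -0.3))
```

The controller itself never evaluates such a formula; it infers the same direction from the sensed interaction force, but the closed form is useful for checking the simulation results.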
For this simulation, the robot comes into contact with the sphere at an angle with the local surface normal. The initial position of the robot in the task space is x = 0 m, y = 0.4 m, z = -0.3 m, O = -π/2 rad, A = 0.321751 rad, and T = 0.901832 rad. At the end of 10 seconds the manipulator is commanded to move, following a straight-line path, to the task space position x = 0.4 m, y = 0.25 m and z = -0.1658 m. The desired force command along the z-axis of the tool space is maintained at 10 N. The robot follows the spherical contour to the set reference point.

Figure 5.33 shows a three-dimensional plot of the part of the spherical surface that the robot is supposed to track. Figure 5.34 (a)-(i) shows the desired and actual position and orientation of the end-effector while it is moving along the surface of the sphere. The solid line represents the actual position and the dashed line represents the desired position. Note that the desired positions are generated as a straight line from the initial point to the final reference point without any knowledge about the shape of the surface.
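The straight-line reference described here carries no knowledge of the sphere; a sketch of such a reference generator (the 10-50 s move window and the helper name are assumptions for illustration):

```python
def straight_line_reference(p0, p1, t, t0, t1):
    """Linear interpolation between task-space points p0 and p1 over
    the interval [t0, t1]; the reference ignores the surface shape."""
    s = min(max((t - t0) / (t1 - t0), 0.0), 1.0)
    return tuple(a + s * (b - a) for a, b in zip(p0, p1))

p0 = (0.0, 0.4, -0.3)        # initial contact point (on the sphere)
p1 = (0.4, 0.25, -0.1658)    # final reference point (on the sphere)
mid = straight_line_reference(p0, p1, 30.0, 10.0, 50.0)
# Both endpoints satisfy x^2 + y^2 + z^2 = 0.25, but the straight-line
# midpoint does not: it lies on a chord inside the sphere. It is the
# force loop, not the position reference, that keeps the tool on the
# surface.
```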
Figure 5.35 (a)-(c) shows the plots of the interaction forces expressed in the task space frame of reference. Figure 5.36 (a)-(c) shows the plots of the actual and desired x, y and z components of the interaction forces expressed in the tool space frame of reference. Note that the components of the interaction forces along the x and y axes are regulated to zero, while the component of the interaction force along the z-axis of the tool space is regulated to 10 N.

In this case, all the points on the surface of the sphere satisfy the equation x^2 + y^2 + z^2 = 0.25. Now, for the end-effector to be in contact with the sphere surface, the end-effector x, y and z coordinates must satisfy the same equation. In order to prove that the tip of the end-effector is in contact with the surface of the sphere, we need a metric that indicates whether the tool tip is inside or outside the sphere. Hence we can use m = x^2 + y^2 + z^2 as a metric for this example. Figure 5.37 plots this metric; the dashed line is plotted at m = 0.25 m^2. We see that the two lines are parallel, implying that the end-effector maintains a constant angle with the surface normal, which in this case is 0 rad.

In order to see if the end-effector is perpendicular to the surface we need to compare the surface normal vector with the z-axis of the tool space. Figure 5.38 (a)-(c) plots the x, y and z components of the end-effector z-axis (solid line) and the surface normal vector (dashed line). Note that the two curves nearly coincide. This implies that the end-effector z-axis remains perpendicular to the surface of the sphere. Thus the interaction force is directed along the end-effector z-axis and can be completely regulated.

[Figure 5.33: The spherical surface that is to be tracked.]

[Figure 5.34: The actual and desired positions and orientations expressed in the task space frame of reference.]

[Figure 5.35: The actual interaction force expressed in the task space frame of reference.]
[Figure 5.36: The actual and desired interaction forces expressed in the tool space frame of reference.]

[Figure 5.37: The metric m = x^2 + y^2 + z^2.]

[Figure 5.38: The task space x, y and z components of the surface normal and the z-axis of the end-effector frame.]

Chapter 6

6.1 Summary

In Chapter 3 the concept of control of robot manipulators in the robot tool space was introduced. The simulation studies in Section 5.2 showed that it yields results similar to those of a task space controller for tasks involving position control.

In Chapter 4 hybrid force/position control was discussed. The inadequacy of the task space hybrid force/position controller for controlling the force in an arbitrary direction in the task space was pointed out. It was also noted that such a control strategy is not robust to interaction forces in arbitrary directions, and that the build-up of interaction forces could result in damage to the manipulator or the environment. To overcome this limitation of the task space hybrid control strategy, the concept of hybrid force/position control in the robot tool space was introduced in Section 4.3. Stability analyses for robot tool space control and for the tool space hybrid force/position controller were presented in Section 4.4.

Simulation studies for the tool space hybrid force/position controller are presented in Chapter 5. In Section 5.3 the task space and tool space strategies for hybrid force/position control were compared and contrasted. The task to be performed was tracking a horizontal surface while simultaneously controlling the normal interaction force with the surface.
Similar results were obtained for both the task space and tool space control strategies. Initializing the manipulator at an angle with the surface normal for the tool space controller showed that the tool space control strategy is robust to errors in the initialization of the manipulator.

Further, in Section 5.4 the task of tracking a two-dimensional contour in a known plane was simulated. The contour to be tracked was a cylindrical parabola with its axis along the task space y-axis. The robot was initialized so that the end-effector was aligned with the task space z-axis. The robot was able to successfully track the contour without any prior knowledge about it, using just the measured normal interaction force. In Section 5.5 the tool space hybrid force/position control approach was used to track an unknown three-dimensional contour.

6.2 Conclusions and Future Work

The strategy of tool space hybrid force/position control opens up the possibility of controlling interaction forces that are constantly changing direction. Using this strategy the manipulator can interact with poorly structured and unknown environments. Further, applications such as surface tracking, grinding, polishing, milling and deburring, which require the manipulator to come into contact with the workpiece, can now be carried out on workpieces that are not pre-specified. This in turn improves productivity by eliminating the down time spent in extensive path planning and reprogramming of the manipulator when dealing with different workpieces. Also, using this strategy, the force exactly along the normal to the surface can be regulated. This was not possible with the task space hybrid force/position controller, which could regulate only the component of the force directed along the task space axes where force control had been planned.
Further, unlike the task space hybrid force/position control, this strategy is robust to impact along directions where motion control has been planned.

Though preliminary experimental results are provided in Chapter 5, many improvements and modifications need to be carried out on the proposed controller. First, the restriction imposed by the assumption of frictionless contact needs to be tackled. Adaptive algorithms that can estimate the friction coefficients of surfaces need to be implemented; these would help the controller estimate the exact direction of the surface tangent even in the presence of friction. Also, for improved tracking performance, better force control algorithms ([73]) need to be implemented. Further, as mentioned in [71] and [72], implementation in cutting and polishing operations requires regulating the interaction force at a certain angle with the surface normal. The tool space hybrid controller could be modified to control the interaction forces at a specific angle to the contact surface.

The current proposed strategy does not take into account the task of contact transition. It is assumed that the manipulator comes into contact with the environment at extremely low velocities, and thus the need for contact transition control does not arise. For practical implementation of this controller, however, a scheme for contact transition needs to be adopted. Many schemes for contact transition have been proposed in the literature ([25], [53]); the possibility of adapting them to the proposed controller needs to be looked into.

In conclusion, this thesis presents a new strategy for controlling the constrained motion of manipulators in contact with environments whose shape is not known a priori. It has been shown experimentally that hybrid force/position control in the robot tool space can successfully be used to control the constrained motion of manipulators in contact with unknown environments.
Appendix A

Orientation Singularity Avoidance

In the joint level control design, the measurements of the joint angles are used directly in the feedback loop, so no forward kinematics equations need to be solved. In the task level control design, the joint angles and joint velocities have to be converted to positions and velocities in Cartesian space, which is a three-dimensional Euclidean space, through the forward kinematics and the Jacobian matrix. For the tool level control design too, the position, orientation and velocity of the end-effector are expressed in a Cartesian frame attached to the end-effector, and they are derived from the joint angles and velocities through the forward kinematics, coordinate transformations and the tool Jacobian matrix. The output position of the end-effector consists of three components for linear position and three components for orientation in Cartesian space.

It is known that the hand frame of the robot is related to a reference coordinate frame by a homogeneous transformation T:

        [n_x  s_x  a_x  p_x]
    T = [n_y  s_y  a_y  p_y]  =  [n  s  a  p]
        [n_z  s_z  a_z  p_z]     [0  0  0  1]
        [ 0    0    0    1 ]

where 'n' is the normal vector of the hand, 's' is the sliding vector of the hand, 'a' is the approach vector of the hand and 'p' is the position vector of the hand, which points from the origin of the base coordinate system to the origin of the hand coordinate system. Figure A.1 shows the four vectors n, s, a and p. [n s a] forms a rotation matrix which represents the orientation of the end-effector, and 'p' is the translation vector representing the position of the end-effector.

[Figure A.1: The n, s, a and p vectors of the tool coordinate frame.]

Although the rotation matrix provides a convenient way to describe the orientation, nine elements are required to completely describe the orientation of a rotating rigid body, whereas the orientation of a rigid body in space can be described using only three elements.
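The homogeneous transformation above can be assembled and applied directly; a minimal numerical sketch (the vector values are illustrative, not taken from the thesis):

```python
import numpy as np

def hand_transform(n, s, a, p):
    """Assemble the homogeneous transformation T = [n s a p; 0 0 0 1]
    from the normal, sliding, approach and position vectors of the hand."""
    T = np.eye(4)
    T[:3, 0] = n
    T[:3, 1] = s
    T[:3, 2] = a
    T[:3, 3] = p
    return T

def to_base(T, point_in_hand):
    """Map a point expressed in the hand frame to the base frame."""
    return (T @ np.append(point_in_hand, 1.0))[:3]

# Illustrative pose: the approach vector points along the negative base
# z-axis and the hand origin sits at p.
n = [1.0, 0.0, 0.0]
s = [0.0, -1.0, 0.0]
a = [0.0, 0.0, -1.0]
p = [0.4, 0.2, 0.3]
T = hand_transform(n, s, a, p)
y = to_base(T, [0.0, 0.0, 0.1])   # a point 0.1 m out along the tool z-axis
```

Since [n s a] is a rotation matrix, the upper-left 3x3 block of T is orthonormal, and the mapped point is simply p + 0.1 a.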
Thus the rotation matrix representation introduces redundancy into the orientation representation. Moreover, the end-effector has only six degrees of freedom, so only six variables can be individually controlled. Hence, instead of using the three vectors of the rotation matrix, three angles can be used to represent and control the orientation.

[Figure A.2: Coordinate assignments for the PUMA 560.]

Furthermore, the elements of the rotation matrix in the transformation must be expressible in terms of three angles. The output velocity of the end-effector, for both task level control and tool level control, consists of three linear velocity components and three angular velocity components in Cartesian space. The task space velocity is related to the joint velocities by the Jacobian matrix:

    [v]
    [ω]      = J(q) q̇
        task

The tool Jacobian, S(q), relates the Cartesian tool space velocity to the joint velocities:

    [v]
    [ω]      = S(q) q̇
        tool

In the above equations q̇ is the joint velocity, v is the linear velocity in Cartesian space and ω is the angular velocity in Cartesian space. Here we note that the Cartesian space is a three-dimensional Euclidean space. The linear velocity and acceleration vectors in a Euclidean space are the first and second time derivatives of the position vector 'p'. It is known, however, that the angular velocity ω is not the derivative of any intuitive orientation representation. Therefore the representation of the orientation requires extra consideration.

A.1 The Orientation Representation of Unimation

The orientation representation for a Unimation robot is quite an intuitive one, called the OAT angles, where O represents Orientation, A represents Altitude and T represents the Tool angle. They are defined as:

O ∈ (-π, π): the angle between the world Y axis and the projection of the vector 'a' on the world XY plane.

A ∈ (-π/2, π/2): the angle formed between the vector 'a' and a plane parallel to the world XY plane.

T ∈ (-π, π): the angle formed between the vector 's' and a plane parallel to the world XY plane.

The advantage of this definition lies in ease of visualization. However, there is no direct relationship between the rotation matrix and these angles. Hence they cannot be used in task or tool level orientation control.

A.2 Euler Angles and Euler OAT

Euler angles describe the orientation of a rigid body with respect to a fixed reference frame. They correspond to a sequence of rotations, resulting in the rotation matrix [n s a]. There are many types of Euler angles. One type is defined as follows:

1. A rotation of φ about the Z axis.
T e (—7z,72): the angle formed between the vector ’5’ and a plane parallel to the world XY plane. The advantage of this definition is in case of visualization. However, there is no direct relationship between the rotation matrix and the angles. Hence it cannot be used in task or tool level orientation control. A.2 Euler Angles and Euler OAT Euler angles describe the orientation of a rigid body with respect to a fixed reference frame. It corresponds to a sequence of rotations, resulting in the rotation matrix [n s a]. There are many types of Euler angles. One type is defined as follows: 1. A rotation of ¢ angle about the Z axis. 103 2. A rotation of 6 angle about the rotated U axis. 3. Finally a rotation of 1,11 angle about he rotated W axis. The resultant Eulerian rotation matrix is: Row = Rz.¢RU.9RW,V - [58] used one type of Euler angles for the PUMA 560 robot arm and considered it as the OAT solution of Unimation. However, it has been proved to be incorrect both by direct derivation and experimentally. The definition given by [58] is actually a different definition. The definitions of 0 and A are the same, but the definition of T is different. The Unimation OAT is not Euler angles and does not have a relation to the hand transformation matrix. Let’s call Fu’s angles Euler OAT to distinguish between the two definitions. The relationship between the hand transformation and the Euler OAT angles is given by: n,[ s,r ax 0 1 0 ny sy ay = R10! 0 0 -1 RM. RM . nz .92 a2 —1 0 0 104 .e 'e e O - A measurement at the angle formed between the HORLD Y axis and a projection of the TOOL 2 on the HORLD X! plane. A - A neaautenent o! the angle (cried between the TOOL 2 and a plane parallel to the HORLD XY plane. ‘T - A measurement of the angle Earned between the TOOL Y and a plane parallel to the HORLD XY plane. Figure A3: The Unimation OAT definition. 105 The tool coordinate system is aligned with the base coordinate system of the robot initially where 0 = A = T = 0. 
Figure A.3 and Figure A.4 represent the Unimation OAT definition and the Euler OAT definition respectively. The difference between the two definitions is evident from the figures. The kinematic characterization of the PUMA 560 robot derived in [68] is based on the Euler OAT of [58], though originally it was considered to be the Unimation OAT according to [58]. The orientation control used in our laboratory is based on the derivation of the OAT Jacobian and other relations presented in [60]. Fortunately, the misunderstanding does not affect our orientation control.

However, it is easy to see that when the tool vector 'a' is perpendicular to the XY plane, the Euler OAT has no definition. Thus when the manipulator is perpendicular to the XY plane the Euler OAT cannot be defined. Also, for tool level control, the actual position of the end-effector is always aligned with the tool space axes. Thus, irrespective of the position of the manipulator, the end-effector vector 'a' is always perpendicular to the tool XY plane, and the Euler OAT has no definition for the actual position of the end-effector in the tool frame of reference. A way needs to be found to overcome this singularity.

[Figure A.4: The Euler OAT angles system.]

A.3 Singularity Avoidance

From the above discussion it is clear that the occurrence of the singularity mentioned in the previous section depends on the definition of the set of angles. The straightforward solution to this problem would be to define another set of angles which is not affected by this singularity. However, this solution is not realistic, and it is time consuming, because of the need to re-derive all the characteristics of the PUMA 560 robot arm kinematics related to the orientation definitions, and to re-implement them. Furthermore, singularities may occur in other directions, which might also be used frequently in future experiments. The best solution for this problem is to relocate the singularity to a less frequently used orientation. To see this, let us define:

                         [0  0  1]
    [n' s' a'] = [n s a] [0 -1  0] = [a  -s  n].
                         [1  0  0]

Figure A.5 shows the definition of [n' s' a'] with respect to the original [n s a]. After some elementary manipulations, the following can be arrived at:

    T'_e = atan2(s'_z, -n'_z) = atan2(-s_z, -a_z)
    A'_e = atan2(-a'_z, -n'_z CT'_e + s'_z ST'_e) = atan2(-n_z, -a_z CT'_e - s_z ST'_e)
    O'_e = atan2(n'_y ST'_e + s'_y CT'_e, n'_x ST'_e + s'_x CT'_e) = atan2(a_y ST'_e - s_y CT'_e, a_x ST'_e - s_x CT'_e)
The best solution for this problem is to relocate the singularity to a less frequently used orientation for this particular problem. To see this let’s define: O 0 l [n' s' a']=[n s a]0 —1 0 =[a -s n]. l 0 0 Figure A.5 shows the definition of [n’ s’ a'] with respect to the original [n s a]. After some elementary manipulations, the following can be arrived at: T; = atan2(s; ,-n;) = atanZ(-sz ,-az) A: = atan2(-a; ,-n;CT,' + szSTc’) = atanZ(-nz ,azCT: - szSTe’) 0: = atan2(n; ST: + s;CT¢',n;ST¢' + siCTe') = atan2(ayST¢' - syCTe',axST¢' - stTc') 108 ,where ST; and CT,’ are the sine and cosine of the tool angle T; respectively. It is easy to see that the singularity is relocated to an orientation that is not frequently used. original 3' S Figure A5: The definition of [n' s’ a'] 109 Appendix B The Tool J acobian In task level control the Jacobian matrix J is used to convert the joint velocities to the Cartesian space velocities. Similarly for tool level control the tool Jacobian matrix, S , is used to convert the joint velocities to the Cartesian tool space velocity. This appendix presents the equations used to calculate the tool Jacobian matrix for the Puma 560 robot arm. Cartesian space velocities consist of 2 components viz. the linear velocity and angular velocity. The linear velocity is the first derivative with respect to time of the linear position. However, the angular velocity allows no such intuitive definition. The task Jacobian matrix for the PUMA 560 robot arm is given in [68]. This Jacobian matrix relates the task space velocities with the joint space velocities as: . i? v . y=[ ]=[ ]=J(q)q (El) 0) a) . B.2 The Forward Kinematics The forward kinematics of a robot manipulator relate the Cartesian position and orientation of the end-effector of the manipulator with respect to the base coordinate 110 frame to the measured joint angles. Form [68] the forward kinematics of the PUMA 560 manipulator are given as follows: =0=nsap . A. [0...] 
where the column vectors are given as: "x CI[C23 (C4CSC6 " S4S6) " stsscol' Sr[S4C5C6 + C456] n = S,[C23(C4C5C‘5 -S,,56)-Sz355C,5]+C,[S,,C5C(5 + C456] "2 -S23(C4C5C6 -S456)—C23SSC6 C,[-C,, ((3,,(3556 + 5,0, ) + 523555.1- 31 [-S,C,s6 + C4C6] S, [-C,, (C,C,s,5 + s,c,) + 523555.] + C, [—5,C,s6 + C4C6] 523(C,C,s6 + 54C, ) + C3555. 9: (a (a N ~< a ll l—’fi Cr (C23C4SS + 3236‘s) " 515455 S.+ 6.5.3. - SZ3C4S5 + C23C5 Q a S: N V H II fl s,[d,(c,,c,s, + 5,,C,) + d,s,, + .1202 + a,c,,] + C,[d,s,s, + d3] p, [(3, [d6 (C,,C,sS + s,,C5) + (1,5,, + azCz + a,C,, 1 - s, [d,s,s, + d3] d6(C23C, - s,,c,s,) + d,C,, -a,s, -a,s,, where S1 and Cl represent the sine and cosine of the joint angle ql and so on. a2,a3,ci3 ,d4 and d6 are the link parameters as described in [68]. It is important to note that the forward kinematics convert the coordinates of a point expressed in the end-effector frame to the coordinates of that same point expressed in the base frame with the help of the joint angles. The joint angles and the link 111 parameters help to express the Cartesian position and orientation of the coordinate frame attached to the 1"” link with respect to that of the (i—l)"' link. The position and orientation of the end-effector in the n'" coordinate attached to the end-effector is given by I 4,“. Thus T gives the position and orientation of the end-effector in the base coordinate frame. Let [ha]4x4 be the homogenous coordinates representing position and orientation of a point in the end-effector coordinate frame. The homogenous coordinates, [ya Lu, , of that point in the base frame are given by: [ya ]4x4 : T4x4lha14x4 Thus the forward kinematics can be used to express the position and orientation of a point expressed in the end-effector frame. B.3 Tool Space Velocity The task space and the tool space are 3 dimensional Euclidean spaces. Thus one would require 6 dimensional vectors to represent the position and velocity in these spaces. 
For the position vector, three dimensions represent the linear position and the other three represent the orientation. For the velocity, three dimensions represent the linear velocity and the other three the angular velocity.

Let T be the homogeneous transformation that relates the homogeneous tool space coordinates [h]_{4x1} to the homogeneous task space coordinates [y]_{4x1}. Equation B.2 gives the homogeneous transformation T. The relation between the velocity in the task space and the velocity in the tool space can be expressed with the help of T as [57]:

               [n_x  n_y  n_z  (p×n)_x  (p×n)_y  (p×n)_z]
    [v]        [s_x  s_y  s_z  (p×s)_x  (p×s)_y  (p×s)_z] [v]
    [ω]      = [a_x  a_y  a_z  (p×a)_x  (p×a)_y  (p×a)_z] [ω]          (B.3)
        tool   [ 0    0    0     n_x      n_y      n_z  ]     task
               [ 0    0    0     s_x      s_y      s_z  ]
               [ 0    0    0     a_x      a_y      a_z  ]

Thus ẋ = A(q) ẏ, where ẋ and ẏ represent the velocities in the tool and task spaces respectively. Thus the velocity in the tool space can be derived from the velocity in the task space. To relate the acceleration in the tool space to the acceleration in the task space, the above equation is differentiated once with respect to time:

    [v̇]                [v]            [v̇]
    [ω̇]      = Ȧ(q, q̇) [ω]     + A(q) [ω̇]            (B.4)
        tool               task           task
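The 6x6 transformation of Equation B.3 can be built directly from the n, s, a and p vectors; a minimal sketch (the example pose and twist are illustrative only):

```python
import numpy as np

def twist_transform(n, s, a, p):
    """6x6 matrix of Equation B.3 mapping the task-space twist (v, w)
    to the tool-space twist, built from the hand vectors n, s, a, p."""
    R = np.column_stack([n, s, a])      # rotation [n s a]
    A = np.zeros((6, 6))
    A[:3, :3] = R.T
    A[:3, 3:] = np.array([np.cross(p, n), np.cross(p, s), np.cross(p, a)])
    A[3:, 3:] = R.T
    return A

# Illustrative pose: tool frame aligned with the base frame, offset 0.5 m
# along the base x-axis.
n = np.array([1.0, 0.0, 0.0])
s = np.array([0.0, 1.0, 0.0])
a = np.array([0.0, 0.0, 1.0])
p = np.array([0.5, 0.0, 0.0])
A = twist_transform(n, s, a, p)

# A pure task-space rotation about z produces, in the tool frame, the
# linear velocity w x p at the tool origin plus the same angular velocity.
twist_task = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])   # (v, w)
twist_tool = A @ twist_task
```

With this pose the mapped twist is (0, 0.5, 0) m/s of linear velocity and (0, 0, 1) rad/s of angular velocity, i.e. exactly ω × p together with the unchanged ω, as expected for a rigid-body twist.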
Its derivation is based on the notion of infinitesimal translations and rotations of the tool (Whitney, 1972 [74]). The tool space controller also requires time derivative of the tool Jacobian in order to decouple the tool space axes. The expression for the derivative of the tool Jacobian can be simply derived using the Equation B.5. Differentiating 3(q) once with respect to time: 3mm =2(q)J(q.q)+A‘(q.é)J(q) (3.6) The expressions for 1(q,cj) and j(q,q‘) can be easily derived by differentiating 1(q) and J (q) once with respect to time respectively. 114 [1] [2] [3] [4] [5] [6] [7] [8] REFERENCES D.E. Whitney, “Historical Perspective and State of the Art in Robot Force Control,” International Journal of Robotics Research, 6(1):3-14, Spring 1987 R. Goertz, “Fundamentals of general-purpose remote manipulators,” Nucleonics 10:36-42 R. Rothchild and R. Mann, “An EMG-controlled force sensing proportional rate elbow”, Proceedings 1966 Symposium on Biomedical Engineering. H. Raibert and J. Craig, “Hybrid position/force control of manipulators,” Transactions of the ASME, Journal of Dynamical Systems, Measurement and Control. 102:126-133, 1981. N. Hogan, “Impedance control: An approach to manipulation: Part i - Theory, Part ii — Implementation and Part iii — Applications,” Journal of Dynamical Systems, Measurement and Control, 107:1-24, March 1985. R. Colbaugh, H. Seraji and K. Glass, “Impedance control for dexterous space manipulators,” Proceedings of the 31St Conference on Decision and Control, 1992 A. Hace, K. Jezemik and S. Uran “Robust Impedance Control,” Proceedings of the 1998 IEEE International Conference on Control Applications Sept 1998. M. Mason, “Compliance and force control for computer controlled manipulators,” IEEE Transactions on Systems, Man and Cybernetics, SMC — 11(6):418-432, 1981. 115 [9] [10] [11] [12] [13] [14] [15] K Shin and C. 
Lee, “Compliant control of robotic manipulators with resolved acceleration,” Proceedings of 24th Conference On Decision and Control, Dec. 1985. T. Stepien, L. Sweet, M. Good and M Tomizuka, “Control of tool/workpiece contact force with application to robotic deburring,” IEEE Journal of Robotics and Automation Vol. RA-3, No. 1, Feb. 1987. R. Vlope and P. Khosla, “A theoretical and experimental investigation of impact control of manipulators,” International Journal of Robotics Research 12(4):352- 365, August 1993. R. Anderson and M. Spong, “Hybrid Impedance Control for Robot Manipulators,” IEEE Journal of Robotics and Automaiton V014 No. 5 Oct 1988. R. Fetherstone, S Thiebaut and O. Khatib, “A general contact model for dynamically decoupled force/motion Control” Proceedings IEEE International Conference on Robotics and Automation 1999. T. Yoshikawa, “Dynamic hybrid position/force control of robot manipulators - description of hand constraints and calculation of joint driving force,” IEEE Transactions on Robotics and Automation, RA -3(5):386-392, October 1987. O. Khatib, “A unified approach for motion and force control of robot manipulations. The Operational Space Formulation,” IEEE Journal of Robotics and Automation, RA — 3(1): 43-53, February 1987. 116 [l6] [17] [18] [19] [20] [21] E22] H. Kazerooni, B. Waibel and S. Kim, “On the stability of robot compliant motion control: Theory and Experiments,” Transactions of ASME, Journal of Dynamical Systems, Measurements and Control, 112:417-425, 1990. N. McClamroch and D. Wang, “Feedback stabilization and tracking of constrained robots,” IEEE Transactions on Automatic Control, 33(5): 419-426, 1988. S. Schneider and R. Cannon, “Object impedance control for cooperative manipulation: Theory and experimental results,” IEEE Transactions on Robotics and Automation, 8(3): 383-395, June 1992. T. J. Tarn, A. K. 
Bejczy and Ning Xi, “Hybrid position/force control of redundant robot with consideration of motor dynamics,” 13th Annual Allerton Conference on Communication, Control and Computing, pp. 702-711, 1992. R. Zhen and A. Goldenberg, “An adaptive approach to constrained robot motion control,” IEEE International Conference on Robotics and Automation Vol. 2, 1995. B. Yao and M. Tomizuka, “Robust adaptive constrained motion and force control of manipulator with guaranteed transient performance,” IEEE International Conference on Robotics and Automation, Vol. 1, 1995. L. Whitcomb, S. Arimoto and F. Ozakki, “Experiments in adaptive model-based force control,” IEEE International Conference on Robotics and Automation, 1995. 117 [23] [24] [25] [26] [27] [28] [29] D. Lokhorst and J. Mills, “Implementation of a discontinuous control law on a robot during collision with a stiff environment,” IEEE Intemational Conference on Robotics and Automation, 1990. P.Akella, V. Parra-Vega, S. Arimoto and K. Tanie, “Discontinuous model-based adaptive control for robots executing free and constrained tasks,” IEEE International Conference on Robotics and Automation, 1994. J. Hyde and M. Cutkosky, “Contact transition control: An experimental study,” IEEE International Conference on Robotics and Automation, 1993. B. Brogliato, P. Orhant and R. Lozano, “On the transition phase in robotics-part 1: Impact models and dynamics, part 2: Control,” Proceedings of the IAFC Symposium on Robot Control, Vol. 2, pp. 562-576, Italy, 1994. G. T. Marth, T. J. Tarn and A. K. Bejczy, “Stable phase transition control for robot arm motion,” IEEE International Conference on Robotics and Automation, 1995. D. Chiu and S. Lee, “Robust jump impact controller for manipulators,” Proceedings of the 1995 IEEE/RS] Intl. Conf. On Intelligent Robots and Systems, Vol.1, 1995. B. Bona and M. 
Indri, “Exact Decoupling of the force-position control using the operation space formulation,” Proceedings of the 1992 IEEE International Conference on Robotics and Automation, May 1992. 118 [30] [31] [32] [33] [34] [35] [36] [37] A. Fedele, A. Fiorettr, C. Manes, G. Ulivi, “On-line processing of position and force measures for contour identification and robot control,” IEEE International Conference on Robotics and Automation, 1993. H. Bruynincks and J. De Schutter, “A systematic derivation of online motion constraint identification equations for model based compliant motion,” IEEE Conference. on Robotics and Automation, 1992. F. Lange and G. Hirzinger, “Iterative self improvement of force feedback control in contour tracking,” IEEE Conference on Robotics and Automation, 1992. E. Goddard, Y. Zheng and H. Hemami, “Dynamic Hybrid velocity/force of compliant motion over globally unknown objects,” IEEE Transactions. on Robotics and Automation, 1987. C. An and J, Hollerbach, “Kinematic stability issues in force control,” IEEE International Conference on Robotics and Automation, Vol.2, pp. 897-903, 1987. C. An and J, Hollerbach, “Dynamic stability issues in force control,” IEEE International Conference on Robotics and Automation, Vol.2, pp. 897-903, 1987. W. Fisher and M. S. Mujtaba, “Sufficient stability condition for hybrid position/force control,” IEEE International Conference on Robotics and Automation, 1992. J. Wen and S. Murphy, “Stability analysis of position and force control for robot arms,” IEEE Transactions on Automatic Control, Vol. 36, No. 3, March 1991. 119 [38] [39] [40] [41] [42] [43] [44] [45] W. Townsend and J Salisbury, “The effect of coulomb friction and stiction on force control,” IEEE International Conference on Robotics and Automation, 1987 . T. Yabuta, A.J. Chona and G. Beni, “On the asymptotic stability of the hybrid position force scheme for robot manipulators,” Proceedings of IEEE Conference on Robotics and Automation, 1988. 
[40] N. Hogan, "On the stability of manipulators performing contact tasks," IEEE Journal of Robotics and Automation, 4(6):677-686, December 1988.
[41] T. Chung, "An inherent stability problem in Cartesian compliance and an alternative structure of compliance control," IEEE Transactions on Robotics and Automation, 7(1), 1991.
[42] S. Eppinger and W. Seering, "Three dynamic problems in robot force control," IEEE Transactions on Robotics and Automation, 8(6), December 1991.
[43] E. Paljug, V. Kumar, T. Sugar and X. Yun, "Some important considerations in force control implementations," Proceedings of the IEEE Conference on Robotics and Automation, pp. 1270-1275, 1992.
[44] S. Eppinger and W. Seering, "Understanding bandwidth limitations in robot force control," Proceedings of the IEEE Conference on Robotics and Automation, 1987.
[45] W. Fisher and M. S. Mujtaba, "Hybrid position/force control: A correct formulation," International Journal of Robotics Research, Vol. 11(4), August 1992.
[46] W. McCormick and H. Schwartz, "An investigation of impedance control for robot manipulators," International Journal of Robotics Research, Vol. 12(5), October 1993.
[47] R. Volpe and P. Khosla, "A theoretical and experimental investigation of explicit force control strategies for manipulators," IEEE Transactions on Automatic Control, Vol. 38(11), November 1993.
[48] D. Meer and S. Rock, "Coupled system stability of flexible-object impedance control," IEEE Conference on Robotics and Automation, pp. 1839-1845, 1995.
[49] J. Merlet, "C-surface applied to the design of a hybrid force/position robot controller," IEEE Conference on Robotics and Automation, pp. 1055-1059, 1987.
[50] J. Duffy, "The fallacy of modern hybrid control theory that is based on 'orthogonal complements' of twist and wrench spaces," Journal of Robotic Systems, 7(2), 1990.
[51] M. Vukobratovic, "How to control robots interacting with dynamic environments," Journal of Intelligent and Robotic Systems, Vol. 19, pp. 119-152, 1997.
[52] Yunying Wu, "Force regulation and contact transition control," Ph.D. thesis, Washington University, Saint Louis, 1996.
[53] T. J. Tarn, Y. Wu, Ning Xi and A. Isidori, "Force regulation and contact transition control," IEEE Control Systems Magazine, Vol. 16(1), February 1996.
[54] E. Degoulange, P. Dauchez and F. Pierrot, "Force control of an industrial PUMA 560 under environmental constraints: Implementation issues and experimental results," IEEE International Conference on Robotics and Automation, 1993.
[55] H. Bruyninckx, J. De Schutter and S. Dutre, "The 'reciprocity' and 'consistency' based approaches to uncertainty identification for compliant motion," IEEE International Conference on Robotics and Automation, 1993.
[56] P. Sinha and A. Goldenberg, "A unified theory for hybrid control of manipulators," IEEE International Conference on Robotics and Automation, 1993.
[57] R. P. Paul, "Robot Manipulators: Mathematics, Programming and Control," The MIT Press, 1981.
[58] K. S. Fu, R. C. Gonzalez and C. S. G. Lee, "Robotics: Control, Sensing, Vision and Intelligence," McGraw-Hill Book Company, New York, 1987.
[59] R. J. Schilling, "Fundamentals of Robotics: Analysis and Control," Prentice-Hall, Inc., Englewood Cliffs, 1990.
[60] T. J. Tarn, A. K. Bejczy, A. Isidori and Y. Chen, "Nonlinear feedback in robot arm control," Proceedings of the 23rd Conference on Decision and Control, December 1984.
[61] Y. F. Zheng and H. Hemami, "Mathematical modeling of a robot collision with its environment," Journal of Robotic Systems, Vol. 2(3), 1985.
[62] "Unimate Industrial Robots and Cartesian Coordinates," 397K1, January 1983.
[63] Y. Nakamura and H. Hanafusa, "Inverse kinematic solutions with singularity robustness for robot manipulator control," Journal of Dynamic Systems, Measurement and Control, Vol. 108, pp. 163-171, September 1986.
[64] M. Kircanski, N. Kircanski, D. Lekovic and M. Vukobratovic, "An experimental study of resolved acceleration control of robots at singularities: Damped least-squares approach," Journal of Dynamic Systems, Measurement and Control, Vol. 119, pp. 97-101, March 1997.
[65] S. Chiaverini, B. Siciliano and O. Egeland, "Review of the damped least-squares inverse kinematics with experiments on an industrial robot manipulator," IEEE Transactions on Control Systems Technology, Vol. 2(2), pp. 123-134, June 1994.
[66] D. N. Nenchev, "Tracking manipulator trajectories with ordinary singularities: A null space approach," International Journal of Robotics Research, Vol. 14(4), 1995.
[67] Jindong Tan and Ning Xi, "Hybrid system design for singularityless task level robot controllers," Proceedings of the IEEE International Conference on Robotics and Automation, April 2000.
[68] T. J. Tarn, A. K. Bejczy, G. T. Marth and A. K. Ramadorai, "Kinematic characterization of the PUMA 560 manipulator," Robotics Laboratory Report SSM-RL-91-15, December 1991.
[69] G. Antonelli, S. Chiaverini and N. Sarkar, "An explicit force control scheme for underwater vehicle-manipulator systems," Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 136-141, 1999.
[70] P. Fraisse, F. Pierrot and P. Dauchez, "Virtual environment for robot force control," IEEE International Conference on Robotics and Automation, 1993.
[71] T. Kurfess, D. Whitney and M. Brown, "Verification of a dynamic grinding model," Journal of Dynamic Systems, Measurement and Control, Vol. 110, pp. 403-409, December 1988.
[72] A. Rodic and M. Vukobratovic, "Regulator of minimal variances in hybrid control strategy of manipulation robots," IEEE International Conference on Robotics and Automation, May 1992.
[73] J. De Schutter, "Improved force control laws for advanced tracking applications," IEEE International Conference on Robotics and Automation, 1988.
[74] D. E. Whitney, "The mathematics of coordinated control of prosthetic arms and manipulators," Journal of Dynamic Systems, Measurement and Control, pp. 303-309, 1972.