This is to certify that the dissertation entitled

CONTROL OF MOBILE MANIPULATORS

presented by

JINDONG TAN

has been accepted towards fulfillment of the requirements for the Ph.D. degree in Electrical and Computer Engineering.

Major professor

Date 05/08/07

CONTROL OF MOBILE MANIPULATORS

By

Jindong Tan

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Electrical and Computer Engineering

2002

ABSTRACT

CONTROL OF MOBILE MANIPULATORS

By Jindong Tan

This dissertation presents a new class of control and coordination schemes for mobile manipulators. A mobile manipulator, which consists of a mobile platform and a robot arm, provides a new concept and direction in robot applications due to its capability of infinite motion in a large workspace and dexterous manipulation. Mobile manipulators have various applications in rescue and hazardous fields, battlefields, homes, offices, hospitals, etc. However, motion planning and control of such a system are challenging.

This dissertation first presents the design of a non-time based tracking controller and its application to nonholonomic mobile platforms. The non-time based tracking control is a key step to ensure the robustness of the mobile manipulator to unexpected obstacles and coordination errors. Then, singularity analysis and motion control of robot arms are discussed. Robot arm singularity avoidance is considered as one of the criteria for coordinating the motion of the mobile platform and the robot arm. Using a hybrid system approach, a singularity-free robot motion controller has also been developed for controlling the robot arm. Further, a unified dynamic model for the integrated mobile platform and on-board manipulator arm is developed. This provides an efficient and convenient framework to design a mobile manipulator controller as well as plan its motions. Combining the non-time based planning and control method with the nonlinear feedback technique, motion and force controllers at the task level are designed. An online non-time based coordination scheme is developed to resolve the kinematic redundancy. Based on the decoupled and linearized model, this dissertation discusses an online non-time based coordination scheme. Force/position control along the same task direction is originally proposed in this dissertation. The proposed force/position control scheme is illustrated using a cart-pushing control application, which requires both force control and motion control along the same task direction. An integrated task planning and control scheme for the mobile manipulator interacting with such moving objects is proposed. The proposed approaches are implemented and tested on a mobile manipulator consisting of a Nomadic XR4000 and a Puma 560 robot arm.

Copyright by
JINDONG TAN
2002

To my mother, my grandparents, my brother, and to the memory of my father

ACKNOWLEDGEMENTS

Foremost, I gratefully acknowledge all the invaluable help from my advisor, Dr.
N ing Xi, without whom this dissertation would not have been possible. This disserta— tion comes from numerous discussions in his office, from his keen insight knowledge, from his guidance to a fruitful research area, and his perseverance and support. Not only the knowledge I learnt here, but also the research experience and friendship I gained here will be beneficial to the rest of my life. I would like to thank all my graduate committee members, Dr. Hassan Khalil, Dr. Fathi Salam, Dr. Robert Schlueter and Dr. Zhengfang Zhou. In addition to their invaluable time they spend on the committee meeting, their insightful comments and suggestions will enhance the technical soundness of this dissertation. I also would like to thank Dr. Wei Kang from Naval Postgraduate School for his valuable help. I am grateful to my friends and colleagues from Robotics and Automation Lab. The discussions with Weihua Sheng, Imad Elhaj j, Jizhong Xiao, Amit Goradia, Hen— ning Hummert, Yu Sun and Heping Chen substantially contributed to my work and broaden my knowledge. Finally, I would like to reiterate my gratitude to my parents, my grandparents, my brother and sister-in-law. Without their encouragement and continuous support for years, I would not have achieved what I have done today. vi TABLE OF CONTENTS 1 INTRODUCTION 1 1.1 Motion Tracking Control of Mobile Robots ............... 2 1.2 Singularity Analysis of Robot Arms ................... 4 1.3 Motion Planning and Control of Mobile Manipulators ......... 7 1.4 Mobile Manipulation Using Force Control ............... 12 1.5 Contributions and Outline of this Dissertation ............. 14 2 MOTION REFERENCE PROJECTION 17 2.1 Introduction to Event-Based Planning and Control .......... 17 2.2 Reference Projection Method ...................... 20 2.3 Stability .................................. 23 2.3.1 Example .............................. 24 2.3.2 Error dynamics and stability ................... 26 2.3.3 Feedback design .......................... 31 3 TRACKING CONTROL OF THE MOBILE PLATFORM 35 3.1 An Introduction to the Control of Mobile Platforms .......... 35 3.2 Path Tracking Control for Mobile Robots ................ 36 3.2.1 Mathematical Model of the Robot ................ 36 3.2.2 Tracking Curves with no U-turn ................. 38 3.2.3 Tracking Circles and Ellipses ................... 42 3.3 Experiments and Results ......................... 45 3.3.1 Tracking Sinusoid Waves ..................... 45 3.3.2 Tracking Circles .......................... 48 3.3.3 Tracking Ellipses ......................... 48 vii 4 SINGULARITY ANALYSIS AND SINGULARITY-FREE CONTROLLER OF THE ARM 53 4.1 Introduction ................................ 53 4.2 Robot Arm Singularities ......................... 56 4.2.1 Non-redundant Robot with Spherical Wrist ........... 57 4.2.2 Non-redundant Robot with Non-spherical Wrist ........ 61 4.2.3 Singularity Analysis for a Redundant Robot .......... 62 4.2.4 Workspace and Singular Conditions ............... 63 4.3 Hybrid Robot Motion Controller ..................... 64 4.3.1 Hybrid System Model ...................... 64 4.3.2 Design Procedure of a Hybrid Robot Motion Controller . . . . 65 4.3.3 Hybrid Motion Controller for a PUMA-like robot manipulator 68 4.4 Stability Analysis of the Hybrid Motion Controller ........... 78 4.5 Experimental Implementation and Testing ............... 83 5 CONTROL OF MOBILE MANIPULATORS 89 5.1 Introduction ................................ 89 5.2 Unified System Model .......................... 
91 5.2.1 Kinematic model ......................... 91 5.2.2 Dynamics Model ......................... 93 5.3 Motion Control and Redundancy Resolution .............. 95 5.3.1 Tracking Controller ........................ 95 5.3.2 Singularity Analysis and Redundancy Resolution ....... 96 5.4 Experimental Implementation and Results ............... 100 5.4.1 Hardware and Software Implementation ............ 100 5.4.2 Path Tracking and Tele-operation ................ 101 5.4.3 Path Tracking with Unexpected Obstacles ........... 102 viii 6 INTEGRATED SENSING AND CONTROL OF MOBILE MANIPULATOR108 6.1 Introduction and Force Control Schemes Revisit ............ 108 6.2 Output Force/ Position Control of Redundant Robots ......... 110 6.3 Integrated Task Planning and Control ................. 113 6.3.1 Motion Planning of the Cart ................... 114 6.3.2 Force Planning of the Cart .................... 116 6.3.3 Integration of Sensing, Task Planning and Control ....... 119 6.4 Experimental Implementation and Results ............... 121 6.4.1 Pushing Along a Straight Line .................. 122 6.4.2 Turning the Cart at a Corner .................. 123 6.4.3 Pushing Along a Sine Wave ................... 124 7 CONCLUSIONS 126 ix LIST OF TABLES 1.1 1.2 1.3 2.1 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 LIST OF FIGURES A mobile manipulator .......................... Path tracking of mobile manipulators .................. The mobile manipulator and nonholonomic cart system ........ The comparison .............................. The configuration of a mobile robot. .................. Tracking a sinusoid curve with initial error ................ Time based control vs Non-time based control .............. Tracking a circle without initial error .................. Tracking a circle without initial error .................. Tracking an ellipse without initial error ................. 'Itacking an ellipse with initial error ................... Tracking an ellipse with present of unexpected obstacle ........ Singular configurations of Puma 560 .................. Illustrative plot of the robot workspace and subspaces ........ Hybrid system controller framework ................... A typical switching sequence of the hybrid motion controller ..... Experimental Setup ............................ At boundary singular configuration, with DLS method only. Dash line: Desired; Solid line: Actual ........................ At boundary singular configuration, with hybrid motion controller. Dash line: Desired; Solid line: Actual .................. At interior singular configuration, with DLS only. Dash line: Desired; Solid line: Actual ............................. At interior singular configuration, with hybrid motion controller. Dash line: Desired; Solid line: Actual ..................... xi 47 49 49 50 50 51 75 81 83 84 84 85 85 4.10 With both interior and wrist singular configurations. Dash line: De— sired; Solid line: Actual .......................... 86 4.11 With wrist singularity. Dash line: Desired; Solid line: Actual ..... 86 5.1 Mobile manipulators ........................... 90 5.2 Event-based coordination between the arm and the mobile platform . 98 5.3 Hardware structure of the mobile manipulator ............. 101 5.4 Software structure of the mobile manipulator .............. 104 5.5 Multiple tasks path tracking ....................... 105 5.6 Multiple tasks path tracking ....................... 105 5.7 Tele—operation ............................... 
106
5.8 Tele-operation .......... 106
5.9 Teleoperation with unexpected obstacles .......... 107
6.1 Hybrid Force/Position Control .......... 109
6.2 Impedance Control .......... 110
6.3 Hybrid Force/Position Control for Redundant Robots .......... 111
6.4 Integrated Task Planning and Control for the Mobile Manipulation of a Nonholonomic Cart .......... 114
6.5 A mobile manipulator and a nonholonomic cart .......... 115
6.6 Configuration of the cart in the moving frame .......... 120
6.7 Laser range data .......... 121
6.8 Pushing the cart along a straight line .......... 122
6.9 Pushing with unexpected obstacle .......... 123
6.10 Turning the cart at a corner .......... 124
6.11 Pushing the cart along a sine wave .......... 125

CHAPTER 1
INTRODUCTION

A mobile manipulator, which generally consists of a robot arm and a mobile platform, provides a new concept and direction in robot applications due to its capability of a large workspace and dexterous manipulation. The robot arm is mounted on a mobile platform, as shown in Figure 1.1. A mobile manipulator combines the mobility of the platform and the dexterous manipulation capability of the robot arm. The manipulation capability on a mobile platform is called mobile manipulation [37]. The mobile manipulation capability is important for autonomous mobile robot applications that require both a large workspace and dexterous manipulation capability, such as reconnaissance and surveillance in rescue and hazardous fields, battlefields, homes, offices, hospitals, etc. The mobility of the platform substantially increases the performance capabilities of the system. For example, the platform increases the size of the manipulator workspace and enables the manipulator to position the end-effector for executing the task efficiently, while a fixed-base manipulator arm is limited by the size of its workspace. From a different point of view, the mobile manipulator has a high degree of kinematic redundancy. Because of the kinematic redundancy, the functionality of the mobile manipulator is increased to a greater extent. The redundant degrees of freedom can be used to accomplish secondary tasks. For example, joint torques can be optimized, singular configurations of the manipulator can be avoided, and hybrid force/position control can be achieved.

The objective of this dissertation is to investigate the modeling, redundancy resolution and control, and integrated task planning and control of mobile manipulators. The unique force control properties of the mobile manipulator interacting with moving objects are also investigated. Among others, the following issues of mobile manipulators will be discussed:

- Motion tracking control of mobile robots
- Singularity analysis of robot arms
- Singularity-free tracking control of robot arms
- Unified model of mobile manipulators and online redundancy resolution
- Decoupled force/position control along the same task direction
- Integrated task planning, sensing, and control of mobile manipulators

Figure 1.1: A mobile manipulator

A quick survey of the ongoing research on related topics is given below.

1.1 Motion Tracking Control of Mobile Robots

Most of the tasks of a mobile robot, which is generally equipped with a computer and sensors, are to move from one configuration to another or to track a certain path in its workspace.
The research on mobile robots is generally focused on motion planning, localization, navigation, sensor fusion, tracking controller design. In this dissertation, a non-time based approach for the path tracking controller design of mobile robots will be discussed. The non-time based robot controller enables the robot to handle unexpected events while tracking certain path. The non-time based approach can also be used for the coordination of the mobile platform and the robot arm. In the control of autonomous mobile robots, unexpected events may change the speed and path of a vehicle during a mission (such as traffic light, unexpected obsta- cles, and system malfunction). When the unexpected event is over, the robot should recover its normal performance smoothly. A time-dependent trajectory controller drives a vehicle to its desired place at any given time. It speeds up the vehicle after a temporary slow down to catch up with the schedule. However, in many applica- tions the important task is to track the desired path with the desired speed, keeping up with the schedule is not a first priority or not desired. A common approach for this problem is to use a high-level rule-based controller to decide the speed and the reset time. However, the non-time based controller proposed in this dissertation is a different approach. The proposed controller is independent of time. For the purpose of path tracking (with a desired speed depending on the location of the path), the controller recovers the normal path and speed from unexpected event without reset- ting time or speed. Furthermore, the algorithm introduces a reference projection, a function of state variables that replaces the time variable. When multiple vehicles are involved in the same mission (such as formation con- trol of multiple agents), reference projection is a tool that can be used for cooperated motion control of multiple vehicles. This idea of using reference projection for coop— erated control was used in the formation control of multi—satellite systems ([71] and [34]). For the mobile manipulator, instead of using time as the reference, the motion of the mobile robot is referred to the trajectory of the end effector of the robot arm. The motion reference will be developed to coordinate the motion of the platform and the robot arm according to certain criteria such that the robot arm will always working in a dexterous workspace, as shown in Figure 1.1. Developing non-time based controller has attracted the attention of researchers from different fields. In [81], a non-time based controller called event-based design was introduced. It has successful applications in a variety of robot control problems ([81], [80], [76], [83] and [78]). In addition to robot control, [32] studies the tracking of a trimming trajectory for unmanned air vehicles using non—time based control. In [51], non-time based control of autonomous underwater vehicles is discussed. In [34], a time varying feedback is converted to a non-time based controller to achieve coordination among multiple satellites flying in formation. Although different terminologies were used in different applications, the author found that finding a path tracking controller independent of time is a common interest among a variety of application areas (see additional publications [22], [57], [59], [28], [25], [63], [33] and the references therein). The contribution of our work is to introduce a reference projection method in the design of non-time based controller. 
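The mechanism behind the non-time based approach described above is that the reference parameter is computed from the measured output rather than from the clock, so it stops advancing whenever the robot is blocked. The sketch below illustrates only that mechanism; it is a minimal illustration, not the controllers developed in later chapters, and the sample path, sampling resolution, and function names are assumptions chosen for the example.

```python
import numpy as np

def motion_reference(p_measured, path_points):
    """Project the measured position onto the desired path and return the index
    of the closest path sample. This plays the role of the motion reference s:
    it is a function of the output, not of time."""
    d = np.linalg.norm(path_points - p_measured, axis=1)
    return int(np.argmin(d))

def desired_state(s, path_points):
    """The plan is a function of s; it only takes a 'real' value once a
    measurement fixes s."""
    return path_points[min(s, len(path_points) - 1)]

# Illustration: if the robot is blocked and p_measured stops changing, s stops
# advancing, the desired state stays put, and the tracking error does not grow,
# unlike a clock-driven reference x_d(t).
u = np.linspace(0.0, 10.0, 101)
path = np.stack([u, np.sin(u)], axis=1)          # assumed desired path
p = np.array([2.0, 0.9])                         # measured position while stopped
s = motion_reference(p, path)
print(s, desired_state(s, path))
```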
1.2 Singularity Analysis of Robot Arms A robot task is usually described in its task space. The direct implementation of a task level controller can provide significant application efficiency and flexibility to a robotic operation[68]. Implementation of task level controller becomes even more important when robots are working in coordination with humans[79]. The human intuition on a task is always represented in the task space. The existence of kinematic singularities is a major problem in the application of task level controller. While approaching a singular configuration, the task level controller may generate high joint torques, which result in instability and large errors in the task space. The task level controller is not only invalid at the singular configurations, but also in certain neighborhood of the singular configurations. Therefore, the singularity of robot arms is analyzed and a singularity-free path tracking controller is proposed. Different from a nonredundant robot arm, a specific task for mobile manipulators can be performed by moving only the mobile platform or the manipulator, or moving the platform and the manipulator simultaneously, as depicted in Figure 1.2. It is ideal to coordinate the motion of the platform and the arm in such a way that tasks can be performed by moving them simultaneously. This involves the kinematic redundancy resolution of mobile manipulators. The objective to resolve the redundancy is to position the mobile base such that the robot arm will always be in a dexterous configuration and avoid singular configuration. The singularity analysis of robot arms is then used in the coordination of the mobile platform and the robot arm. A brief review of the research around this topic is given below. Va A 7 Figure 1.2: Path tracking of mobile manipulators The research in robot arm singularities has attracted researchers’ attention. The Singularity-Robust Inverse(SRI) [15, 17, 41, 47] method was developed to provide an approximate solution to the inverse kinematics problem around singular configura- tions. Since the Jacobian matrix becomes ill-conditioned around the singular config- urations, the inverse (pseudoinverse) of the Jacobian matrix results in unreasonable torques in task level controller. Instead of using exact motion control resulting in large torques, the SRI uses Damped Least-Squares(DLS) to provide approximate motion control close to the desired Cartesian trajectory. Chiaverini, Siciliano and Egeland [17] refined the basic DLS method to improve tracking errors from the desired path by varying the damping factors. Caccavale, Chiaverini and Siciliano elaborated the DLS method by considering the velocity and acceleration variables [11]. These methods reduce the torque applied to individual joints and achieve approximate motion. By allowing an error in the motion, the SRI method makes the robot pass close to the singular point but not go through it. Actually, it can be shown [41, 50] that the sys- tem is unstable at the singular points, which means either the singular points can not be actually reached or the robots can not start from the singular points. Obviously, this creates difficulties for many applications. Kiréanshi et al. [41] theoretically proved that at the singular points, the resolved acceleration control of robots has unstable saddles. The instability of this method for redundant mechanisms is discussed in [50]. 
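The damped least-squares rule cited above admits a compact statement: instead of the exact inverse, the joint velocity is computed as qdot = J'(JJ' + lambda^2 I)^{-1} xdot, which stays bounded near singular configurations at the cost of a small tracking error. The sketch below shows this standard update on an illustrative two-link planar Jacobian; the link lengths, damping factor, and near-singular pose are assumptions chosen for the example, not values from the cited works.

```python
import numpy as np

def dls_joint_velocity(J, xdot, damping=0.05):
    """Damped least-squares (singularity-robust) inverse kinematics step:
    qdot = J^T (J J^T + damping^2 I)^{-1} xdot."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + (damping ** 2) * np.eye(m), xdot)

# Example: a 2-link planar arm close to the outstretched (singular) pose.
l1 = l2 = 1.0
q = np.array([0.0, 0.05])                  # joint angles [rad], nearly singular
J = np.array([[-l1 * np.sin(q[0]) - l2 * np.sin(q[0] + q[1]), -l2 * np.sin(q[0] + q[1])],
              [ l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),  l2 * np.cos(q[0] + q[1])]])
xdot = np.array([0.1, 0.0])                # desired end-effector velocity
print(dls_joint_velocity(J, xdot))         # bounded even though J is ill-conditioned
```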
The path tracking approach [38, 39, 48] augments the joint space by adding virtual joints to the manipulator and allowing self motion. Based on the predictor-corrector method of path following, this approach provides a satisfactory solution to the path tracking problem at singular configurations. Timing is not considered at the time of planning and it is reparameterized in solving the problem. However, when a timing is imposed on the path, it forces the manipulator to slow down in the neighborhood of singular configurations and to stop at the singular configuration. The normal form approach in [46] provides a solution of inverse kinematics in the entire joint space. With appropriately constructed local diffeomorphic coordi- nate changes around the singularity, the solution of inverse kinematics can be found and then transformed back to the original coordinates. The normal form method in- volves heavy computation. Hence is extremely difficult to implement it in real-time. Duleba[23] proposed the the channel algorithm of transversal passing through singu- larities for non-redundant robot manipulators. The basic idea of channel algorithm is to smoothly change a component of the joint space as the neighborhood of singu- larity is entered. Thus a continuous joint coordinate can be obtained while tracking a desired path in a task space. Another method has also been developed to incorporate the dynamic poles of the system by Sampei and Furuata [56]. They proposed a time re-scale transformation method for designing a robot controller, which achieves slow poles in the vicinity of singularity and fast poles in the regular area. It can be shown that this method results in similar error dynamics to the SR1 method. For the application of switching control to the robot manipulators, Bishop and Spong [7] proposed a switching control method for redundancy resolution of redundant robots. The damped velocity method was used to maximize the manipulability. Torques and joint velocities are limited utilizing the redundancy of the robot. 1.3 Motion Planning and Control of Mobile Manipulators Mobile manipulators provide numerous advantages compared with the fixed base manipulator, path planning and control of such a system are challenging. First, there is a redundancy resolution problem. Secondly, the dynamic model of a mobile manipulator is complex, and there is a high degree of coupling between the platform and the arm. The dynamics of the mobile platform and the manipulator arm are generally different. The former is heavy and has slow dynamics. The later is relatively light and has fast dynamics, 216., a wider bandwidth. Thirdly, many applications of the mobile manipulator require the robot to interact with its environment dynamically while providing motion, such as pushing, pulling, cutting, excavating. Implementation of these tasks demands the mobile manipulator to provide both output force control to overcome the resistance and friction of the objects, and motion control to track certain trajectory. Force control schemes for the mobile manipulator are therefore to be considered. The myriad use of the mobile manipulator and the difficulties in its control and motion planning bring a lot of research issues to light. Motion planning, modeling and control, coordination between the mobile platform and the manipulator arm, force control are tOpics of this dissertation. Previous work related to these topics in the literature is reviewed. A: Motion Planning. 
A solution to kinematic redundancy is to coordi- nate the mobile platform and robot manipulator to perform secondary tasks. Seraji [62] proposed an on—line coordinated motion control method by treating the mobile manipulator as a redundant mechanical systems. The mobility and manipulation are treated in a common framework. Secondary tasks for redundancy resolution are selected based on task specifications. Egerstedt and Hu [24] proposed a coordinated virtual vehicle solution to the tra- jectory following problem for mobile manipulators. Given a desired trajectory for the end-effector, a trajectory for the base is planned in such a manner that the desired end effector position is within the specified workspace of the arm. The mobile plat- form and the manipulator then follow their respective reference trajectories. The two subsystems are controlled separately and coordinated. The kinematic redundancy can also be resolved using criteria which are used to optimize application specific requirements, such as minimum torque output, minimum distance, etc. Chen and Zalzala [13] discussed trajectory generation while taking the dynamics of the system into consideration. The optimal trajectory generation problem with nonholonomic constraints and obstacle avoidance were formulated as a nonlinear multi-criteria Optimization problem. Given the trajectory of the end- effector, near optimal trajectories for the mobile platform and manipulator joints are obtained by an efficient genetic algorithm. The algorithm considers the optimization of torque and obstacle avoidance. When a mobile manipulator performs multiple tasks in a sequence, the final config- uration of each task becomes the initial configuration of the subsequent task. The con- figuration between two tasks is called commutation configuration [54]. In the motion planning for a sequence of tasks, the motion trajectories together with commutation configuration need to be considered simultaneously. If the commutation configura- tions are determined without considering the current task in a global perspective, a subsequent task may be rendered difficult. Most of the redundancy resolution ap- proaches for executing a sequence of tasks are criteria based. For instance, Pin and Culioli [54] used multi-criteria Optimization schemes for motion planning. The com- mutation configurations become boundary conditions for each task in the sequence. A variety of optimization criteria such as obstacle avoidance, reach and maneuverability are developed and the algorithms for optimization problems are discussed. However, the algorithm is off-line and nonholonomic constraints were not considered. Zhao, Ansari and Hon [85] formulated multiple task path planning as a nonlinear and non- polynomial optimization problem. The optimal sequence of platform positions and manipulator configurations are obtained by using genetic algorithms. Lee and Cho [42] pr0posed a different motion planning method for mobile manipulators to execute a sequence of tasks. They considered the motion trajectories and the commutation configurations simultaneously in a global cost function instead of treating them differ- ently. A global optimization problem using the necessary conditions, which minimize the objective function, can be applied to both holonomic and nonholonomic systems. From the above review, it can be seen that motion planning of the platform and the manipulator arm is crucial for the coordination of the mobile manipulator. 
The existing approaches to coordinate the platform and the robot arm, or in the other world, to resolve the kinematic redundancy, are either based on additional tasks or based on the optimization of criteria. For the additional tasks based approach, the end effector tasks may require an extension of the end effector out of its workspace. Since the motion of the platform and the manipulator is treated equally, mobility and manipulation ability are not optimized. For the criteria optimization based approach, a near optimal path for both the platform and the manipulator arm can be obtained. However, it is generally computationally intensive and not eligible for realtime control. Since most of the optimization algorithms for redundancy resolution is computation intensive and have to be used off-line, this dissertation presents an online coordination scheme based on the event-based approach. It optimizes the manipulation capability of the robot arm and can avoid unexpected obstacles. The kinematic redundancy resolution will be investigated based on the analysis of the robot arm singularities in this dissertation. B: Modeling and Control. To achieve effective and practical control, vari- ous dynamic models and control schemes are proposed in the literature. The control strategies are generally classified into two categories. One is decentralized control of the mobile platform and the manipulator arm. The controller for each part is constructed separately and then the interaction between the platform and the ma- nipulator is considered. The other is a unified approach for the control of both the mobile platform and the robot arm. In this case, the mobile manipulator is treated as a redundant manipulator. By considering the mobile robot and the manipulator as two subsystems, Liu and Lewis [43] developed a decentralized robust controller for a mobile manipulator. The reaction forces are considered in the dynamic models of the two subsystems. However, only the translation of the mobile robot is considered in the paper. Chung and Velinsky [18, 19] also derived the dynamic model of nonholonomic mobile robots and manipulators separately. The kinematic redundancy is resolved by decomposing the mobile manipulator into two subsystems, the mobile platform and the manipulator. A robust interaction control algorithm, in which suitable controllers are designed for the two subsystems, is developed. The algorithm is employed to minimize the adverse effect of the wheel slip on the tracking performance. Yamamoto and Yun [84] studied a two—linked planar mobile manipulator subject to nonholonomic constraints. The dynamics model was derived and reformulated according to the nonholonomic constraints. Output feedback linearization is used to simplify the nonlinear model and derive the controller. The mobile platform and 10 the manipulator are coordinated based on the concept of a preferred operating re- gion. However, if the manipulator was commanded to follow a spatial trajectory in the task space, it is difiicult to keep the arm in a preferred region during a task execution. Colbaugh, Trabatti and Glass [20] considered the dynamics model for redundant nonholonomic mechanical systems and reformulated it into two parts, a reduced dynamics model and a kinematic relationship which considers the nonholo- nomic constraints. Adaptive control schemes were developed based on the two parts. 
The mobile platform and the arm are coordinated such that the distance between the end-effector and the mobile base does not exceed a pre-specified value. Two problems are brought to light for the control structure of the mobile ma- nipulator system, one is to ensure the system output regardless of the output of the subsystems. It is therefore desirable to model the mobile manipulator such that mo- tion planning and force/ position control can be considered in a unified task space coordinate frame. The other is the system stability in the case of unexpected ob- stacles or desynchronization of the two subsystems. The non-time based controller design for the coordination is investigated for this purpose. C: Force Control Schemes. Robot manipulator force control schemes such as external force control, hybrid position/ force control, impedance control have been applied to the force control of the mobile manipulators. Umeda et. al. [73] discussed the hybrid position / force control of mobile manipulators. In order to obtain desired hybrid control with respect to the end-effector’s tasks, the concept of performance indices based on an equivalent mass matrix is introduced. A weight matrix is also proposed to obtain adequate torque distribution. Antonelli, Sarkar and Chiaverini [3] presented a force control scheme for underwater vehicle manipulator systems. The environment is modeled as a frictionless and elastically complaint plane. External control structure is used and kinematic redundancy is considered. Perrier, Cellier and Dauchez [53] compared the robustness of hybrid postion/ force control structure 11 and external force control structure with respect to external disturbances for a mobile manipulator. The effect of mobile platform velocities on the force control structures were discussed. Due to the kinematic redundancy, the force control and position control of mobile manipulators can be decoupled along the same task direction. This dissertatoin will discuss the decoupled force/position control based on a linearized and decoupled model. 1.4 Mobile Manipulation Using Force Control The integrated task planning and control of a mobile manipulator performing the task of a moving object has relatively received less attention. Based on the decoupled force/psoition control of mobile manipulator, this dissertation presents an integrated task planning and control approach for the task of the mobile manipulator pushing a nonholonomic cart. As shown in figure 1.3, the mobile manipulator and nonholo- Figure 1.3: The mobile manipulator and nonholonomic cart system nomic cart system is similar to a tracker-trailer system. In a tracker-trailer system, there is generally a steering mobile robot and one or more passive trailer(s) connected 12 together by rigid joints. The tracking control and open loop motion planning of such a nonholonomic system has been discussed in the literature. The trailer system was controlled to track certain trajectory using a linear controller based on the linearized model[22]. Exact linearization and nonlinear control of the track-trailer system has been discussed in [58] and experiments on backward steering the trailer have been reported in [40]. Instead of pulling the trailer, the tracker pushes the trailer to track certain trajectories in the backward steering problem. Fire truck steering is another example for pushing a nonholonomic system. A chained form has been used in the open loop motion planning of such a system [10]. 
Based on the chained form, motion planning for steering a nonholonomic system has been investigated in [45, 70]. Time varying nonholonomic control strategies for the chained form can stabilize the tracker and trailers system to certain configurations[60]. The planning and control of the mobile manipulator and nonholonomic cart system is different from a tracker-trailer system. They are not linked by a rigid joint, but the robot arm. Therefore, the mobile manipulator has more flexibility and control while maneuvering the nonholonomic cart. In a tracker-trailer system, control and motion planning were considered based on the kinematic model, and the trailer is steered by the motion of the tracker. In a mobile manipulator and nonholonomic cart system, the mobile manipulator can manipulate the cart by a regulated output force. The nonholonomic cart can be controlled at a dynamic level by the output force of the mobile manipulator. However, this requires full integration of the motion planning and force planning of the cart, and the planning and control of mobile manipulators. Though ample literature discusses force control in the context of manipulator arms, little of it considers the force control of mobile manipulator and none of it discusses the out force and position control along the same task direction. This dissertation considers the output force control and position control simultaneously making use of the kinematic redundancy. 13 1.5 Contributions and Outline of this Dissertation This goal of this dissertation is to investigate new control and coordination algorithms for a mobile manipulator. The contributions to the control and coordination strategy of mobile manipulators can be grouped into four parts. The first part discusses a motion reference projection method for non-time based tracking controller design and its application to nonholonomic mobile platforms. The proposed design method for non-time based tracking controller converts a controller designed using traditional time-based approach to a new non-time based controller. This approach significantly simplifies the design procedure to achieve a non-time based controller. This method is applied to tracking controllers of mobile platforms. The experimental results have clearly demonstrated the advantage of the non-time based tracking controller in the presence of unexpected obstacles. Applying this method to the mobile manipulators, a common motion reference based on the measurable system output and desired path can coordinate the motion of the platform and the manipulator arm. The advantage of the non-time based tracking controller while coordinating with robot is also shown. More importantly, the results provides an efficient and systematic approach to design non-time based tracking controller for general nonlinear dynamical systems. This part is discussed in Chapter 2, Chapter 3 and part of Chapter 5 The second part presents singularity analysis of manipulator arms and a hybrid system design of singularity-free motion controller for manipulator. Since the mobile platform creates redundancy to mobile manipulator system, the redundancy enables the manipulator arm to keep its optimal configurations and avoid singular config- urations. Chapter 4 analyzes the singular configuration of robot arms, which will provides a mechanisms for the coordination of the mobile platform and the robot arm. A singularity-free motion controller is also designed for the manipulator using a hybrid system approach. 
The third part formulates the unified control of mobile manipulators and discusses the redundancy resolution. A unified dynamic model for the mobile manipulator is derived and nonlinear feedback is applied to linearize and decouple this model. The advantage of this model lies in that the kinematic redundancy and the dynamic properties of the platform and the robot arm are both considered in the unified model. Based on this model, mobile manipulator tasks such as path tracking control and force/position control can be easily designed. Based on the analysis of the robot arm, an online event-based coordination scheme is proposed for the cooperation of the platform and the robot arm. This online approach is suitable for point-to-point tasks, multiple-segment tasks and teleoperation. The mobile manipulator system remains stable even when unexpected obstacles appear, and the manipulation capability can adapt to the best possible value during the operation. This part is discussed in Chapter 5.

The fourth part discusses force/position control while the mobile manipulator interacts with moving objects. Based on the decoupled and linearized model, force/position control along the same task direction is discussed utilizing the kinematic redundancy of mobile manipulators. For tasks that require both force and position control along the same task direction, such as pushing a nonholonomic cart, an integrated task planning and control scheme for mobile manipulators is proposed. The approach for manipulating the nonholonomic cart integrates the planning and control of the mobile manipulator, the motion planning and force planning of the nonholonomic cart, and the estimation of the cart configuration based on the sensors of the mobile manipulator. The integrated task planning and control enables the robot to interact with moving objects and complete complicated tasks by means of the output force. The approaches discussed in this dissertation can be used for the motion planning and control of mobile manipulators, which can find applications in manufacturing, hospitals, homes and offices. Experimental results have demonstrated the advantage of the integrated task planning and control approach. This part of the work is presented in Chapter 6. Chapter 7 concludes this dissertation.

CHAPTER 2
MOTION REFERENCE PROJECTION

2.1 Introduction to Event-Based Planning and Control

The basic idea of event-based planning and control [77] is to introduce the concept of a motion reference, a parameter of the desired path, as shown in Figure 2.1(b). This parameter is directly relevant to the sensing measurement and the task. Instead of time, the control input is parameterized by the action reference parameter. Since the action reference is a function of the real-time measurement, the values of the desired vehicle states are functions of the measured data. As a result, for any given instant of time, the desired input is a function of the system output. This creates a mechanism to adjust or modify the plan based on the measurements. Thus, the planning becomes a closed-loop real-time process. The plan (desired path) can be perceived as an "abstract" entity, like a function. It only takes a "real" value when measurements are made. The non-time based planning and control scheme and the traditional time-based planning and control scheme are compared in Figure 2.1. In this dissertation, event-based and non-time based are used interchangeably.

[Figure 2.1: The comparison. (a) Traditional control: the planner drives the controller and robot as a function of time. (b) Non-time based control: a Motion Reference block computes s from the measured output and drives the planner.]

In the non-time based planning and control scheme, the function of the Motion Reference block in Figure 2.1(b) is to compute the action reference parameter s on-line, based on sensory measurements. The planner generates the desired value for the system according to the on-line computed action reference parameter s. In addition, the motion reference parameter is calculated near or at the same rate as the feedback control. In other words, the action plan is adjusted at a very high rate, which enables the planner to handle unexpected or uncertain discrete and continuous events. For example, the event-based planning and control scheme has been successfully applied to deal with unexpected obstacles in robot motion [81]. If the robot is blocked by obstacles, the action reference also stops evolving. As a result the desired input will be constant and the error will remain unchanged. In this case, if the conventional time-based motion plan were implemented, the system error would keep increasing as the planner generates the desired input according to the original fixed plan. The system would therefore become unstable. To solve this problem with the traditional method, one has to incorporate a higher-level monitoring function such as "move until touch" or some kind of compliant motion strategy, both requiring extra sensing (force/torque or touch/contact sensing, see [44] and [16]). Furthermore, once the obstacle is removed, the traditional methods require re-planning to complete the task. The non-time based planning and control in this case requires neither extra force/torque or contact sensing nor re-planning. The experimental results in [81] clearly demonstrate this feature.

For a path tracking task, the most natural choice is to pick the distance traveled along the given path as the motion reference s. After s is chosen, the motion along the given path can be described as [77]

    ds/dt = v,    dv/dt = a,

where v and a are the velocity and acceleration along the motion reference s. In order to obtain a non-time based trajectory plan, it is necessary to perform a change of variable by replacing the independent scalar variable t in the above equations by the independent scalar variable s, which is computed from the system output. Basically, event-based trajectory planning is to find the motion profile as a function of the given event-based motion reference s, v = v(s) and a = a(s), subject to the kinematic and dynamic constraints of the robot.

Different from [77], which focuses on applications to robotic arms, the framework here is based on general dynamical systems of moving vehicles, including robotic arms, ground vehicles, aircraft, spacecraft, and surface and underwater vehicles. Except for some ideas used in [77], the viewpoint adopted in this dissertation is different from existing non-time based approaches. Instead of developing design algorithms based on the specific features of the desired path ([32]), the controllers are first designed in the time domain using any existing tracking controller design method. Then, the time variable in the feedback is substituted by a special transformation called a motion reference projection. The resulting feedback is non-time based and drives the system asymptotically toward the desired path.

This approach has the following four advantages. First, the model of the system is a general nonlinear dynamical control system. The method is applicable to a wide range of tracking control problems of autonomous vehicles or robotic arms. It has the potential of performing cooperative control of multiple vehicles with different models moving in formation ([71] and [34]). Second, a transformation is provided to transform a time dependent controller into a non-time based controller. It bridges traditional state trajectory tracking with the non-time based design. The results enable us to design non-time based feedback using existing, well-known methods of feedback design in the time domain, such as LQR and H-infinity control, and feedback linearization. The third advantage is the flexibility in choosing motion references. The arc length of the desired path is often used as the motion reference. However, there are many curves for which the formula for computing the arc length is complicated. Parameters other than arc length can be used as motion references in the design to simplify the controller. In the proposed method, the motion reference can be any parameter. The fourth advantage of our method is that the transformation from the state space to the action reference need not be the orthogonal projection. For many curves, orthogonal projection requires an on-line numerical minimization. Our method allows the designer to use any mapping satisfying a projection condition. A good choice of the reference projection can dramatically reduce the on-line computational load.

In the following, the concept of the reference projection method is first introduced. The stability of the non-time based controller is then studied, and a general method of non-time based tracking control feedback design is developed.

2.2 Reference Projection Method

Mechanical systems and electrical systems are modeled by differential equations in which the free variable is time t. A desired trajectory is usually modeled as a function of time, generally denoted by x_d(t). The variable x represents the state of the system. Controllers are designed so that the trajectory of the system x(t) asymptotically approaches the desired trajectory x_d(t). This is a typical tracking control problem, and there exist many control design methods in the tracking control literature. Since the design is based on the model driven by t, the controller is likely to be time dependent. However, in many applications of autonomous vehicles, a feedback controller independent of time is preferred. Furthermore, the desired path is not necessarily defined as a function of time. It is defined by parametric equations. The parameter is called the motion reference or action reference and is denoted by s. For instance, a desired path can be defined by a parametric equation x_d(s), where s is the arc length or the projection of x_d onto a coordinate axis. In this section, we prove a result which transforms a time dependent feedback controller into a time independent controller that asymptotically tracks a desired path driven by the action reference.

A system is defined by the equation

    xdot = f(x, u),  x in R^n, u in R^m,    y = h(x),  y in R^k,        (2.1)

where u is the control input and x is the state of the system. The desired path usually does not involve all the state variables explicitly. For example, to make a mobile robot follow a curve, the desired path is defined by the position of the system. State variables such as the orientation of the vehicle and the angle of the front wheels do not appear explicitly in the desired path. Suppose that y = h(x) represents the output of the system. The desired path is given by a parametric equation y = y_d(s), where s is the motion reference.

The first step in the controller design is to define a corresponding path in the time domain. If s is not time, then a strictly increasing function s = v(t) is assigned. How to pick the function v(t) depends on the desired speed. For example, if the problem requires that the system be operated so that s increases at a constant speed v_0, then v(t) = v_0 t.

Notice that the desired trajectory is defined by two functions, y_d(s) and s = v(t), not by a single desired trajectory y_d(t). We do this on purpose, because moving a vehicle along a curve and how fast the vehicle moves are two relevant but different issues in the planning. They should not be mixed together. In the implementation of the tracking control, v(.) can be proportional to time t, such as v(t) = v_0 t. It can also be related to the presence of obstacles and the task specifications. Denote by d the distance from the robot to the obstacle in the moving direction; v(.) can then be designed as v(d, t) so that obstacle avoidance is taken into account.

The second step is to find a feedback, u(x, t), to track the path y_d(v(t)). There exist many well-known design algorithms in the control literature. The feedback satisfies

    lim_{t -> infinity} ( h(x(t)) - y_d(v(t)) ) = 0.

Furthermore, if the initial condition is on the desired path, then the trajectory of the controlled system follows the path. More specifically, there exists an initial condition x_0 of the system such that h(x(t)) = y_d(v(t)). Denote this path by x_d(s) or x_d(v(t)).

The third step is to find a suitable transformation, s = gamma(x), from the state space to the reference s. The transformation satisfies

    gamma(x_d(s)) = s.        (2.2)

For example, given any state x_0, let x_d(s_0) be the orthogonal projection from x_0 to x_d(s). If gamma(x_0) = s_0, then it satisfies (2.2). A transformation satisfying (2.2) is called a reference projection. However, orthogonal projection is not the only way to define gamma. In Section 4.1, the projection onto the x-axis is adopted as gamma, which is a simpler formula than orthogonal projection for the specific application studied in that section.

The last step is to construct the non-time based feedback. A virtual time is defined as

    T(x) = v^{-1}(gamma(x)).        (2.3)

It is worth noting that the virtual time is given as a function of the system output. Since the system output is ultimately a function of time, the actual system operation is reflected in the virtual time T(x). Substituting the virtual time T(x) for the real time t, the non-time based feedback is

    u(x) = u(x, T(x)),        (2.4)

where u(x, t) is the feedback found in the second step and gamma(x) is a reference projection. The closed-loop system is

    xdot = f(x, u(x, T)).        (2.5)

2.3 Stability

In the above design process, the feedback u(x, t) drives the system asymptotically toward x_d(t). The non-time based design is obtained by substituting the time variable t in u(x, t) by the virtual time v^{-1}(gamma(x)). Because of (2.2), the trajectory x(t) of (2.5) equals x_d(t) if x(0) is on the desired trajectory x_d. Based on our analysis and simulations, the non-time based feedback is likely to drive the system asymptotically toward x_d(s). However, the substitution t = T(x) may change the stability of the system in some cases, as shown in the following example. In this section, sufficient conditions are given to guarantee the asymptotic stability of the tracking error under non-time based feedback.

2.3.1 Example

Two feedbacks are given in this example for the same tracking problem. One feedback results in a stable non-time based controller, and the other results in an unstable one. The control system is linear,

    [xdot_1; xdot_2] = [-2 -2; 2 1][x_1; x_2] + [1; -1] u.        (2.6)

It is a controllable system. The desired path is the line x_2 = -1, i.e.

    x_d(s) = [s; -1].        (2.7)

Suppose that the velocity on the desired path is 1, so s = t. A time dependent feedback is

    u = 2t - 1.        (2.8)

It is easy to check that the error e = x(t) - x_d(t) satisfies

    edot = [-2 -2; 2 1] e.        (2.9)

It is asymptotically stable. The simplest reference projection for the desired path is the orthogonal projection gamma(x) = x_1. Therefore, the virtual time is T = x_1. The control system under non-time based control is

    [xdot_1; xdot_2] = [-2 -2; 2 1][x_1; x_2] + [1; -1](2x_1 - 1) = [-2x_2 - 1; x_2 + 1].        (2.10)

The second equation, xdot_2 = x_2 + 1, is unstable. So the system does not approach the desired path under the non-time based controller, although the non-time based feedback is converted from an exponentially stable feedback (2.8).

On the other hand, if the time-variant feedback is

    u = 2t - 1 + (3/2)(x_1 - t) + (3/2)(x_2 + 1),

it is easy to check that this time dependent feedback drives the system toward x_d(t). Furthermore, the corresponding non-time based feedback is

    u(x) = 2x_1 - 1 + (3/2)(x_2 + 1).

The system under the non-time based control is

    [xdot_1; xdot_2] = [-(1/2)(x_2 - 1); -(1/2)(x_2 + 1)].        (2.11)

The second equation guarantees that lim_{t -> infinity} x_2(t) = -1. So this non-time based control drives the system asymptotically toward x_d. The example implies that not all stable feedbacks u(x, t) can be converted into stable non-time based controllers. In the following, we derive conditions under which the non-time based controller is guaranteed to be stable.

2.3.2 Error dynamics and stability

Two error dynamics are derived in this section for the stability analysis. The first error dynamics is under the time-variant feedback u(x, t). The error of the system xdot = f(x, u(x, t)) is defined by

    e = x(t) - x_d(t).        (2.12)

It satisfies

    edot = f(x, u(x, t)) - f(x_d, u(x_d, t)) = f(e + x_d, u(e + x_d, t)) - f(x_d, u(x_d, t)).        (2.13)

The desired trajectory x_d(t) can be considered as a function of time t, so the error dynamics is a time-variant system. Define

    E(e, t) = f(e + x_d, u(e + x_d, t)) - f(x_d, u(x_d, t)).        (2.14)

The error dynamics under u(x, t) is

    edot = E(e, t).        (2.15)

Under the non-time based feedback u(x, T(x)), the error is defined by

    ebar = x(t) - x_d(T).        (2.16)

The system is no longer tracking the trajectory as a function of real time; the controller is supposed to drive the system toward the desired state at the virtual time. The error dynamics is changed by the derivative of T(x):

    ebardot = f(x, u(x, T)) - f(x_d(T), u(x_d(T), T)) (dT/dt),        (2.17)

which can be written as

    ebardot = E(ebar, T) - f(x_d(T), u(x_d(T), T)) (Tdot - 1),        (2.18)

where the virtual time evolves along the closed-loop system

    xdot = f(x, u(x, T(x))),    Tdot = (dT/dx) f(x, u(x, T(x))).        (2.19)

The following assumption is made.

A1. In a delta neighborhood of the desired path, f(x, u(x, t)) and u(x, t), together with their first derivatives, are bounded for all t > 0 and all x in a delta neighborhood of x_d.

The nonlinear function f(x, u(x, t)) represents the rate of change of the state variables. It is reasonable to require in A1 that the speed and acceleration of the state variables of a mechanical system be bounded near the desired path. A1 also assumes that u(x, t) and its derivative are bounded, which implies that the controller can react to changes of the state and time variables at a limited rate.

Theorem 2.3.1 Suppose (2.15) is uniformly exponentially stable in ||e|| < delta.
Suppose the system (2.19) has a solution for t > 0 and the solution satisfies

    lim_{t -> infinity} Tdot(x(t)) - 1 = 0

uniformly and exponentially if ||ebar(0)|| < delta. Then there is a delta_1 > 0 so that

    lim_{t -> infinity} ebar(t) = 0

provided ||ebar(0)|| < delta_1.

Proof. From the converse theorem of stability, there exists a Lyapunov function V(e, t) such that

    alpha_1 ||e||^2 <= V(e, t) <= alpha_2 ||e||^2,    t >= 0,
    ||dV/de|| <= alpha_3 ||e||,    t >= 0,
    dV/dt + (dV/de) E(e, t) <= -||e||^2        (2.20)

in a neighborhood ||e|| < delta. Under the non-time based controller, the error is replaced by ebar as in (2.16). The Lyapunov function for ebar is V(ebar, T(x(t))). Its derivative in the direction of (2.18)-(2.19) is

    Vdot(ebar, T) = (dV/dt) Tdot + (dV/debar) ( E(ebar, T) - f(x_d, u(x_d, T)) (Tdot - 1) )
                 = ( dV/dt + (dV/debar) E(ebar, T) ) + ( dV/dt - (dV/debar) f(x_d, u(x_d, T)) ) (Tdot - 1)
                 <= -||ebar||^2 + ( dV/dt - (dV/debar) f(x_d, u(x_d, T)) ) (Tdot - 1).        (2.21)

From A1, we know that E(e, t) is bounded in ||e|| < delta. So the third equation in (2.20) implies that dV/dt is bounded. From the second equation of (2.20), it is known that dV/debar is also bounded. Therefore,

    | dV/dt - (dV/debar) f(x_d, u(x_d, T)) | <= M_1        (2.22)

for some M_1 > 0, provided ||ebar|| < delta. By the assumption of the theorem, there exist M_2 > 0 and lambda > 0 such that

    | Tdot - 1 | <= M_2 e^{-lambda t},        (2.23)

provided ||ebar|| < delta. Equations (2.21) and (2.23) imply, with M = M_1 M_2,

    Vdot(ebar, T) <= -||ebar||^2 + M e^{-lambda t} <= -(1/alpha_2) V(ebar, T) + M e^{-lambda t},  for ||ebar|| < delta.        (2.24)

At any fixed time t_0, if ||ebar(t_0)|| < delta, then

    Vdot(ebar, T) <= -(1/alpha_2) V(ebar, T) + M e^{-lambda t_0} e^{-lambda (t - t_0)},  for t > t_0.        (2.25)

This implies

    d/dt ( e^{(t - t_0)/alpha_2} V(ebar, T) ) <= M e^{-lambda t_0} e^{(1/alpha_2 - lambda)(t - t_0)},        (2.26)

and integrating from t_0 to t gives

    e^{(t - t_0)/alpha_2} V(ebar, T) - V(ebar(t_0), T(t_0)) <= ( M e^{-lambda t_0} / (1/alpha_2 - lambda) ) ( e^{(1/alpha_2 - lambda)(t - t_0)} - 1 ),        (2.27)

so that

    V(ebar, T) <= e^{-(t - t_0)/alpha_2} V(ebar(t_0), T(t_0)) + ( M / |1/alpha_2 - lambda| ) e^{-lambda t_0} ( e^{-lambda (t - t_0)} + e^{-(t - t_0)/alpha_2} ).        (2.28)

Therefore, there exists t_0 > 0 so that

    ( M / |1/alpha_2 - lambda| ) e^{-lambda t_0} ( e^{-lambda (t - t_0)} + e^{-(t - t_0)/alpha_2} ) < (alpha_1 / 2) delta^2,        (2.29)

for t >= t_0 and ||ebar|| < delta. If ||ebar(t_0)||^2 <= (alpha_1 / (2 alpha_2)) delta^2, then (2.20) and (2.28) imply

    alpha_1 ||ebar||^2 <= V(ebar, T) <= e^{-(t - t_0)/alpha_2} V(ebar(t_0), T(t_0)) + (alpha_1 / 2) delta^2
                       <= alpha_2 ||ebar(t_0)||^2 + (alpha_1 / 2) delta^2 <= (alpha_1 / 2) delta^2 + (alpha_1 / 2) delta^2 = alpha_1 delta^2.        (2.30)

Therefore, ||ebar(t)|| <= delta for all t >= t_0, provided

    ||ebar(t_0)||^2 <= (alpha_1 / (2 alpha_2)) delta^2.        (2.31)

Thus, (2.28) holds for all t >= t_0 if the initial error satisfies (2.31). From (2.28), it is easy to derive that lim_{t -> infinity} V(ebar, T) = 0, provided (2.31) holds. Therefore, lim_{t -> infinity} ebar(t) = 0. Because the right-hand side of (2.18) is uniformly bounded in ||ebar|| < delta, there exists delta_1 > 0 so that ||ebar(0)|| < delta_1 implies (2.31). This proves that lim_{t -> infinity} ebar(t) = 0 if ||ebar(0)|| < delta_1.  []

2.3.3 Feedback design

Given a time-variant tracking feedback u(x, t) and a reference projection, one needs to check the performance of the following system with dummy output y* = Tdot(x) - 1:

    xdot = f(x, u(x, T(x))),    y* = (d/dt) T(x(t)) - 1.        (2.32)

If the dummy output y* exponentially converges to zero, the non-time based control drives the vehicle toward the desired path. In this section, a design method is introduced for the case in which T(x) has a relative degree. The feedback guarantees that y* is exponentially stable. In the following, we assume that the desired path satisfies

    y_d1 = s,    y_d2 = y_d2(s),    ...,    y_dk = y_dk(s).        (2.33)

Or equivalently, the first output variable y_d1 serves as the action reference. This is true for many applications, as shown in the next section. A natural reference projection is

    gamma(x) = y_1(x),        (2.34)

where y_1 is the first output of the original system (2.1). We also assume that, in a neighborhood of the desired path, the reference projection gamma(x) = y_1 has relative degree r, i.e. gamma(x) satisfies

    (d/du) L^i_{f(x,u)}(gamma(x)) = 0,  1 <= i <= r - 1,    (d/du) L^r_{f(x,u)}(gamma(x)) != 0,        (2.35)

where L_{f(x,u)} is the Lie derivative defined by L_{f(x,u)}(gamma(x)) = (dgamma/dx) f(x, u). The following design method follows the idea of feedback linearization (see, for instance, [30]). If the desired speed on the desired path is defined by s = v(t), then the virtual time is T = v^{-1}(gamma(x)). The feedback is defined based on the virtual time T(x). It is easy to check that T(x) also has relative degree. Define xi_i = L^{i-1}_{f(x,u)}(T(x)) for
The equation (2.35) implies a. = a .52 2 63 (2.36) 5} = b(a:) + alts->211 + (12(2)... + - - - + a...(a:)um, (11(1) a o. 32 The feedback for ul has the form u1(:r,t, 11.2, - -~ ,um) = _a1(:c) (b(.1:) + a1(:l:)u1+ a2(:r)u2 + - - - + am(:z:)um) +k1(€1—t)+ k2(§2 — 1) + (9353 + ' ° ' + krér. (2.37) The gain k1 k2 . .. kr satisfies that 3’ — krsr—l — 19,45"? — — k1 is Hurwitz, (238) s"1 — krsr‘2 — k,_lsr‘3 — - - - — k2 is also Hurwitz if r > 1. (2-39) In this system, the dummy output y“ = T — 1 is completely determined by (2.36)- (2.37), which has the following form £1=£2 (52 2 53 (2.40) G = k1(€1-t)+ k2(€2 - 1) + 16363 + - - ' + krér Because(2.38) is Hurwitz, the control guarantees that both {1 — t = T(z) — t and {2 - 1 = T - 1 exponentially approach zero. If t is replaced by T, the variable 51 — t is constant zero. The closed-loop system (2.40) becomes £1=§2 52:53 {r = k2(€2 - 1) + 16363 + ° - - + krér 33 Because (2.39) is Hurwitz, {2 — 1 = T — 1 exponentially approach zero under u1 = u1(:c, T, U2, - -- ,um). Furthermore, the tracking error yl — ydl(T) satisfies yi — yd1(T) 2 7(17) — v—I(T) = v‘1(T) — v‘1(T) = 0 Therefore, the tracking of y1 is guaranteed. Applying (2.37) to the original system, we have i. : f(x,u1($,t,u2,--- yum))u27” ° yum) (241) For this system, the control inputs are U2, u3, - . - , um and the desired path to track is {yd2(8),yd3(8), - -- ,ydk(s). The feedback for u,, i Z 2, can be designed using any tracking control design algorithms. Then, replace the time variable t in the feedback to obtain a non-time based controller. The stability is guaranteed because (2.37) drives T — 1 approaching zero. To summarize, we assume that yd1(s) = s is the action reference. The first output y1(a:) is defined to be the reference projection 7(23). Assume that 7(cc) has relative degree, then feedback ul is defined by (2.37) satisfying (2.38)-(2.39). The other feedbacks are designed to tracking y,, for z‘ 2 2. The final non-time based feedback is to replace t by T(:z:) in all the feedbacks. It is guaranteed that the non-time based feedback drives the tracking error to zero. 34 CHAPTER 3 TRACKING CONTROL OF THE MOBILE PLATFORM 3.1 An Introduction to the Control of Mobile Platforms This chapter presents a novel method of non-time based controller design for path tracking of moving platforms. The purpose of this proposed controller is to design an event based controller which can coordinate the motion of mobile platform and manipulator arm. In this chapter, the state variables of the mobile platform are adOpted as the motion reference. In the control of mobile manipulators, a global state variable is used as the motion reference of both the mobile platform and the manipulator arm. The key to the proposed control method is the introduction of a suitable motion reference other than time, which is directly related to the desired path and the mea- surable system output. It enables the construction of control systems with integrated planning capability, in which planning becomes a real-time closed-loop process. As a result, the control system is capable of coping with unexpected events and un- certainties. This new design method converts a controller designed with traditional time-based approaches to a non-time based controller using a reference projection. It significantly simplifies the design procedure for a non-time based controller. The method is exemplified by the design and experiment for the tracking control of car-like mobile robots. 
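Before turning to the vehicle model, the conversion just described can be illustrated numerically on the linear example of § 2.3.1. The following Python fragment is only a minimal simulation sketch using forward-Euler integration; it assumes the two non-time based feedbacks u(x) = 2x1 - 1 and u(x) = 2x1 - 1 + (3/2)(x2 + 1), obtained by replacing t with the reference projection T(x) = x1, and simply reproduces the unstable and stable behavior derived in that example.

    import numpy as np

    # Linear example of Section 2.3.1:
    #   x1' = -2*x1 - 2*x2 + u
    #   x2' =  2*x1 +   x2 - u
    # Desired path: x2 = -1, parameterized by xd(s) = (s, -1) with s = t.

    def simulate(u_of_x, x0, dt=1e-3, T=10.0):
        """Forward-Euler simulation under a non-time based feedback u(x)."""
        x = np.array(x0, dtype=float)
        for _ in range(int(T / dt)):
            u = u_of_x(x)
            dx = np.array([-2.0 * x[0] - 2.0 * x[1] + u,
                            2.0 * x[0] + x[1] - u])
            x = x + dt * dx
        return x

    # Feedback obtained from u = 2t - 1 by replacing t with T(x) = x1: unstable.
    u_unstable = lambda x: 2.0 * x[0] - 1.0
    # Feedback obtained from u = 2t - 1 + 1.5(x1 - t) + 1.5(x2 + 1): stable.
    u_stable = lambda x: 2.0 * x[0] - 1.0 + 1.5 * (x[1] + 1.0)

    x0 = [0.0, -0.8]                    # start slightly off the path x2 = -1
    print(simulate(u_unstable, x0))     # x2 diverges, since x2' = x2 + 1
    print(simulate(u_stable, x0))       # x2 -> -1 while x1 keeps advancing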
Based on the properties of the desired path, two car-like mobile robot controllers are proposed. According to the type of the desired trajectory, the time based controller is mapped to a non-time based controller by choosing appropriate motion reference. The controllers are implemented and tested for different trajecto- ries such as sine waves, circles and ellipses. Lab experiment on mobile platform are carried out in an environment with unexpected obstacles and unexpected stop for at arbitrary time. 35 3.2 Path Tracking Control for Mobile Robots 3.2.1 Mathematical Model of the Robot In this section, the general design algorithm is applied to the model of mobile robot shown in Figure 3.1. The rear wheels are aligned with the vehicle while the front wheels are allowed to spin about the vertical axes The robot is a rear wheel driven robot. The system is constrained by allowing the wheels to roll and spin, but not slip. Let (x, y,0, ¢) denote the configuration of the robot, parameterized by the location of the rear wheels. The dynamical model of the mobile robot can be represented as [45]: i: = ulcosél g = u sin6 . ”I (3.1) 6 = jitanqb (IS: “’27 where ul corresponds to the forward velocity of the rear wheels of the robot and U2 corresponds to the angular velocity of the steering wheels, the angle of the robot body with respect to the horizontal axes is 6, the steering angle with respect to the robot body is ¢, (1:, y) is the location of the rear wheels, l is the length between the front and the rear wheels. Let Xd = [ctd(t), yd(t), 64(t), ¢d(t)] denote the desired states of the mobile robot at time instance t, the purpose of the path tracking control is to design a controller ul é u1(Xd,t),u2 = u2d(Xd,t) such that the robot will asymptotically track the desired path yd = f (1.3). By using the reference projection method, the time based feedback laws can be converted into the time invariant feedback 211(Xd, T(-1:)) and 112(Xd, T (13)) The virtual time T(a:) is designed by the non-time based motion reference 5. The path control of a nonholonomic robot has been discussed in the literature. 36 > X Figure 3.1: The configuration of a mobile robot. The system model can be linearized based on certain path and a feedback control was designed using the LQR approach[22]. The disadvantage for this class of methods lie in that the controller is path specified. The controller is dependent on the charac- teristics of the path. Exact linearization approach, such as the chained form, has been developed and used in the open loop path planning [45] and feedback control design [21, 60]. However, singular points exist in the derivation of chained form, the feedback controller based on the chained form therefore cannot track certain path. In this section, two classes of paths are discussed. One is curve without U-turns, an exact linearization and feedback control design similar to chained form is described. For a curve with U turns, an exact linearization is presented and feedback controller is designed. Both the time based controllers are converted to non-time based controllers using the reference projection approach. 37 3.2.2 Tracking Curves with no U-tum Suppose that the desired path is the following function of time :6 = wt), 31 = yd(t)- In this section, a desire path without U-turn is assumed, i.e. for any value and, there is at most one value yd such that (:L'd,yd) is on the desired path. This implies that id(s) > 0. Let’s first find a feedback for ul. 
Since d —(a: — $01) = ulcosfl — std, dt the control input ul can be defined by u1 = —1—(:'cd - (a: —— 1:01)). (3.2) Under (3.2), the first equation of (3.1) becomes i(:1: — 23(1): —(:23 — 13d). dt Its solution asymptotically approaches to zero as t ——> 00. The following change of coordinates is defined to achieve the linearization of the 38 last three equations in (3.1), 21 = y 22 = sin6 z3 = leosOtangb 1 m (3.3) 01 = _27 sin 0(tan W2 [3 __ c050 1 — lcos2¢ U: (11+B1U2 The last three equations in (3.1) are transformed into 21 = ”“122 22 = U123 (34) 33 = ’0 Define €1=Zi—yd 82:22—%{ (3.5) 1 1d yd 6 =2 ———— 3 3 Uldt 11.1 the error dynamics of the last three equations is represented by B1 = U182 é2 = U163 (36) é _ v d 1 d E 3 — dt uldt ul The right side of the last equation in (3.6) satisfies (1 1 d yd _ Jd _ 3.2/am + 3mm _ WI (3.7) —__— — 3 _ 2 3 4 : dt uldt ul “'1 a1 “-1 u1 39 Notice that ill contains the term (f), which depends on 112. Therefore, the expression (3.7) has the form a2($9y363¢9 t) + fi2($3yi61¢’7t)u2 (3'8) To stabilize (3.6), v is defined by ’U = a2($iy36)¢at) + [82(xvy363¢at)u2 (3.9) +U1(0181 + 0.282 + (1383) Then, (3.6) is equivalent to 81 = 21.182 é2 : 11.183 (3.10) 83 = U1((1.181 + 0282 + (1383) Select the values of a,,z’ = 1,2,3, so that all the eigenvalues of l010 A=001 (11 a2 03 d are on the left half plane. Since :isd(s) > 0, ul (t) > 0 in a neighborhood of (:rd(t), yd(t)), fot u1('r)dT increases. The solution of (3.10) is e(t) 2 e0 exp(A/0 u1(T)dT). Since A is stable, tlime(t) = 0 as fot u1(T)dT increases. Therefore, y — yd(t) ——) 0 when t ——> 00. By solving for 71.2 from (3.9) and the last equation in (3.3),u2 can be found. 40 The feedback is _ (I2 — 01+ 01(0181‘l‘ 0282 + 0383) u 3.11 2 a — a l ) In summary, suppose the desired path is $d(v(t)) = t. yd(v(t)) = $th (3-12) The non-time based control ul and 112 can be computed as following id - (:17 - (L‘d) a1 : cos6 ’ u—a2—a1+ ul (ae+ae+ae) (3.13) 2 a-a a—a “ 22 33 The coefficients a1, a2 and a3 are obtained from the linear control design. Remark. A circle or an ellipse does not satisfy the condition id > 0. 6 = 3:32: are the singular points of the system. The controller (3.11) does not apply. This case is discussed in the next subsection. <1 The feedback (32,311) depends on time. However, it can be converted into a time- invariant feedback using reference projection as proved in § 2.2. Using the method developed in § 2.2, the feedback (32,311) is changed to time-invariant feedbacks for the tracking of the curve y = sin :r. The key step is to find a motion reference for the task. The'design method proposed in this dissertation is not based on a specific motion reference. It works for arbitrary choice of the reference. Different from circles and lines, arc length and orthogonal projection are not the best choice if the desired path is y = sin 1‘, because the computation of arc length for this curve is complicated. In this case, it will be shown in § 3.3 that s = a: is a much simpler choice of the action reference. The lab experiment based on this controller is shown in § 3.3. 41 3.2.3 Tracking Circles and Ellipses As remarked in the previous section, the controller (3.13) fails to track trajectories having U-turns. For a class of trajectories such as circles and ellipses, another con- troller is proposed in this dissertation for the mobile robot. Based on the same state equation (3.1) and the desired trajectory ($d(t), yd(t)), the system is first transformed into a polar form. 
A controller is then derived based on the polar axis. By defining mm 212=tang :c (3.14) which is equivalent to a: = rcoszl) (315) y = r sin 11) the system dynamic model in polar axis is . 1 . . r = ;(m‘ + 313/) 1 = ;(r cos will cos 0 + r sin 21ml sin 6) (3-16) = ul cos(0 — 111) 1 gm — (by 1 + (31/102 $62 1 = T—2(u1 sin 0r COSI/J — ul cos 6r sin 21') (3-17) U1 = sin(0 — 1(1) 7. 42 System model (3.1) is then transformed into ”(A = % 3111(6 — 1(1) 7': = :1 cos(9 - 10) (3.18) 6 = 71 tan (12 ct}: w The desired trajectory in Cartesian space is also transformed into polar axis and denoted by rd, 112d, 6d, 96,) respectively. Considering only the first equation of (3.18), ul is designed to stabilize i/J. By defining 7‘ . = mWa - kiw - 1%)), (3-19) 111 where k1 is greater than zero, the first equation (3.1) becomes in - w.) = —k1(w — a), (320) which is asymptotically stable. Substituting (3.19) into the second equation of (3.18) and defining e, = r — rd, it becomes i~ = r ctan(6 — maid — k1(zp — 31.1)). (3-21) Taking ctan(6 — #1) as the control input of this equation, a feedback controller which stabilize (3.21) could be 1 ctan(9 _ 1(1) : f(d’d _ [910,0 — 7(3)) (Td — k28r). (3.22) 43 where k2 > 0. Therefore, 6 = 11) + arcctan( . 1 (7'31 - (926%)) red - (WW) — 1a)) (3.23) = 910311515) For the third equation of (3.18), defining e9 = 6 — 91, the error dynamics of 0 can be derived, 9 = :ul—ltanqb 11.1 l (3.24) tan 03 — 91. Taking ()5 as the control input, the feedback controller for (3.24) is (b = arctan(ifih — k389)) lsin(0 - 11)) f(lbd — (91(1)!) - 1%)) :2 92(T, 0, 1,0, t). = arctan( (g. — k3e9)) (3.25) For the fourth equation, let 64, = (b - 92, 62¢ 2 112-95 (3.26) the controller of the system is finally designed by 112 : 92—k4e¢ (3.27) 44 To summarize, the time depended feedbacks of system (3.18) are r . u. = ma. — W — a» U2 = 92 — (“46¢ sin(6 — w) . (328) = . — k 92 arctan(rwd _ 191(1/1 _ 1%)) (91 389)) 91 = ’1/1 + arcctan( . 1 (7'11 — Merl), TWM — k1“? “ 11101)) where khi = 1, 2, 3, 4 are greater than zero. The feedbacks in (3.28) are also time dependent controllers. However, it can be converted into a time—invariant feedback using reference projection as proved in § 2.2. Since the design method proposed in this dissertation is not based on a specific motion reference, different references according to different curves are used. As it is shown in § 3.3, a combination of the state variables can be used for convenience. 3.3 Experiments and Results The non-time based controllers have been implemented on a Nomadic XR4000 mobile robot. The mobile robot has four wheels. Each wheel can be individually steered. However, in order to implement the above control scheme, only two front wheels are steered in the experiments. The other two wheels maintain Straightforward config- uration. The distance between front and rear wheels is 0.3053 m. In the following subsections, three experimental results are given. In § 3.3.1, the results of controller (3.13) are shown. In § 3.3.2 and § 3.3.3, the results of controller (3.28) are reported. 3.3.1 Tracking Sinusoid Waves The desired trajectory is given by: yd = sin(s) 1rd = s, 45 Figure 8 (Y Trajectory) Home O (X Trajectory) 0.8' 7. A . get- i . 25 E . 151.». § § 23* E - 32* 2 B, , -o.e 4 4 4 . o - - ‘ 0 20 40 60 80 O 20 40 60 80 ((Soc) f(Sec) FigurecWEwor) “04 medMEmr) 005 f ' ' 7 ' ‘.' .' o». 6» -o.05» 5»~ - 1 E —o.1) EA» . r r é-OJS) 331 -0.2* 2 - —0.25*~ , 1’ ‘1 -o.3 - A . 
Figure 3.2: Tracking a sinusoid curve with initial error.

where s is the motion reference. The velocity along the desired path is defined by s = vt, where v is a constant. Notice that the desired velocity is constant in the x direction, but the desired velocity along the sinusoid wave is not. This change of desired velocity can be tracked by a non-time based controller using the reference projection method from § 2.2. In the controller, the closed-loop poles are assigned to be -3, -5 + i, -5 - i. The reference projection is s = x, and the synthetic time is t = x/v. It is easy to check that the relative degree of the reference projection is 1. Therefore, the feedback in § 3.2.2 satisfies the design requirements of § 2.3.3. It is converted to a non-time based controller and then applied to the Nomadic XR4000 mobile robot.

If the initial position of the robot is not on the desired path, as shown in Figure 3.2, the controller is able to command the robot to approach the desired path. There is an initial error of 0.3 m in the experiment, but the tracking error converges to zero after the robot starts moving. This experiment demonstrates the ability of the controller to rapidly overcome an initial tracking error. It is worth noting that the XR4000 is a holonomic robot; it is treated as a nonholonomic robot in the experiment.

Figure 3.3: Time based control vs. non-time based control.

A more interesting result of the experiment is presented in Figure 3.3. While tracking a pre-planned desired path, the robot was stopped by an unexpected obstacle at the 10th second. Figures 3.3a and 3.3b show the results of a time-based controller. Its desired input was a function of time, so the desired position kept changing even though the robot was blocked. As a result, the tracking error kept increasing until the system became unstable. For the non-time based controller, the results are completely different. As shown in Figures 3.3c and 3.3d, when the robot is blocked, the action reference remains constant because the reference projection depends on the position of the robot. Therefore, the tracking error remains constant. Furthermore, after the obstacle is removed at the 20th second, the robot automatically resumes the original motion without re-planning. This experiment demonstrates the capability of the non-time based controller for coping with unexpected events. It should be noted that the non-time based controller is not simply replacing time by a non-time based motion reference. The new motion reference is determined by sensory measurements; it reflects the key information about the state of the system. The reference projection acts as an on-line real-time planner.

3.3.2 Tracking Circles

This experiment is done under the control of (3.28) with feedback gains k1 = 1, k2 = 0.4, k3 = 8, k4 = 3. The desired path is

xd^2 + yd^2 = a^2, with yd = a sin(s), xd = a cos(s),

where s is the non-time based reference. It represents the angle corresponding to the arc tracked by the robot. The reference is computed directly from the location of the robot, (x, y).
In the polar coordinate model of the vehicle, the reference projection has relative degree 1. Given a constant relation between the reference s and the time t, s = vt, the synthetic time is t = atan2(y, x)/v, where atan2(·, ·) maps (y, x) to [0, 2π). The radius of the circle in the experiment is 0.5 m. Figures 3.4 and 3.5 record the experimental results of tracking a circle with the controller (3.28). From Figures 3.4a-3.4d and 3.5a-3.5d, it can be seen that the static tracking error is less than 1 mm.

Figure 3.4: Tracking a circle without initial error.

Figure 3.5: Tracking a circle without initial error.

3.3.3 Tracking Ellipses

This experiment is also done under (3.28). To track an ellipse, the desired path is given by

xd^2/a^2 + yd^2/b^2 = 1,

where s is the non-time based reference. Instead of the original state variables of the system, s = ψ is chosen as the motion reference in this experiment. It is computed from the measured outputs x and y by ψ = atan2(y, x). For simplicity in the lab experiment, a constant relation between the reference and the time is chosen, though a variable relation depending on the curvature of the path would be better. The relation is defined by ψ = vt, so the synthetic time is t = ψ/v. It should be noted that the motion reference here is not an existing path parameter of the ellipse; that is, the relations x = a cos ψ and y = b sin ψ do not hold.

Figure 3.6: Tracking an ellipse without initial error.

Figure 3.7: Tracking an ellipse with initial error.

To test the control law on a trajectory with large curvature, a small ellipse with a = 0.6 m, b = 0.5 m is used. The feedback gains are selected as k1 = 1, k2 = 0.4, k3 = 8, k4 = 3 in the experiments. From the experimental results (Figures 3.6 and 3.7), it can be seen that the robot approaches the desired trajectory asymptotically and the static tracking error is bounded by 5 mm.

Figure 3.8: Tracking an ellipse in the presence of an unexpected obstacle.

An interesting result is shown in Figure 3.8. An unexpected obstacle appeared at time t = 16 sec.
When an obstacle is detected by the laser sensor equipped on the mobile robot, it is seen that the robot is slowed down. This is done by relating the motion reference to the distance with respect to the obstacle. Instead of using a constant 220 to relate the motion reference and time t, the distance from the obstacle to the robot, (1, is also considered into the relation. For simplicity of the implementation, three situations are considered. When d is large enough, the distance to the obstacle is not considered in the design of the motion reference. When d is small enough, 8 will keep constant to suspend the system. It is scaled down gradually when the obstacle is approaching. Of course it can also be designed as continuous function of the distance 51 to the obstacle. From figure 3.8(a) and (b), it is seen that the mobile robot almost stopped when an obstacle is close. When the obstacle is removed gradually, the robot resumes its action and the originally path is tracked. Figures 38(0) and (d) show the control commands. This experiment has demonstrated that the relation 3 = e(t) can be designed to accommodate the sensor information without any effect on the design of the time based controller. 52 CHAPTER 4 SINGULARITY ANALYSIS AND SINGULARITY-FREE CONTROLLER OF THE ARM 4. 1 Introduction One advantage of the mobile manipulator is that the kinematic redundancy can be used to avoid singular configuration of the manipulator arm. The purpose of inves- tigating robot arm singularity is to ensure the robot arm dexterity by positioning the mobile platform and/or ensure the stability of the robot arm while approach- ing singular configurations. This chapter first analyzes singular configurations of the manipulator arm, and then presents a hybrid system design of robot arm con- troller for singularityless robot motion control. To ensure feasible robot motion in the neighborhood of kinematic singularities as well as at the singularities, a hybrid robot motion controller is designed to integrate joint level control and task level con- trol. The hybrid control system has a two-layered hierarchical structure, a discrete layer and a continuous layer. The workspace is partitioned into subspaces based on the singular configurations of the robot. Switching between continuous controllers is involved when the robot travels across subspaces. The discrete layer is comprised of the switching logic and the continuous layer includes the controllers used in various subspaces. With the hybrid controller, the robot not only can pass by the singular configurations, but also can go through and stay at the singular configurations. The stability of the hybrid system is analyzed using Multiple Lyapunov Function theory. The singularity-free motion control for mobile manipulators will be presented in the Chapter 5. The existence of robot arm singularities can be seen and analyzed from its task level controller. The dynamic model for a robot arm with n joints in the joint space 53 can be written as D(q)i1' + C(q, (i) + 9(a) = u, (41) where q = {q1, q2, - - - ,q,,}T is the n x 1 vector of joint displacements, u is the n x 1 vector of applied torques, D(q) is the n X n positive definite manipulator inertia matrix, C(q, q') is the n x 1 centripetal and coriolis terms, and g(q) is the n x 1 vector of gravity term. A robot task can be defined in either the joint space or the task space. Robot controllers can be designed accordingly. 
For a robot task specified in the n dimensional joint space, denoted by qd, 4", if, a computed torque controller at joint level can be formulated as: “'1 : D(qqu + KquqZ + KPqeql) + C(Qa‘l) + 9(‘1)a (42) where Kv and K p are feedback gain matrices, e 1 = qd — q, and e 2 = rjd — (j. The ‘1 q q ‘1 error dynamics in joint space can be described by éql ___ W (4 3) éqg = —qu8q1 — ququ. It is straightforward to prove that system (4.3) is asymptotically stable for appropriate gain matrices 1(1),),qu [6]. Irrespective of the current robot configuration, the system (4.3) can track any reasonable trajectories given as qd, 4", if. In other words, controller (4.2) is valid in the entire workspace. However, a robot task is generally represented by the desired end-effector position and orientation, a robot controller in task space is therefore desirable. To develop a task level controller, the robot model (4.1) needs to be represented in the form of task level variables. Let me = {2:1, 2:2, - -- ,xm}T represent the position and orientation of 54 the end effector in the task space, the relation between joint space variable q and task space variable me can be represented by me = he(q), where h denotes a vector consisting of m scalar functions. In general case, m = 6 and me E W" under certain assumption. The n dimensional joint velocity and m dimensional end effector task velocity can be related by the Jacobian equation: is = e(j, where Je is the Jacobian matrix. For a nonredundant robot, n = m, Je is a square matrix and the relation between joint level and task level variables is quite straightforward. For a kinematically redundant manipulator, n > m, which means the manipulator has more degrees of freedom than necessary to execute a task. The concept of extended task space for kinematically redundant manipulators was introduced in [12]. Based on the kinematic redundancy resolution algorithms, an independent constraint task vector 1:6 = hc(q) E Rn‘m can be chosen. Therefore, for a redundant robot, the task involves not only the end effector motion, but also the achievement of additional performances by utilizing the kinematic redundancy. In the mapping 236 = Jc (j, the constraint task Jacobian matrix Jc can be chosen as a matrix consisting of the orthonormal vectors that span the null space of JC. The Jacobian equation in the extended task space can then be TxTT 83C represented by i‘ = Jq', where a: = {x is task level variables in the extended task space and is the Jacobian matrix in the extended task space. It is worth noting that .1le = 0[12]. In the extended task space, the redundant manipulators can be treated as nonredundant manipulators. Therefore, the acceleration in task and joint space can be related by :‘c' = id + Jr'j. Considering (4.1), an robot dynamic model in the term of extended task space variables can then be described as follows: DJ‘ICE - 1(1) + C(q, (i) + g(q) = u (44) 55 where the Jacobian matrix is n x n. Given the desired path in task space, and, the robot controller in task space can be described as: “*2 : DJ‘lci‘d — Jq + KVxer2 + KP1811)+ C(qr q) + g(CI), (45) where 8:1 =$d—ar,e$2 zid—i. The error dynamics in task space can be described as: é:rtl = 6:2 (4.6) écr2 : —KP:re;rl — erel‘2) It is also easy to show that system (4.6) is locally asymptotically stable for appropriate gain matrices K p1, K van [6]. 
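For reference, the two control laws above can be written compactly in code. The following Python fragment is a minimal sketch rather than the implementation used in the experiments: the model terms D(q), C(q, q̇), g(q), the forward kinematics h(q), the Jacobian J(q) and its derivative are assumed to be supplied by the user (for example, by a robot dynamics library), and the plain matrix inverse in the task-level law is exactly what breaks down at the singular configurations analyzed in the next section.

    import numpy as np

    # The "model" object is assumed to provide D(q), C(q, dq), g(q), h(q),
    # J(q) and Jdot(q, dq) for the arm; these are not defined here.

    def joint_level_control(q, dq, qd, dqd, ddqd, model, Kpq, Kvq):
        """Computed-torque controller (4.2) for a task given in joint space."""
        eq1, eq2 = qd - q, dqd - dq
        return model.D(q) @ (ddqd + Kvq @ eq2 + Kpq @ eq1) \
               + model.C(q, dq) + model.g(q)

    def task_level_control(q, dq, xd, dxd, ddxd, model, Kpx, Kvx):
        """Task-space controller (4.5); requires an invertible (extended) Jacobian."""
        J = model.J(q)
        x, dx = model.h(q), J @ dq
        ex1, ex2 = xd - x, dxd - dx
        v = ddxd - model.Jdot(q, dq) @ dq + Kvx @ ex2 + Kpx @ ex1
        # u = D J^{-1} v + C + g; solve() applies J^{-1} without forming it explicitly
        return model.D(q) @ np.linalg.solve(J, v) + model.C(q, dq) + model.g(q)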
4.2 Robot Arm Singularities By comparing the controllers in (4.2) and in (4.5), it can be seen that the existence of controller (4.5) depends on the existence of J ‘1, while (4.2) does not. At the singular configuration or in its vicinity, the Jacobian J is singular or ill—conditioned. The determinant of J ‘1 could be very large, which will result in unreasonable large joint torques. The robot configurations such that |J| = 0 are called robot singular con- figurations. Theoretically, the robot singular configurations and their corresponding analytical singularity conditions can be obtained based on the analysis of the Jaco— bian J. To better understand the kinematic redundancy and singularity, consider the Singular Value Decomposition(SVD) of the matrix J J = UZVT = Zaiuwf, (4.1) 1'21 56 where U is the (n x n) orthonormal matrix of the output singular vectors ui, V is the (n x n) orthonormal matrix of the input singular vectors 2),, and E is a diagonal matrix whose elements are the singular values of J, 01,02,--- ,0”. At a singular configuration, one of the singular value a,- is 0. Thus, u,TJ(q) = uTUsz = 0. This means that motion in the joint space will result in zero motion in the output direction u, of the task space. This direction in the task space is called degenerate direction. Each singular configuration of the robot can be represented by a singular condi- tion in terms of the robot parameters and joint variables. The singular conditions can be used to check the proximity of robot configuration to the singular configuration. Since singular configurations may varies for mechanical structure, the singular con- ditions of manipulators are analyzed based on three classes of mechanical structures: (a) nonredundant manipulator with spherical wrist; (b) nonredundant manipulator with non-spherical wrist; (c) redundant manipulators. Corresponding to the singular configurations, a vector of singular conditions, denoted by I‘, can be derived. 4.2.1 Non-redundant Robot with Spherical Wrist A general task for an end effector motion requires six degrees of freedom, therefore, m = 6 can be assumed. For a 6-DOF robot manipulator, which consists of a 3—DOF forearm and a 3-DOF spherical wrist, J 2 J6 is a 6 x 6 matrix. Due to the mechanical structure of the spherical wrist, the motion of the position and orientation can be decoupled. As a result, the Jacobian can be decoupled in such that the singular configurations resulting from the arm and the wrist are separated. According to 57 [15],[67], the Jacobian J for a PUMA-like robot can be decoupled into: I W Je = 0 I 0 \I’ . - .. .J I W 0 I 0 \Il _ - L - where Jw = J11 0 ) J21 J22 ' _SOSA 608A CA CA ‘1’ = —c0 —30 0 30 Co . CA CA 0 d5 ‘ 8}; W 2 —d6 ° 6],; 0 L d6 ' 6, —d6 ' 61' —d481823 — 018102 — 0331023 - d261 J11 — d4c1323 + a1C1C2 + a3c1c23 — r1231 0 58 0 J22 J11 J21 Jw _d6'ej de-e, 7 0 d401023 — 020132 — (1301823 d481023 — 028182 — 0351823 —d4823 — 02 — a3€23 61(d4623 — 03823) A 81(d4623 - 03323) , —d4 323 — £13623 0 —81 —81 J21: 0 —c1 —c1 , 01823 —01€2384 " 8104 016402385 — 818485 + 6132305 J22 = 81823 —8162334 — €104 3104C2385 + C18485 + 81323C5 a C23 32334 C2305 — 823C485 er = 01(0236435 + 82365) - 318455, 6]- : 31(C23C485 + 82365) + €134.95, 8k = C2365 - 8230485- Here s,-, Q, 5,,- and a,- stand for sin(q,-), cos(q,), sin(q,- + q,) and cos(q.- + q,-) re- spectively. a2,a3,d2,d4 are robot joint parameters, (0,14,T)T denotes an orienta- tion representation (Orientation, Attitude, Tool angles) [67]. 
Since the singularity of \II caused by CA = 0 can be overcome by choosing other definition of the angle representation([76],[26]) and W is not singular, the singularity analysis can be ob- tained by checking Jw. Observing J11 and J22 of Jw, it can be seen that J11 involves only q1,q2,q3, which correspond to the arm joints, and J22 involves only q4,q5,q6, which correspond to the wrist joints. Singular configurations caused by J11 = 0 are called arm singularities and singular configurations caused by J22 = 0 are called wrist singularities. Since the determinant of J11 is (d4c3 — a353)(d4323 + 0,202 + a3cQ3), two singular configurations can be obtained. One is called boundary singular configuration, as 59 (b) Interior singular configuration (c) Wrist singular configuration Figure 4.1: Singular configurations of Puma 560 shown in Figure 1, (a), which corresponds to: 7b = d403 — a333 = 0. (4.3) This situation occurs when the elbow is fully extended or retracted. When fully extended, there is no way to move the end effector forward, this is the degenerate direction. The other situation is called interior singular configuration, as shown in Figure 1, (b), the corresponding singular condition is 75 = £14523 + 0202 + (13023 = 0- (4.4) In addition, the wrist singularity, as shown in Figure 1, (c), can be identified by checking the determinant of the matrix J22. The wrist singularity happens when two 60 wrist joint axes are collinear and the corresponding singular condition is 7112 = _35 = 0- (4.5) Therefore, a vector of singular conditions for a PUMA-like non-redundant arm with spherical wrist can be given as: d463 - 0383 d F = d4823 + a2c2 + a3c23 By checking the value of singular conditions ’7 E F, one can decide the proximity to the robot singular configurations. Denoting the manipulatability as v J JT , it can be seen that \/ J JT is proportional to I71; - 7,- - 7w]. 4.2.2 Non-redundant Robot with Non-spherical Wrist A robot manipulator with non-spherical wrist generally does not have a decoupled mechanical structure, though constructing a non-spherical wrist is easier. The inverse kinematics of a manipulator with non-spherical wrist generally cannot be computed in closed form and numerical computation needs to be applied. However, the singular configurations of a non-redundant robot with non-spherical wrist can also be obtained by considering the Jacobian [72]. For instance, the determinant of the Jacobian of a manipulator with non-spherical wrist can be derived as given in [72]: IJI = ka2a35385(a1 + (1262 + 0.3623 _ (158234), where a1, a2, (13, d5 are robot parameters. The singular configurations can be decou- pled into three kinds of singular configurations: (i) shoulder singularity, (11 + (12C2 + 61 a3c23 — (153234; (ii) elbow singularity, 33 = 0; and (iii) wrist singularity, s5 = 0. The singular condition vector is represented by r- -j 01 + (1202 + 03023 — d53234 F: 83 7 85 which is used to check the proximity to the singular configurations. 4.2.3 Singularity Analysis for a Redundant Robot Generally, a redundant manipulator which has more than necessary degree of free- dom is more flexible and dexterous than a nonredundant manipulator, though the singularity analysis of a redundant manipulator is more complex. A singularity could be caused by the singular value of .18, which is called a kinematic singularity, or the singular value of JC, which is called an algorithm singularity. 
For a redundant manipulator with spherical wrist, the singular configuration of robot arm can be obtained by checking the Jacobian matrix J8 [14]. The position singularity and orientation singularity can also be decoupled due to the mechanical structure. Some of the configurations are boundary configurations, which are non- escapable, whereas some are interior singularities, which are escapable [5]. However, a singular condition vector P can still be obtained by analyzing the analytical repre- sentation of the Jacobian matrix [14]. For a given constraint task, the singular configurations of Jc can also be determined in advance. However, with a full rank Je and JC, there are robot configurations for which J = [JZ, JZ]T loses rank. All these singular configurations caused by the choice of JC are called algorithmic singular configurations. For a given Jc, the algorithm singular conditions can be determined. Therefore, given a redundancy resolution algorithm and an extended task space, the singular condition of matrix J can be 62 determined. 4.2.4 Workspace and Singular Conditions The workspace of a robot, which is a complex volume calculated from the limit val- ues of the joint variables, can be described by two concepts: reachable workspace and dexterous workspace. A reachable workspace is the volume in which every point can be reached by a reference point on the manipulator end effector. The dexterous workspace is a subset of the reachable workspace, and it does not include the singular configurations and their vicinities. The dexterous workspace is therefore a volume within which every point can be reached by a reference point on the manipulator’s end effector in any desired orientation. Unfortunately the dexterous workspace is not a connected space. The singular configurations partition the dexterous space into dis- connected subspaces. For example, a wrist singular condition (4.5) of a nonredundant robot can be satisfied at almost any end effector position in the workspace. Thus, for almost any position in the workspace, there exists an orientation of the end effector which satisfies 7,” = 0. A robot path may therefore is composed of several segments of paths in the dexterous subspaces and several segments of paths in the vicinity of singular configurations. In the dexterous subspaces, there is no problem to control the robot at task level. When the task requires the robot to go from one dexterous subspace to another dexterous subspace, the robot end effector needs to go through the vicinity of singular configurations. At these configurations, the manipulator loses one or more degrees of freedom and the determinant of J approaches zero. In order to generate a reasonable motion in task space, the joint motion constraint may be violated. Based on the above analysis, the robot workspace, denoted by Q, can be divided into two kinds of subspaces, the dexterous subspace denoted by $20 and the vicinity of singular configurations denoted by {21 and (22. ()2 specifies a smaller vicinity of the 63 singular configuration. The definition of S21 and {22 are given as follows: 92={q€§R",7€I“I |i| a > 0. Figure 4.2 shows an illustrative plot of the subspaces. Figure 4.2: Illustrative plot of the robot workspace and subspaces It is worth noting that 90 U 01 U 02 = Q and S20 f] {21 fl 92 = O. A A region called dwell region is defined in 91 to avoid the chattering when switching occurs. 
The definition of a dwell region is given as: A={q€§ft",7€f‘| a<|7|-l - fl (4.11) w where m,1(t) is the switching condition. The inverse of J based on Jf is called J#. Here diag(v) denotes a matrix whose diagonal elements are vector v, and the remaining elements are zeroes. It is worth noting that (4.11) is different from (4.4) to reflect the characteristic of Puma-like robot. The controller of the robot in subspace $20 and 521 can be synthesized by: U3=DJ#(id—jq+KVIeIQ+KPIeII)+C+g' (412) Here a: = {191, pg, Pz,0,A,T}, where (p,,,,py,pz)T denotes the position of the end- effector in the task space. Substitute (4.12) into (4.4), the error dynamics of the system can be obtained as: 6.1-1 : 61'2 _ (4.13) 612 = (I - JJ#)($d — JQ) — JJ#(KPI€J;1 + erexg) To analyze the stability of the system, the term J J # in the error dynamics (4.13) is a key component. Considering (4.2), (4.7) and (4.10), .]J# can then be simplified as 70 follows: k = J J# ' T d, 0312 a; 03 a: cg 03, = lag 2 2 2 2 2 2 01 + ml 02 + mg 03 + mg 04 + m4 05 + ms 06 + m5 Defining . 0,2 km,” = min 2 , (1,1 0": + mi it can be seen that km," becomes smaller and smaller when the robot configuration is approaching singular configurations. The stability of the controller (4.12) in subspace 90 and 01 depends on k. In subspace {20, msl = 0, k = I , the error dynamics (4.13) in region (20 becomes érl = 612 (4.14) éz2 = -KP:re.rl — KV1'612 It is obvious that system (4.14) is asymptotically stable in subspace $20 for apprOpri- ate gain matrices. While in subspace {21, the error dynamics of the system can be described by éxl = 63:2 (4.15) (2,2 = (I — wad — Jq) — k(prex1 + erexg) According to the discussion in [11],[4l], a candidate Lyapunov function of (4.15) cab be defined as l V = ~2-[eflKerl + (,uexl + e12)T(,uerl + eI2)] (4.16) where K is is positive definite matrix and ,u is positive constant. The derivative of 71 the candidate Lyapunov Function can be derived by V = efiKes + (#611 + 8x2)T(#€I2 + (I — W — Jr) — when + Khan» = -—p6:1kKPreIl — 8:2(kKVI — ””812 +(uex1 + e12)T(I -— k)(:’td —— Jq) + 9:10 — k)(pr + ,qux)ex2 Here K = K P15 + qul. — #21. a can be chosen such that K v: — ,uI and K are positive definite. Denoting vector v = Jq, an upper bound of the vector term can be given by[11]: 7' llvll S -2—(vfu + 2vmllex2l| + llex2ll2), min where ’UM denotes the maximum norm of the desired end effector velocity, om," denotes the minimum of the singular value of matrix J, and 6 7" = max ZHNJP, q i=1 1' 8q and J,- is the ith row of matrix J. Thus where N, = V < _kmin)‘minllean“2 +(1_ kmin) ' ||€x||(k1 + k2||exll + k3llexll2) = —(1 — 6)k,,,,-,,A,,,,An|leiII2 + (1 - kmin)' ||€x||(k1+(k2 — k4lllexll + k2l|3xll2l = -(1-0)kmm/\mmllezll2. v (k1+(k2-k4)llezll+k2|lexll2)$0- where r 2r k1: aM + \/1+j1.2 22,2”, [:2 = (/1+11202 UM + “(I —- k)(KpI +#sz)||, 0.2 min min 72 T akrnin Amin k3 = WUQ 1 k4 :- 1 min 1 — kmm and Am," is the minimum singular value of uK p1 and KVJ, — pI/km,,,. It can be shown that (k1+(k2 — k4)||e1[| + kglleIIP) g 0 can be satisfied by regulating k4. From the above derivation, it can be seen that the stability depends on the value of km". In subspace 520, km," 2 1, V is negative definite and the system is asymptotically stable. In subspace S21, kminAmm — (1 — lam-"MW S 0, the system is uniformly asymptotically ultimate bounded. The bound is (1 where (1 < (2 are the root of km,nAm,-,, — (1 — mam/17? = 0. 
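The behavior analyzed above hinges on the damped least-squares inverse J#. A minimal Python sketch of such an inverse is given below; the scalar damping value and the threshold used to activate it are illustrative assumptions, one simple way to realize the switching term m_s1(t), rather than the exact scheme used in the experiments.

    import numpy as np

    def damped_pinv(J, m=0.04, sigma_min=0.1):
        """Damped least-squares inverse J# (sketch).

        Directions with a small singular value are damped so that the factor
        sigma_i / (sigma_i**2 + m_i) stays bounded near a singularity; away
        from singularities m_i = 0 and J# reduces to the ordinary inverse.
        """
        U, S, Vt = np.linalg.svd(J)
        m_i = np.where(S < sigma_min, m, 0.0)   # activate damping near a singularity
        return Vt.T @ np.diag(S / (S**2 + m_i)) @ U.T

    # With this inverse, J @ damped_pinv(J) = U diag(sigma_i^2/(sigma_i^2 + m_i)) U^T,
    # which is the matrix k appearing in the error dynamics above.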
However, in subspace (22, the value of km," is close to zero, and —km,nAm,A,.| leg,“2 + (1 — lam-n) - ||ex||(k1 + k2||e1|| + k3||ez||2) may be greater than zero. The system sta- bility can not be ensured. When the robot approaches singular configurations, i.e., in region (22, some of the elements of k become very small. Due to this, the corre- sponding elements of the gain matrices become small. Furthermore, at the singular configurations, lam-n = 0. Assuming 0,- is zero, the ith element of (4.13) can be written as: 6:11 = 8:21 651-21“: 513:1— («1(1): It can be seen that the system is an open loop system. Though it can be proved the system is ultimately bounded[41], large errors in the task space are expected in the vicinity of singular configurations. Controller design in subspace 92 To achieve stable control in region 92, a switching control is needed so that joint level controller (4.2) can be enabled in {22. Two discrete variables, m7 and ma, are defined 73 to represent arm and wrist singularities respectively, m7(t) = m7(t—)sgn(ai+6—l7'b|)®sgn(0b+6—l7bl)®0 EB sgn(ai _ Ira/1]) ® sgn(ab _ Irybl) 69 0 (4.17) m(t) = m(t—)“’g“("“'+“"l“’D630 EB sgn(aw - In!) 69 0- Here m7(t‘) and m3(t‘) represent the values of my and ma before a time instance t respectively. a,, a), and aw are used to specify subspace $22. In this design, a and 6 take different values for different singular configurations. Depending on the values of 7b, 7, and 7w, m7 and ms will determine what kind of continuous controllers should be used. It is worth noting that in region A, the values of m7 and ms not only depend on '75, 7,- and 7,0, but also depend on the previous value of m7(t‘) and m3(t‘). The A region, which is defined in (4.7), is called a dwell region. It is introduced to avoid chattering along the switching surface. In other words, the switching surfaces for switching into joint level control and switching out of joint level control are different. After the controller switches into joint space control, it can not switch back to task space control unless the robot reaches a larger region 02 U A. This strategy can effectively avoid chattering and ensure the system stability. A switching matrix, 77132, is defined based on m7 and mg as: m t I 0 ms2(t = 7( ) 3‘3 (4.18) 0 (m(t) <9 ms(t))13x3 Hybrid robot motion controller The singularityless robot motion controller can be represented by ’U. = (I — m32)1t3 + msgul (4.19) The above controller involves both continuous and switching controls, as shown 74 in Figure 4.3. It forms a two-layered hierarchical hybrid controller, a continuous layer and a discrete layer. The switching conditions are dependent on the proximity to the singular configurations and the previous controller status. When the robot approaches the singular configurations, the hybrid controller first uses damped least squares to achieve an approximate motion of the end effector. It will then switch into joint level control if the robot reaches the singular configuration. Robot ]——._+ Gravity. Coriolis and Centrifugal terms Task Level Council—— Task Level Control - with DLS Joint bevel Control [— + — ‘— l Singular Conditions ]¢—- Figure 4.3: Hybrid system controller framework For a robot with a spherical wrist, it is worth noting that if the position and orien- tation control are decoupled, joint level control and task level control can be enabled simultaneously. This happens when m7 = 0 and m3 = 1. It means that the position is controlled at task level and the orientation is controlled at joint level. 
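The discrete layer described above amounts to a hysteresis rule: the controller switches to joint-level control when a singular condition enters Ω2 (|γ| ≤ a) and switches back to task-level control only after the condition has left Ω2 ∪ Δ (|γ| ≥ a + δ). The Python sketch below is a simplified, scalar version of the switching variables in (4.17); in particular it combines the torques in an all-or-nothing fashion, whereas the matrix ms2 in (4.19) can keep the position at task level while the wrist is handled at joint level.

    class SingularitySwitch:
        """Hysteresis switch for one singular condition gamma (sketch of (4.17)).

        state = 1 enables joint-level control for this condition, state = 0
        keeps task-level control.  The dwell region a < |gamma| < a + delta
        avoids chattering: once joint-level control is entered, it is kept
        until |gamma| grows beyond a + delta.
        """

        def __init__(self, a, delta):
            self.a, self.delta = a, delta
            self.state = 0

        def update(self, gamma):
            if abs(gamma) <= self.a:                   # inside Omega_2: switch in
                self.state = 1
            elif abs(gamma) >= self.a + self.delta:    # left Omega_2 and Delta: switch out
                self.state = 0
            # inside the dwell region the previous state is kept
            return self.state

    def hybrid_control(u_task, u_joint, switches, gammas):
        """Simplified version of u = (I - ms2) u3 + ms2 u1 with a scalar switch."""
        ms = max(sw.update(g) for sw, g in zip(switches, gammas))
        return u_joint if ms else u_task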
However, the position and orientation are generally coupled if the tool center of the end effector is used to indicate the desired position. For a given task and = {p3, pg, pg, 0", Ad, Td}, the desired position {pi pg, pf} is coupled with the desired orientation {01, Ad, Td}. However, it can be easily seen that after proper transformation of a given task, the desired position at the wrist center can be taken as the desired position at task level. It is only related to the joint angles q1,q2,q3 and can be obtained from rd [67]. The orientation control will only depend on qff, qg, qg. The end effector position and ori- entation can therefore be decoupled and can be controlled separately. Since the wrist singular configuration may happen at almost any end effector position, the separated position control and orientation control can ensure a relatively larger continuous dex- 75 terous workspace for position control. Path planning for switching In the regular region 90 and region 521, the task is represented in Cartesian task space. The task, however, in region {22 needs to be transformed into joint space since a joint level controller is used. Given an" in task space, joint level command qd needs to be calculated in subspace S22. Substantial research has been done on this topic [15, 47, 75, 56, 49]. By the relation between task space and joint space variables, x = (1(9), a first order approximation of the kinematic function at a configuration q can be written as: div = J (q)dq, where do: and dq represent motion increment in task space and joint space respec- tively. At robot singular configurations, rank( J (q)) < n. When a manipulator passes through a singular configuration, a certain degrees of freedom in the singular direction or the degenerate direction is lost. Therefore, there would be no feasible solution for the manipulator to move into the singular direction. For a path close to a singular configuration, an exact solution is possible but very large joint motions are required if the assigned dx has components along directions which become degenerate at the singular configuration. To balance the accuracy and feasibility of the joint space motion, the following criteria is used to find the inverse kinematics: mqinfwllldrr - J(q)alq)||2 + wzllqd - q||2}- 76 where w, and 7112 are positive weight factors. This criteria minimizes the deviation after the task is mapped into joint space, and tries to find qd such that the least joint movement is needed. An optimal solution to this can be written as [47]: dq = w1(w1JT(Q)J(€1)+ wzl—1J(Q)Td33 In the computation of dq, it can be found that the joint space variable can be divided into two subspaces: joint variable that can be computed by normal inverse kinematics and joint variable that cannot[15, 75]. For example, at the wrist singular configura- tion, Figure 4.1(c), only q}; and qg cannot be obtained by the inverse kinematics; they are gal and q‘3’ at the boundary singularity, Figure 4.1 (a); and qf and q; at the interior singularity, Figure 4.1 (b). Thus, the desired joint level command can be grouped into two subsets: q“) denotes the joints that can be obtained by inverse kinematics, (1(2) denotes the joints that cannot be obtained by inverse kinematics. By checking the values of singular conditions 7,,, 74,711,, the elements of q(2) can be determined. To compute these joint level commands (1(2), the following criteria are considered: ngpiwludz — J(q)dq)||2 + w2|lqd — 4112}. ‘1 Motion planning should also be considered in the joint space. 
In the vicinity of a singular configuration, a pure geometrical path can always be tracked, but a trajectory which includes both geometrical and temporal specifications cannot be tracked. In the other word, the geometrical parameter and the temporal parameter can be considered separately, as discussed in[56, 49]. It is easy to understand that an appropriate do: would result in a large dq in the neighborhood of singular configurations. A new path can be planned to satisfy the velocity and acceleration constraints. The continuity of joint velocities and task level velocities are also considered in the planning. After switching from task level control to joint level, the initial velocity for every joint is 77 the joint velocity prior to switching. If the control switches from joint level control to task level control, the velocity continuity can be ensured by taking :i: = Jr} as the initial velocity in task space. 4.4 Stability Analysis of the Hybrid Motion Controller In Section 4.3, it has been shown that task level controller (4.12) is uniformly ultimate bounded in the workspace except subspace $22. And joint level controller (4.2) is asymptotically stable in the entire workSpace. This section discusses the stability of controller (4.19) when switching between (4.12) and (4.2) is involved. Theorem Suppose controller (4.2) is asymptotically stable in workspace and (4.12) is uniformly ultimate bounded in region 01, there exists a constant 6 > 0, as shown in Figure 4.2 and (4.7), such that the hybrid robot motion controller (4.19) is ultimate bounded in the entire workspace of the robot. <1 Proof The state variables in the error dynamics in joint space and task space are different, due to which the stability analysis of the hybrid motion controller is difficult. The errors in joint space and task space are defined as: The error dynamics of the manipulator in joint space can be written by éql = 8"2 (4 20) eq2 = —KPqeq1 _ KVq€q21 78 or éq = fq(eq). The error dynamics in the task space are 611 = 61:2 (4.21) (312 = (I — k)(;L‘d — Jq) — [C(Kpl-erl + Kvxerg), or 6'2; = Mex)- It is easy to prove that system (4.20) is stable. (4.21) has been proved to be ultimately bounded in 00 U 01. However, if the controller switches between 111 (4.2) and an (4.12), the system will switch between (4.20) and (4.21). Since the state variables in dynamics model (4.20) and (4.21) are different, the relation between the state variables in model (4.20) and (4.21) is derived first. It is easy to prove that :1: = h(q) and i: = Jq are globally Lipschitz continuous in their defined domain, i.e., llexlll = Hill“l - xll = “h(qd) - h((1)|| S Lillq" — all = Lilleqlll ”622“ = Hid - dill = ||J(qd)qd - J(q)éll S fella“ - (1|! = L2||€q2|| where L1 and L2 are constants. In summary, the following relation between joint space error and task space error can be found: llexll S Lolleqll where Lb is a constant. Similar to the above inequality, the following relation can also be obtained based on the Lipschitz continuity of the function q = h‘1(a:),q = J ’14: in 00 U 91. lleqll S Lallexll Therefore, the errors in joint space and task space satisfy the following inequality in 79 DOUQI. lleqll S Lallerll, llerll S Lblleqll (422) Defining the Lyapunov function of system (4.20) and system (4.21) 1 V1 = 5(elepqeql + eggqu) 1 V2 = §[€:1K81-1 + ([1811 + €12)T(/1€1-1 + 612)]. 
Since the system (4.20) has been proved asymptotically stable in the entire workspace, and (4.21) has been proved to be uniformly ultimate bounded, the Lyapunov functions satisfy the inequalities in region {20 U (21 [35] allleqll S “(6,1) S bllleqll (9V e . gigglme.) s -61lleql. a2||€4|| S V2091) S b2||84~|| 8%(81) Be; fx(er) S —02||€z|| where a1,b1,c1,a2,b2,c2 are positive scalars. For an initial time to, the following inequalities can be obtained. V1(eq(to + 7)) S e‘*"‘%(€q(to)) (4.23) V2(er(to + r)) g e‘A2721/2(e1(t0)) where A, = c, / b1 and A2 = c2/b2 are positive scalars. In order to prove the stability of the hybrid controller (4.19), it is essential to show that V, is monotonely decreasing at all odd number of switch instances, t1, t3, t5,- - -, and V2 is monotonely decreasing 80 at all even switching instances, t0, t2, t4, - - -. VA Figure 4.4: A typical switching sequence of the hybrid motion controller Based on the switching sequence in Figure 4.4, V1(€q(t1)) S b1||64(t1)||SbiLallt‘ix(tl)ll _<_ b:"%(ex(t1)) g %e“*2(tl—‘0)V2(ez(to)) V2(er(t2)) S b2||e,(t2)||sbnglleq(t2)|l (4.24) g fiffmeqam g E’jTII’Be.>-*1“2“”i/1(c.1050) S bi?11%,?““2‘“)e_A2(t‘_t°)V2(€x(to)) Form the above inequalities, it can be seen that b L b La V2(er(t2)) S :5 ; e-MUz-ti)e—A2ft1-to)v2(ex(t0)) 1 2 _ C en(—A,(t2—t1)-A2(tr-to)) e(l—u)(-»\1(t2—tr)—A2(t1—to))V2(ex(to)) (4-25) |/\ e(l—p)(—A1(tg—tj)—A2(t1—to))V2(eI(t0)), for c ejr(-A1(t2—11)-/\2(tl—t0)) S 1 Here _ b2 Lb b, L, (11 (12 C 81 is a constant and 0 < ,u < 1. Based on Multiple Lyapunov Function method [8], [52],[69], the system can be proved to be stable as long as (4.25) holds. It means that c e"("\1(‘2“1)"‘2(‘1“°)) S 1 is a sufficient condition for the stability of the sys- tem during switching. This can be achieved by regulating two sets of parameters, A1 and A2; t2 — t1 and t1 — to. The values of A1 and A2 are related to the gain matrices Km, Kw, Km and qu. In the implementation of the controller, high gains and high sampling frequency are chosen to ensure proper A1 and A2. The values of t2 — t1 and t1 — to are also related to 6. The boundary for switching into and out of a continuous controller is not the same. This forms the dwell region A, and can effectively avoid chattering at switching. From the definition of (4.7), the value of mag cannot be changed immediately. Since t2 — t1 and t1 — to are time intervals when a continuous controller is enabled, they can be regulated by the value of 6. There- fore, the gain matrices and the size of the dwell region A are used to ensure that c e”(‘61(‘2—t1)-62(tl-to)) S 1. The above analysis is valid for He, > C1”, the switching system is uniformly ultimate bounded. In the design of the controllers, the size of the dwell region can be chosen based on two situations. The first one is the ultimate bounded. Considering controller (4.12) is ultimate bounded by (1, therefore “ex” 2 ”6:1” 2 C1 should be ensured by 6. The other one is the the gain matrix and motion planning. It can be proved that the transition of ex and eq are bounded by their initial values during switching. And the bound is related to the gain matrices and the desired trajectory. It can be assumed that the initial value of errors are also bounded. Therefore, e,t and eq are bounded. Thus, 6 can be chosen that eq(t) E (22 for t > T and T is a time instance after finite switches between joint level and task level control. By choosing an appropriate region, the switching sequence can also be completed in one cycle. 
The same procedure can be applied to time instances t4, t6, for V2. Therefore, V2 is monotonely decreasing at all even switching instances. Similarly, V1 is monotonely 82 decreasing at all odd switching instances t1, t3, t5,- - ~. Thus, the switching system is stable. <1 4.5 Experimental Implementation and Testing The proposed hybrid robot motion controller was implemented on a PUMA 560 robot. Figure 4.5 shows the experimental setup. The controller ran under the real-time op- erating system QNX [1]. The parameters in the experiments are chosen as 6, = 6,, = 015,61, = 025,6 = 004,01, = 008,01), = 0.08, am = 0.03,k,0 = kwo = 02,117“) = 1, w, = 1112 = 1, pr = diag(4800.0,6000.0, 4800.0, 200.0, 200.0, 280.0), Ky, = diag(80.0, 88.1, 88.2, 22.0, 24.0, 25.6), qu = diag(2400.0, 2400.0, 2400.0,11000, 1100.0, 1100.0) and qu = diag(50.2,50.5, 50.1, 360,360,360). All the parameters are de- termined experimentally. The sampling period is set as 1ms. Given a path, the trajectory was computed based on the path planning approach in [26]. A trajectory is them formed by integrating temporal parameters with the given geometrical path considering the velocity and acceleration constraints. In the figures, the unit for posi- tion is meters. For angles, it is radians. The dash line denotes the desired trajectory and the solid line denotes the actual trajectory. QNX PC ServoToGo A ND coder . Controller (1:) Interface Puma 560 User Interface card five“ ‘ Power Supply Figure 4.5: Experimental Setup Figures 4.6 and 4.7 compares the DLS method with the hybrid motion controller when the robot approached its arm singular configurations at a: = 0.9, y = 0.15, z = 83 Figure 4.6: At boundary singular configuration, with DLS method only. Dash line: (a) x position _o_31.. .. -0.32 0 Desired; Solid line: Actual A E v Figure 4.7: At boundary singular configuration, with hybrid motion controller. Dash -0.27 -0.28 —0.31 -0.32 (a) at position V ...................................... 0 2 4 6 t(sec) line: Desired; Solid line: Actual 84 (b) y position 0.16 4 £0.14» > 012. ....... ,, 0.1' ~( 0.08 * ‘ 0 2 4 6 t(sec) (d)rneasurementotrb 0 : - ~( ..................... l —0.5 .1 0 2 4 6 t(sec) (b)ypos‘rtion 0.16 : : £0.14» >~ 0.12’ 01 t. .. ................. ',. ........... 4 0.08 * 0 2 4 6 t(sec) (d) measurement ot rb ........ (exposition 4x103 - (0)yposjtion - - 02 . ....... :5; 0, .......................... E 2 ------- . 0». ....... . 0 .......... ‘ -04) ..... . _02 _2, .............................. _4 - - - 0.4 (0) measurement at ri g 05 77 . 0.49, « 0.48» 1 o. .................... 0.47A ---l _01: .............. . 0.48 -o.2 A A A 5 10 15 20 25 time) Figure 4.8: At interior singular configuration, with DLS only. Solid line: Actual Dash line: Desired; (a) x position x 104 (b) y position ' ' Y ' a 4 ' . T g g 2 .. ............. x 0.1 ..................... , ., > 0 »- 0 ’ -0.1 ................. .. . -2 i ......... ............... . . l _02 ........... . A A A ‘ _4 A A A A 0 5 10 15 20 25 0 5 10 15 20 25 t(sec) t(sec) (c) 2 position 0.51 . - - - 5 0.505 »» . N 0.5: g 0.495 f , 3 0.49 A A -0.2 A A A A ‘A 0 5 10 15 20 25 S 10 15 20 25 1(566) Dash Figure 4.9: At interior singular configuration, with hybrid motion controller. line: Desired; Solid line: Actual 85 0.2 0.1 —0. 1 -0.2 -0.3 -0.4 -0.5 0.535 0.53 . 0.525 0.52 0.515' 0.51 0 x 10" (b) y position v f -5 . 0 10 20 30 t singular conditions. dotedzrl; solidzrw 0.4 A . 0.2 r AAAAA o > . \ x ...... ‘ _0'2 . ........... . \ \. . _ . . 
Figure 4.10: With both interior and wrist singular configurations. Dashed line: Desired; Solid line: Actual (the singular condition panel shows $r_i$ dotted and $r_w$ solid)

Figure 4.11: With wrist singularity. Dashed line: Desired; Solid line: Actual (the singular condition panel shows $r_w$)

The task of the robot was to move along the x direction from 0.7 m to 0.9 m. In Figure 4.6, it can be seen that the errors in the y and z directions grew continuously. In Figure 4.7, the hybrid robot motion controller was used. It can be seen that the tracking errors in the y and z directions were much smaller compared with Figure 4.6, and the task was completed. It is worth noting that the path was replanned in joint space after the controller switched into joint level control. The trajectory was planned in joint space such that the joint motion constraints were satisfied, and the end effector was slowed down to avoid unreasonable joint velocities.

The desired trajectory for the next two experiments was from -0.3 m to 0.25 m in the x direction, so that the interior singular configuration would be reached. Figure 4.8 shows the trajectory of the robot under the task level controller $u_3$. When the end-effector entered the interior singular configuration at $x = -0.06$, $y = 0$, $z = 0.45$, a very large deviation from the desired path was observed. Figure 4.9 shows the experimental results under the hybrid robot motion controller. Although the deviation from the desired path is still observable, it was smaller compared to the results in Figure 4.8. The hybrid controller switched into joint level at $t = 14$ s, where $r_i = 0.08$, and it switched back to task level at $t = 19$ s, where $r_i = -0.08$.

Figure 4.10 shows the position tracking result when both interior and wrist singularities were reached in the motion. At time $t = 8$ sec, both interior and wrist singular configurations were reached and the controller switched into joint level. Errors caused by switching at $t = 12$ sec can be seen from Figure 4.10 (a), (b) and (c). This also shows that the hybrid system was stable when double singular configurations were reached.

Figure 4.11 shows that the hybrid motion controller can stably pass the singular configurations at a certain speed. In this experiment, x, y, O, A, T were fixed. The end effector position z changed from -0.3 m to 0.3 m, and the average velocity is 0.2 m/s. The end-effector passed the wrist singular configuration, located near the middle of the path, at about $t = 15$ sec. Small deviations from the desired path can be seen from the results, but the system was stable. This experiment has demonstrated that the end-effector can pass the singular configurations using the hybrid motion controller.

CHAPTER 5

CONTROL OF MOBILE MANIPULATORS

5.1 Introduction

In Chapter 3 and Chapter 4, the control of mobile platforms and the singularities of robot arms were discussed. This chapter discusses the control of mobile manipulators, where a mobile manipulator consists of a mobile platform and a manipulator arm. To take advantage of the large-workspace mobility of the mobile platform and the dexterous manipulation of the manipulator, path planning and control of mobile manipulators are discussed. The kinematic and dynamic models of mobile manipulators are derived first. Tracking control and force control are then discussed for holonomic mobile manipulators.
A mobile manipulator usually consists of a mobile platform and a robot arm. Figure 5.1 shows the associated coordinate frames of both the platform and the manipulator, giving the top view and side view of Figure 1.1.

- World frame $\Sigma$: inertial frame in which the robot tasks are generally defined,
- Moving frame $\Sigma_b$: frame attached to the mobile platform,
- Virtual moving frame $\Sigma_b'$: frame attached to the mobile platform, parallel to $\Sigma$.

Since the tasks of a mobile manipulator are generally given in task space, i.e., in the world frame, a unified model of the mobile manipulator in the world frame is preferable for control and path planning. For easy reference, all the definitions of the state variables for the mobile manipulator, the mobile platform and the robot arm are listed as follows:

Figure 5.1: Mobile manipulator coordinate frames (top view and side view)

- $q = \{q_1, q_2, q_3, q_4, q_5, q_6\}^T$ is the vector of joint angles of the robot arm,
- $Y_b = \{x_b, y_b, \theta_b\}^T$ is the configuration of the mobile platform in the world frame,
- $q' = \{q_1 + \theta_b, q_2, q_3, q_4, q_5, q_6\}^T$ treats the orientation of the mobile platform as an addition to the first joint angle of the robot arm,
- $\Theta_b = \{\theta_b, 0, 0, 0, 0, 0\}^T$ is the orientation vector of the mobile platform,
- $p = \{q_1, q_2, q_3, q_4, q_5, q_6, x_b, y_b, \theta_b\}^T$ is defined as the vector of joint variables of the mobile manipulator,
- $y_b = \{x_b, y_b, h_b, 0, 0, 0\}^T$ is the origin of $\Sigma_b$ and $\Sigma_b'$, where $h_b$ is the height of the mobile robot,
- $y_m = \{p_{xm}, p_{ym}, p_{zm}, O_m, A_m, T_m\}^T$ defines the end effector position and orientation in frame $\Sigma_b$,
- $y_m' = \{p_{xm}', p_{ym}', p_{zm}', O_m', A_m', T_m'\}^T$ defines the end effector position and orientation in the virtual moving frame $\Sigma_b'$,
- $y = \{p_x, p_y, p_z, O, A, T\}^T$ defines the end-effector position and orientation in the world frame, which is also the output of the mobile manipulator,
- $f = \{f_x, f_y, f_z\}^T$ defines the force applied on the end effector of the mobile manipulator.

5.2 Unified System Model

5.2.1 Kinematic Model

From Figure 5.1, it can be seen that the transformation relating the virtual moving frame $\Sigma_b'$ and the moving frame $\Sigma_b$ can be defined by

$$T_b' = \begin{bmatrix} c_b & -s_b & 0 & 0 \\ s_b & c_b & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

where $c_b$ and $s_b$ stand for $\cos\theta_b$ and $\sin\theta_b$ respectively. Since only a translation is involved between frames $\Sigma$ and $\Sigma_b'$, it is easier to derive the kinematics of the arm in frame $\Sigma_b'$ and then transform it to frame $\Sigma$. The kinematics of the robot arm in frame $\Sigma_b$ and the kinematics of the mobile platform in the world frame $\Sigma$ are used in building the kinematics of the mobile manipulator in frame $\Sigma$. In the moving frame $\Sigma_b$, the relation between the end effector and the joint angles of the arm can be defined as

$$y_m = h(q) \tag{5.1}$$

where $h$ is a nonlinear transformation. The velocity and acceleration relationships between the task space and the joint space can then be derived:

$$\dot{y}_m = J\dot{q}, \qquad \ddot{y}_m = J\ddot{q} + \dot{J}\dot{q},$$

where the Jacobian $J$ denotes the relation between joint space and task space velocities. The kinematics of the manipulator in frame $\Sigma_b$ can be transformed into frame $\Sigma_b'$ using the transformation $T_b'$. The end-effector pose in the virtual moving frame $\Sigma_b'$ can be shown to be

$$y_m' = h(q')$$

where $h$ is the same transformation as in (5.1). In $\Sigma_b'$, the velocity and acceleration relationships between the task space and the joint space can then be derived as

$$\dot{y}_m' = J\dot{q}', \qquad \ddot{y}_m' = J\ddot{q}' + \dot{J}\dot{q}'.$$

Here $J$ denotes the velocity relation between task space and joint space in frame $\Sigma_b'$.
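As a small illustration of how the virtual moving frame is used, the sketch below builds the homogeneous transform $T_b'$ of the previous paragraph and evaluates the arm kinematics with the platform heading folded into the first joint, $q' = q + \Theta_b$. The single-link forward-kinematics function h_toy is a stand-in for the actual PUMA 560 kinematics, introduced only to make the example runnable.

```python
import numpy as np

def T_b_prime(theta_b):
    """Homogeneous transform relating the virtual moving frame (parallel to the
    world frame) and the moving frame attached to the platform: a rotation about z."""
    cb, sb = np.cos(theta_b), np.sin(theta_b)
    return np.array([[cb, -sb, 0.0, 0.0],
                     [sb,  cb, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def arm_pose_in_virtual_frame(h, q, theta_b):
    """Evaluate the arm forward kinematics h(.) with the platform orientation folded
    into the first joint, q' = q + Theta_b, giving the end-effector pose in Sigma'_b."""
    q_prime = np.asarray(q, dtype=float).copy()
    q_prime[0] += theta_b
    return h(q_prime)

# Toy 'arm': a single planar link of length 1 m, pose = (px, py, pz, O, A, T).
def h_toy(q):
    return np.array([np.cos(q[0]), np.sin(q[0]), 0.3, 0.0, 0.0, q[0]])

print(T_b_prime(np.pi / 4))
print(arm_pose_in_virtual_frame(h_toy, [0.2, 0, 0, 0, 0, 0], theta_b=np.pi / 4))
```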
Since only a translation is involved between the world frame $\Sigma$ and frame $\Sigma_b'$, the end effector position and orientation in the world frame can be expressed as $y = y_b + y_m'$. The velocity and acceleration relations in the world frame $\Sigma$ can then be derived as

$$\dot{y} = \dot{y}_b + J\dot{q}',$$
$$\ddot{y} = \ddot{y}_b + J\ddot{q}' + \dot{J}\dot{q}' = \ddot{y}_b + J\ddot{q} + J\ddot{\Theta}_b + \dot{J}\dot{q}'. \tag{5.2}$$

Actually, by relating the task space velocity and the joint space velocity of the mobile manipulator in matrix form, the first equation in (5.2) can also be written as

$$\dot{y} = \begin{bmatrix} J & \begin{matrix} 1 & 0 & J_{11} \\ 0 & 1 & J_{21} \\ 0 & 0 & J_{31} \\ 0 & 0 & J_{41} \\ 0 & 0 & J_{51} \\ 0 & 0 & J_{61} \end{matrix} \end{bmatrix}\dot{p} = J_h\,\dot{p}, \tag{5.3}$$

where $J_h$ is defined as the Jacobian of the mobile manipulator. It can be seen from $J_h$ that the manipulator Jacobian is augmented by a submatrix whose rank is 3 ($J_{61}$ equals 1). The Jacobian $J_h$ can therefore be of full rank even when $J$ is singular; thus the mobile platform increases the capability of manipulation. It is worth noting that the virtual frame $\Sigma_b'$ eases the modelling of the mobile manipulator, since the existing models of the robot arm and the mobile platform can be reused.

5.2.2 Dynamics Model

The dynamics of the robot arm in frame $\Sigma_b$ can generally be described by

$$M_1(q)\ddot{q} + c_1(q,\dot{q}) + g_1(q) = \tau_1, \tag{5.4}$$

where $\tau_1$ is the $6\times 1$ vector of applied torques, $M_1(q)$ is the $6\times 6$ positive definite manipulator inertia matrix, $c_1(q,\dot{q})$ is the $6\times 1$ vector of centripetal and Coriolis torques, and $g_1(q)$ is the $6\times 1$ vector of gravity terms. Substituting (5.2) into the dynamics (5.4), the robot dynamics in terms of world frame variables can be obtained:

$$M_1 J^{-1}(\ddot{y} - \ddot{y}_b - J\ddot{\Theta}_b - \dot{J}\dot{q}') + c_1 + g_1 = \tau_1. \tag{5.5}$$

Defining the output of the mobile manipulator as $x = \{x_1, x_2\}$, where $x_1 = y$ and $x_2 = Y_b$, this equation can be rewritten in the form

$$M_1 J^{-1}\ddot{x}_1 + N_1\ddot{x}_2 - M_1 J^{-1}\dot{J}\dot{q}' + c_1 + g_1 = \tau_1, \tag{5.6}$$

where $N_1\ddot{x}_2$ combines the terms $-M_1 J^{-1}J\ddot{\Theta}_b$ and $-M_1 J^{-1}\ddot{y}_b$. Following the work of Liu and Lewis [43], Yamamoto and Yun [84], and Morel and Dubowsky [27], the dynamics of a holonomic mobile platform can be represented by

$$N_2(p)\ddot{x}_1 + M_2(p)\ddot{x}_2 + c_2(p,\dot{p}) = \tau_2,$$

where $N_2(p)$ stands for the interaction between the mobile platform and the robot arm, $M_2(p)$ is the inertia matrix of the mobile platform, and $\tau_2 = \{\tau_x, \tau_y, \tau_\theta\}^T$ is the vector of input forces and torque applied on the mobile platform. The dynamic model of a mobile manipulator with a holonomic mobile platform can then be described by

$$\begin{bmatrix} M_1 J^{-1} & N_1 \\ N_2 & M_2 \end{bmatrix}\begin{bmatrix} \ddot{x}_1 \\ \ddot{x}_2 \end{bmatrix} + \begin{bmatrix} -M_1 J^{-1}\dot{J}\dot{q}' + c_1 + g_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix}. \tag{5.7}$$

Let

$$M = \begin{bmatrix} M_1 J^{-1} & N_1 \\ N_2 & M_2 \end{bmatrix}, \qquad \tau = \begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix}, \qquad c = \begin{bmatrix} -M_1 J^{-1}\dot{J}\dot{q}' + c_1 + g_1 \\ c_2 \end{bmatrix};$$

then (5.7) can be written in the concise form

$$M\ddot{x} + c = \tau. \tag{5.8}$$

It is worth noting that both the kinematic model and the dynamic model are described in a unified coordinate frame. Since the end effector and mobile platform variables are described in the same frame, this eases the resolution of kinematic redundancy and the force/position control.

5.3 Motion Control and Redundancy Resolution

5.3.1 Tracking Controller

Applying the following feedback control to (5.7),

$$\tau = \begin{bmatrix} M_1 J^{-1} & N_1 \\ N_2 & M_2 \end{bmatrix} u + \begin{bmatrix} -M_1 J^{-1}\dot{J}\dot{q}' + c_1 + g_1 \\ c_2 \end{bmatrix}, \tag{5.1}$$

where $\tau = \{\tau_1, \tau_2\}^T$, the dynamic model of the mobile manipulator can be simplified and linearized as

$$\ddot{x} = u, \tag{5.2}$$

that is, the dynamic model of the mobile manipulator is linearized and decoupled as

$$\ddot{x} = u, \tag{5.3}$$

where the vector $x = \{p_x, p_y, p_z, O, A, T, x_b, y_b, \theta_b\}^T$ is the mobile manipulator output and $u$ is a linear control vector. Eq. (5.2) forms a unified model for the mobile manipulator. It is worth noting that the vector $x$ not only contains the task level variables $y$, but also part of the joint level variables $Y_b$.
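To make the structure of the augmented Jacobian in (5.3) concrete, the following sketch assembles $J_h$ from a given $6\times 6$ arm Jacobian and checks its rank. The deliberately rank-deficient example matrix is purely illustrative: it shows how the platform columns can restore a Cartesian direction lost at an arm singularity.

```python
import numpy as np

def mobile_manipulator_jacobian(J):
    """Build J_h = [ J | B ] from a 6x6 arm Jacobian J expressed in the virtual
    moving frame: the columns for x_b and y_b are unit vectors in the first two
    rows, and the column for theta_b repeats the first column of J, since theta_b
    adds directly to q1."""
    B = np.zeros((6, 3))
    B[0, 0] = 1.0          # d y / d x_b
    B[1, 1] = 1.0          # d y / d y_b
    B[:, 2] = J[:, 0]      # d y / d theta_b
    return np.hstack((J, B))

# A deliberately singular arm Jacobian that has lost the p_x direction.
J_singular = np.eye(6)
J_singular[0, 0] = 0.0
J_h = mobile_manipulator_jacobian(J_singular)
print(np.linalg.matrix_rank(J_singular), np.linalg.matrix_rank(J_h))  # 5, 6
```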
Since $y$ is defined in task space, direct control of the end effector position and orientation is implemented. As a result, the mobile manipulator controller has the advantages of a task level controller. At the same time, the joint level redundancy can also be resolved easily. Given $x^d = \{p_x^d, p_y^d, p_z^d, O^d, A^d, T^d, x_b^d, y_b^d, \theta_b^d\}^T$ as the desired position and orientation of the mobile manipulator, the linear system feedback for model (5.2) can be designed as

$$u = \ddot{x}^d + k_v(\dot{x}^d - \dot{x}) + k_p(x^d - x). \tag{5.4}$$

It is easy to prove that the above control algorithm (5.4) is asymptotically stable. However, the task of a mobile manipulator is generally given in the form $y^d = \{p_x^d, p_y^d, p_z^d, O^d, A^d, T^d\}^T$. The desired mobile robot position and orientation $Y_b^d = \{x_b^d, y_b^d, \theta_b^d\}^T$ should be obtained by considering the mobile manipulator configuration and the bandwidths of the mobile platform and the arm. By properly positioning the mobile platform, singular arm configurations can be avoided, i.e., the coordinated motion of the mobile platform and the arm can maximize the capability of manipulation. In the following subsection, we give a coordination scheme considering the robot arm manipulation capability.

5.3.2 Singularity Analysis and Redundancy Resolution

From (5.1), it can be seen that no matter how the linear controller $u$ is designed, the feasibility of the mobile robot controller (5.1) depends on the existence of the matrix $J^{-1}$. From the singularity analysis of the manipulator, the following singular conditions can be obtained [65]:

$$\gamma_b = |d_4 c_3 - a_3 s_3|, \qquad \gamma_i = |d_4 s_{23} + a_2 c_2 + a_3 c_{23}|, \qquad \gamma_w = |s_5|. \tag{5.5}$$

Here $\gamma_b = 0$, $\gamma_i = 0$ and $\gamma_w = 0$ stand for certain arm singular configurations, $d_4$, $a_2$, $a_3$ are robot arm parameters, and $s_i$, $c_i$ stand for $\sin q_i$ and $\cos q_i$ respectively. When the robot arm approaches a singular configuration, the value of at least one of the three expressions becomes small. If the manipulator is mounted on a fixed platform, singular configurations cannot be avoided for some tasks. However, for a mobile manipulator, singular configurations of the arm can be avoided by appropriately positioning the mobile platform. Given a path for the end effector of the mobile manipulator, a path for the mobile platform should be calculated online in such a way that the manipulator is always in a dexterous configuration, i.e., not near a singularity. The desired mobile platform path is therefore dependent on the system output. An event-based planning approach [79], as shown in Figure 5.2, is used for online coordination control of the mobile platform and the manipulator.

The basic idea of event-based planning is to introduce a new motion reference variable, $s$, which is different from time and is directly related to the sensory measurement of the system output. The motion information of both the arm and the platform is passed to the planner through the motion reference $s$. For coordinated control of multiple robots, the planner of each robot should take the common motion reference $s$ as an input; thus the robots are coordinated by a common motion reference. The objective of coordinated control can be expressed as maximizing the ability of manipulation, which means maximizing the values of the expressions given by (5.5) while taking advantage of the high bandwidth of the manipulator. However, it is not necessary to keep the capability of manipulation at its maximum value.
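The singular conditions in (5.5) can be monitored online directly from the joint angles. The sketch below evaluates them for nominal PUMA 560 link parameters; the parameter values and the test configurations are illustrative and are not the exact values used in the experiments.

```python
import numpy as np

# Nominal PUMA 560 link parameters (metres), used here only for illustration.
A2, A3, D4 = 0.4318, 0.0203, 0.4331

def singularity_measures(q):
    """Boundary, interior and wrist measures of eq. (5.5); each tends to zero as the
    arm approaches the corresponding singular configuration."""
    q2, q3, q5 = q[1], q[2], q[4]
    s3, c3 = np.sin(q3), np.cos(q3)
    s23, c23 = np.sin(q2 + q3), np.cos(q2 + q3)
    gamma_b = abs(D4 * c3 - A3 * s3)
    gamma_i = abs(D4 * s23 + A2 * np.cos(q2) + A3 * c23)
    gamma_w = abs(np.sin(q5))
    return gamma_b, gamma_i, gamma_w

print(singularity_measures([0.0, -np.pi / 4, np.pi / 6, 0.0, 0.5, 0.0]))
print(singularity_measures([0.0, -np.pi / 4, np.pi / 6, 0.0, 0.0, 0.0]))  # wrist singular
```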
If the conditions in (5.5) are always maximized, the manipulator will be fixed at a certain configuration and lose its dexterous manipulation capability. As long as the values of $\gamma_b$, $\gamma_i$, $\gamma_w$ are greater than certain prespecified minimum values, the manipulator will have the ability to move. The desired position and orientation of the platform should be determined by the trajectory of the end effector and the conditions given by (5.5), which are also part of the system output.

Figure 5.2: Event-based coordination between the arm and the mobile platform

Defining the motion of the end effector as the motion reference, the desired trajectory of the mobile platform is a smooth function of $s$ and $\gamma_b$, $\gamma_i$, $\gamma_w$:

$$x_b^d = g_x(s, \gamma_b, \gamma_i, \gamma_w), \qquad y_b^d = g_y(s, \gamma_b, \gamma_i, \gamma_w), \qquad \theta_b^d = g_\theta(s, \gamma_b, \gamma_i, \gamma_w). \tag{5.6}$$

The functions $g_x$, $g_y$, $g_\theta$ are continuous and piecewise differentiable. Here the motion reference $s$ can represent the motion of both the platform and the manipulator, i.e., it is a function of the augmented output $x$. Let $g$ represent $g_x$, $g_y$ or $g_\theta$ and let $\gamma$ represent $\gamma_b$, $\gamma_i$ or $\gamma_w$; the requirement of the definition of coordination can then be represented by

$$\frac{\partial g}{\partial s} = 0 \quad \text{if } |\gamma| > \gamma_{min} \text{ or } \frac{\partial\gamma}{\partial s} > 0, \qquad \frac{\partial g}{\partial s} > 0 \quad \text{if } |\gamma| \le \gamma_{min} \text{ and } \frac{\partial\gamma}{\partial s} < 0,$$

where $\gamma_{min}$ stands for the allowable minimum value of $\gamma$. If the value of $\gamma$ is greater than $\gamma_{min}$, or it is increasing with respect to $s$, the platform can keep its current position; otherwise, the mobile platform should be commanded to a position such that the value of $\gamma$ will increase. An example implementation of $g$ considering only the arm singular condition $\gamma_b$ is given here:

$$x_b^d = x_b + (x_1(s) - x_1)r, \qquad y_b^d = y_b + (x_2(s) - x_2)r, \qquad \theta_b^d = \theta_b + \delta r,$$

where $\delta = \mathrm{atan2}(x_2 - y_b,\, x_1 - x_b)$ and

$$r = \begin{cases} 0, & \text{if } (|\gamma| > \gamma_{min} \text{ or } \partial\gamma/\partial s > 0) \text{ and } r^- = 0, \\ a_1, & \text{if } |\gamma| > \gamma_{min} \text{ and } r^- > 0, \\ a_2, & \text{if } |\gamma| \le \gamma_{min} \text{ and } \partial\gamma/\partial s < 0. \end{cases}$$

Here $r$ is designed as a hybrid system [65] and is piecewise continuous from the right; $r^-$ and $r^+$ stand for the values of $r$ before and after a switching instant, and $a_2 > a_1 > 0$ are constants. The values of $a_1$ and $a_2$ are obtained adaptively in the experiments, and $r$ can take more discrete values in an implementation. By this design, the arm will be driven into the prescribed range of $\gamma$ no matter what the initial value of $\gamma$ is.

Using this strategy, coordination can be achieved even when obstacles block either the arm or the platform. When the end effector is blocked by obstacles but the mobile platform is free to move, $s$ stops evolving and maintains a constant value; the desired trajectory of the mobile platform therefore does not change and the whole system remains stable. If the mobile platform is blocked but the arm is not, the arm can still continue performing its task without the help of the platform. If a singular configuration is then reached, the desired trajectories of both the arm and the platform remain constant. Thus the system stability is not affected by obstacles or singularities.
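One update of the coordination rule just described might look like the following sketch. The threshold $\gamma_{min}$, the gains $a_1$, $a_2$ and the handling of switching cases not listed in the rule are assumptions made only for illustration; in the actual experiments these values are obtained adaptively.

```python
import numpy as np

def coordinate_platform(s, gamma, dgamma_ds, x_des_s, x_ee, platform_pose,
                        gamma_min=0.4, a1=0.05, a2=0.2, r_prev=0.0):
    """Pick the hybrid gain r, then move the desired platform pose toward the
    end-effector reference x_des_s = (x1(s), x2(s)).  x_ee is the current
    end-effector (x1, x2); platform_pose is (x_b, y_b, theta_b)."""
    if (abs(gamma) > gamma_min or dgamma_ds > 0.0) and r_prev == 0.0:
        r = 0.0
    elif abs(gamma) > gamma_min and r_prev > 0.0:
        r = a1
    elif abs(gamma) <= gamma_min and dgamma_ds < 0.0:
        r = a2
    else:
        r = r_prev            # cases not listed above: keep the previous value
    xb, yb, thb = platform_pose
    x1, x2 = x_ee
    delta = np.arctan2(x2 - yb, x1 - xb)
    xb_d = xb + (x_des_s[0] - x1) * r
    yb_d = yb + (x_des_s[1] - x2) * r
    thb_d = thb + delta * r
    return (xb_d, yb_d, thb_d), r

pose_d, r = coordinate_platform(s=0.3, gamma=0.2, dgamma_ds=-0.1,
                                x_des_s=(1.0, 0.5), x_ee=(0.8, 0.4),
                                platform_pose=(0.0, 0.0, 0.0))
print(pose_d, r)
```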
5.4 Experimental Implementation and Results

5.4.1 Hardware and Software Implementation

The motion planning and control approaches for a mobile manipulator have been implemented on a mobile manipulator consisting of a Nomadic XR4000 mobile robot and a Puma 560 robot arm, as shown in Figure 1.3. There are two PCs in the mobile platform: one uses Linux as the operating system and runs the mobile robot control software, and the other uses the real-time operating system QNX and runs the Puma 560 control software. The two computers are connected via an Ethernet connection and communicate at a frequency of 300-500 Hz. The sampling period for the Puma 560 control software is 1 ms.

The hardware structure is shown in Figure 5.3. The control software was developed for the experiments in this chapter and Chapter 6. The software components for the experiments are listed as follows, and the software structure is shown in Figure 5.4:

- Non-time based mobile robot motion control software
- Singularity-free robot arm motion control software
- Mobile manipulator motion control and force control software

Figure 5.3: Hardware structure of the mobile manipulator (Puma 560: Pentium PC running QNX, force/torque and acceleration sensors, encoders, actuators, joystick; mobile robot: laser, sonar, infrared, bumper, encoders, actuators)

5.4.2 Path Tracking and Tele-operation

In Figures 5.5 and 5.6, the results of path tracking are presented. A sequence of tasks is given in the form of desired end effector positions. In Figures 5.7 and 5.8, the results of tele-operation are recorded; the desired end-effector position is generated by a Microsoft joystick connected to a remote PC and sent to the robot controller via wireless Internet. Plots (a) and (b) of Figures 5.5-5.8 show the end-effector position in the world frame: the solid curve is the end-effector position in the world frame and the dashed line is the mobile platform position in the world frame $\Sigma$. The end-effector position in the moving frame $\Sigma_b$ is shown in plots (c) of Figures 5.5-5.8, where the solid curve is $x_m$ and the dashed line is $y_m$. The measurement of the capability of manipulation is shown in plots (d) of Figures 5.5-5.8; its maximum value is 4.4. For a sequence of tasks, as shown in Figures 5.5 and 5.6, the capability of manipulation is kept at a certain value, which is larger than 0.4. The trajectory of the mobile platform is generated online according to the measurement of the capability of manipulation and the end-effector output. For tasks not known a priori, as shown in Figures 5.7 and 5.8 (a)(b)(c), the capability of manipulation can adapt to a higher value, as shown in Figure 5.8(d). This demonstrates that the robot arm is kept in a dexterous workspace during the operation and that this kinematic redundancy resolution scheme is suitable for a point-to-point task, a sequence of multiple tasks, and tasks unknown a priori.

5.4.3 Path Tracking with Unexpected Obstacles

To test the path tracking controller and the event-based coordination scheme when tasks are not known a priori and an unpredictable obstacle is present, the results of teleoperation are presented in Figure 5.9. The desired position of the end effector is generated by a joystick. Figures 5.9 (a) and (b) depict the end effector position in the x and y directions of the world frame respectively. Figure 5.9 (c) shows the position of the platform, $x_b$ and $y_b$. The operation can be divided into four parts by $t_1 = 15$ sec, $t_2 = 44.4$ sec and $t_3 = 54.9$ sec, as shown in the figure. Before $t_1$, the mobile manipulator operates normally. At time $t_1$, the platform meets an unexpected obstacle and cannot move any more, as shown in Figure 5.9 (c). The tasks assigned to the mobile manipulator can still be performed by the robot arm without the help of the platform. It can be seen that the value of the manipulation capability measurement becomes very small around time instance 38 sec. At $t_2$, the obstacle is removed, the platform begins to cooperate and the manipulation capability adapts to a higher value.
At time instance $t_3$, the platform is blocked again, the robot arm performs the task alone, and the mobile manipulator stops evolving after the manipulation capability drops to a very low value. With the event-based coordination scheme and the path tracking controller based on the unified model, the mobile manipulator system is stable even when unexpected obstacles occur. This capability is especially useful when the mobile manipulator works in a highly unstructured environment. The experiment also shows that the kinematic redundancy is resolved online and that the manipulation capability is optimized during the operation.

Figure 5.4: Software structure of the mobile manipulator

Figure 5.5: Multiple tasks path tracking

Figure 5.6: Multiple tasks path tracking

Figure 5.7: Tele-operation

Figure 5.8: Tele-operation

Figure 5.9: Teleoperation with unexpected obstacles

CHAPTER 6

INTEGRATED SENSING AND CONTROL OF MOBILE MANIPULATOR

6.1 Introduction and Force Control Schemes Revisited

Besides trajectory tracking, object manipulation is another kind of task for a mobile manipulator. The mobile manipulator needs to interact with the environment, and output force control is necessary in this case. There are generally two ways for the end effector to interact with the environment. One is interacting with static environments or objects, such as force tracking along a hard surface. Force control schemes such as hybrid force/position control, impedance force control, explicit force control and many others have been proposed in the context of fixed base manipulators. These force control schemes are highly dependent on the environment of the robot: in many cases the environment is assumed to be static, and the directions for force control and position control are separated. The end effector can also interact with dynamic environments, such as moving objects. For objects that are too heavy or large for the mobile manipulator to carry, pushing or pulling is an alternative way to manipulate them. Since the mobile manipulator can provide both motion in a large workspace and dexterous manipulation capability, we discuss the force control of the mobile manipulator while pushing an object.
This requires the mobile manipulator to provide both output force and motion along the same task direction. In this dissertation, the kinematic redundancy of the mobile manipulator is utilized to decouple the force control loop and the motion control loop in the same task direction. For comparison, the force control schemes based on a decoupled nonredundant robot model are revisited first.

Hybrid force/position control was first proposed by Raibert and Craig [55]. The workspace is divided into two orthogonal subspaces, as shown in Figure 6.1. A selection matrix $S$ determines the subspaces in which force or position are to be controlled. The control laws for position and force control can be designed independently to satisfy the different control requirements of force and position. Generally, the force control law is designed to interact with a static environment. However, the motion of the environment should also be considered when the robot interacts with a moving object. To improve the performance of the force control law, Schutter [61] proposed an approach that feeds forward the object motion parameters, such as the object velocity $\dot{x}_o$ and acceleration $\ddot{x}_o$, as shown by the dotted lines in Figure 6.1. The desired output force $f^d$ along the motion can then be tracked. However, it can be seen that the motion control along the same task direction is open loop.

Figure 6.1: Hybrid force/position control

For an object or environment, it is assumed that the end effector position and the contact force with the environment along one DOF cannot be controlled independently. The force can then be regulated by controlling the impedance, or compliance, of the robot, as shown in Figure 6.2 [29]. The basic idea of this approach is to design a control law which enforces the relation

$$f = M\ddot{x} + B\dot{x} + Kx,$$

where the constant matrices $M$, $B$, $K$ represent the inertia, damping and stiffness matrices of the interactive system respectively. Combining hybrid control and impedance control, hybrid impedance control was proposed [2]; the environment dynamics are also represented in an impedance form and the interaction between the environment and the manipulator is studied.

Figure 6.2: Impedance control

Since the robot may encounter different environments in various applications, the control gains of the robot should be tuned in accordance with the environmental characteristics. This scheme also has a slow response to force perturbations, and the performance of the implicit force control is restricted by the bandwidth of the position controller [74].

6.2 Output Force/Position Control of Redundant Robots

For a nonredundant robot arm, the directions for force control and position control have to be orthogonal [36], as shown in Figure 6.1. Therefore the force and the position cannot be controlled along the same task direction. This is caused by the equal number of control inputs and desired system outputs. For the hybrid force/position control scheme, the force and position control directions are generally separated by a selection matrix $S$. For many cases of the impedance control scheme, the control law is essentially a modified position controller, and the performance of the implicit force control is restricted by the bandwidth of the position controller.
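For comparison with the redundant scheme developed next, the sketch below shows the classical selection-matrix form of hybrid force/position control on a linearized nonredundant model: $S$ picks the position-controlled directions and $I-S$ the force-controlled ones. The PD and PI laws and the gain values are illustrative, not the controllers used in this dissertation.

```python
import numpy as np

def hybrid_force_position(S, x_d, x, dx_d, dx, f_d, f, int_ef, dt,
                          Kp=100.0, Kv=20.0, Kf=0.5, Ki=1.0):
    """Selection-matrix hybrid control: PD position law in the S directions,
    PI force law in the complementary (I - S) directions."""
    S = np.diag(S).astype(float)
    I = np.eye(S.shape[0])
    u_pos = Kp * (x_d - x) + Kv * (dx_d - dx)
    int_ef = int_ef + (f_d - f) * dt           # integrated force error
    u_force = Kf * (f_d - f) + Ki * int_ef
    u = S @ u_pos + (I - S) @ u_force
    return u, int_ef

u, ie = hybrid_force_position(S=[1, 1, 0], x_d=np.array([0.5, 0.2, 0.4]),
                              x=np.zeros(3), dx_d=np.zeros(3), dx=np.zeros(3),
                              f_d=np.array([0.0, 0.0, 5.0]), f=np.zeros(3),
                              int_ef=np.zeros(3), dt=0.001)
print(u)
```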
However, the force control loop and the position control loop of a redundant robot can be decoupled along the same task direction. Since force control is strongly related to the environment, the hybrid force/position control of a redundant robot is discussed from two aspects: the capability of the robot and the constraints of the environment.

Due to the kinematic redundancy, there are more control inputs than desired outputs. For the decoupled unified system model (5.3) of the mobile manipulator, the linear control input is a $9\times 1$ vector, while the desired end effector position and orientation, $\{p_x^d, p_y^d, p_z^d, O^d, A^d, T^d\}^T$, is only a $6\times 1$ vector. The redundant degrees of freedom, which correspond to the extra linear control inputs, can be utilized to accomplish secondary tasks. For a path tracking task, the redundant degrees of freedom can be used to position the mobile platform such that singular configurations of the arm are avoided. For a task that interacts with the environment, the output force of the end effector can be chosen as a secondary task. For instance, the desired output position and force of the mobile manipulator can be chosen as $\{f_x^d, f_y^d, f_z^d, O^d, A^d, T^d, p_x^d, p_y^d, \theta_b^d\}^T$, where $f_x^d$, $f_y^d$ and $f_z^d$ are the desired output forces. As shown in Figure 6.3, a selection matrix $S$ is not necessary for the redundant robot. It is worth noting that along the x direction of the world frame $\Sigma$, both a desired position and a desired force, $p_x^d$ and $f_x^d$, are chosen; $p_y^d$ and $f_y^d$ are chosen simultaneously in the y direction.

Figure 6.3: Hybrid force/position control for redundant robots

Since system (5.3) is decoupled, it can be divided into two subsystems, a position control subsystem and a force control subsystem. The state variable space of the position control subsystem, $x_p$, is a subspace of the state space $x$ of system (5.3). Let $x_p = \{p_x, p_y, O, A, T, \theta_b\}^T$ and denote its corresponding linear control input by $u_p$. The force control subspace $x_f$ is chosen as $x_f = \{x_b, y_b, p_z\}^T$ and the corresponding linear control input is denoted by $u_f$. System (5.3) can therefore be rewritten as two subsystems:

$$\ddot{x}_p = u_p, \qquad \ddot{x}_f = u_f. \tag{6.1}$$

The linear feedbacks for the two subsystems can be designed as

$$u_p = \ddot{x}_p^d + k_{pv}(\dot{x}_p^d - \dot{x}_p) + k_{pp}(x_p^d - x_p),$$
$$u_f = \ddot{x}_f^d + k_{fp}(f^d - f) + k_{fi}\int_0^t \big(f^d(\sigma) - f(\sigma)\big)\, d\sigma. \tag{6.2}$$

In the controller (6.2), the force control loop and the position control loop along the same task direction are decoupled due to the redundancy of the control inputs, and explicit force/position control of the end effector can be designed.
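A minimal sketch of the decoupled control law (6.2) on the linearized model $\ddot{x} = u$ is given below: the position subspace receives a PD law around its reference and the force subspace receives a PI force law added to the planned acceleration. The gain values are illustrative, not the experimentally tuned ones.

```python
import numpy as np

def decoupled_force_position_control(xp_d, xp, dxp_d, dxp, ddxp_d,
                                     ddxf_d, f_d, f, int_ef, dt,
                                     Kpp=60.0, Kpv=15.0, Kfp=0.8, Kfi=2.0):
    """Controller (6.2): u_p for the position subspace, u_f for the force subspace.
    int_ef carries the integrated force error between calls."""
    u_p = ddxp_d + Kpv * (dxp_d - dxp) + Kpp * (xp_d - xp)
    int_ef = int_ef + (f_d - f) * dt
    u_f = ddxf_d + Kfp * (f_d - f) + Kfi * int_ef
    return u_p, u_f, int_ef

u_p, u_f, ie = decoupled_force_position_control(
    xp_d=np.zeros(6), xp=np.zeros(6), dxp_d=np.zeros(6), dxp=np.zeros(6),
    ddxp_d=np.zeros(6), ddxf_d=np.zeros(3),
    f_d=np.array([3.0, 0.0, 0.0]), f=np.zeros(3), int_ef=np.zeros(3), dt=0.001)
print(u_p, u_f)
```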
From equation (6.2), it seems that the desired position and force along the same task direction can be tracked simultaneously. This is only mathematically true; in practice, the environment plays an important role in the force control scheme. The desired output forces $f_x^d$ and $f_y^d$ in (6.2) are directly related to the environment dynamics. For instance, zero output force should be commanded in free space. For an object, the force and position are related by the object dynamics and the friction: the dynamics of the object and the friction determine the desired output force and the output position of the end effector. However, force and position do not have to be related through the mass of the object only. For some tasks, the force and position can be planned separately in the same task directions [66]. For instance, while a robot cooperates with a human, the output force of the human, $f_m$, is independent of the robot. This kind of environment can be considered a non-passive environment. In multi-robot coordination tasks, more than one robot can share a task; the robots can then be considered to be working in a non-passive environment, and the output force of an individual robot can be planned independently from its motion, as long as the composed force satisfies the constraints of the environment dynamics. In brief, the environment determines the desired output force and position, and force and position can be planned independently for certain applications, especially if the environment is non-passive or dynamic [66]. The cart pushing task is a typical example of a non-passive or dynamic environment, since the payload and friction of the cart are unknown. In this case, the advantage of decoupled force and position control lies in the fact that the interaction force with the dynamic environment can be regulated explicitly by considering the environmental dynamics, as shown in Figure 6.3. The force does not have to be regulated implicitly as in the implicit force control scheme. This ensures the bandwidth of the force control loop, and ensures that the force planned for manipulating the cart can be tracked.

6.3 Integrated Task Planning and Control

Due to their large workspace and dexterous manipulation capability, mobile manipulators can be used in a variety of applications. For some tasks, like painting and soldering, only the motion planning and control of the mobile manipulator are involved. For some complicated tasks, such as pushing a nonholonomic cart, the mobile manipulator maneuvers the object by its output forces. The task planning for manipulating a cart therefore involves three issues: the motion and force planning of the cart, the trajectory planning and control of the mobile manipulator, and the synchronization of the motion planners for the cart and the mobile manipulator.

Figure 6.4 shows a flow chart of the integrated task planning and control for pushing a nonholonomic cart. First, for a given task, the geometrical path needs to be determined considering the nonholonomic constraint of the cart. Secondly, a trajectory should be planned based on the geometrical path, and the input force for the cart to track the trajectory should be planned. Thirdly, the trajectory planning and coordination of the mobile manipulator [66] should be considered. Finally, the output force control of the mobile manipulator and the task planning should be integrated based on the decoupled system model (5.3).

In the cart pushing task, the mobile manipulator and the cart compose one system; therefore, the trajectory planning of the mobile manipulator and the cart should be synchronized. The event-based approach for the path planning and control of multirobot coordination [82] can be used to integrate the planning and control of the pushing task. A common motion reference $s$ is chosen for the trajectory planning of both the mobile manipulator and the cart. In other words, the evolution of the cart trajectory and the mobile manipulator trajectory is based on the same motion reference. It is worth noting that the common motion reference $s$ is computed based on the sensory information of the mobile manipulator.
Figure 6.4: Integrated task planning and control for the mobile manipulation of a nonholonomic cart (blocks: path planning of the nonholonomic cart; trajectory/force planning of the nonholonomic cart; online trajectory planning of the mobile manipulator; decoupled force/position control of the mobile manipulator; nonholonomic cart; sensory information of the cart and the mobile manipulator; motion reference; cart configuration)

6.3.1 Motion Planning of the Cart

The cart shown in Figure 6.5(b) is a nonholonomic system. Path planning for a nonholonomic system consists of finding a path connecting specified configurations that also satisfies the nonholonomic constraints. The cart model is similar to that of a nonholonomic mobile robot, except that the cart is a passive object. Therefore, many path planning algorithms for nonholonomic systems, such as steering using sinusoids [45] and the Goursat normal form approach [70], can be used for the motion planning of the cart. In this dissertation, the kinematic model is transformed into the chained form and the motion planning is considered as the solution of an optimal control problem. Consider the kinematic model of the cart:

$$\dot{x}_c = v_1\cos\theta_c, \tag{6.3}$$
$$\dot{y}_c = v_1\sin\theta_c, \tag{6.4}$$
$$\dot{\theta}_c = v_2, \tag{6.5}$$
$$\dot{x}_c\sin\theta_c - \dot{y}_c\cos\theta_c = 0, \tag{6.6}$$

where $v_1$ and $v_2$ are the forward velocity and the angular velocity of the cart respectively. An optimal trajectory connecting an initial configuration and a final configuration can be obtained by minimizing the control input energy $J = \int_{t_0}^{t_f}\|v(t)\|^2\, dt$. Given a geometric path, an event-based approach can be used to determine the trajectory [82].

Figure 6.5: A mobile manipulator (a) and a nonholonomic cart (b)

6.3.2 Force Planning of the Cart

The nonholonomic cart shown in Figure 6.5(b) is a passive object. Assuming that the force applied on the cart can be decomposed into $f_1$ and $f_2$, the dynamic model of the nonholonomic cart in the world frame $\Sigma$ can be represented by

$$\ddot{x}_c = -\frac{\lambda}{m_c}\sin\theta_c + \frac{f_1}{m_c}\cos\theta_c, \qquad \ddot{y}_c = \frac{\lambda}{m_c}\cos\theta_c + \frac{f_1}{m_c}\sin\theta_c, \qquad \ddot{\theta}_c = \frac{L f_2}{I_c}, \tag{6.7}$$

where $x_c$, $y_c$, $\theta_c$ form the configuration of the cart, $m_c$ and $I_c$ are the mass and inertia of the cart, and $\lambda$ is a Lagrange multiplier. As shown in Figure 6.5, the input forces applied onto the cart, $f_1$ and $f_2$, correspond to the desired output forces of the end effector, $f_x^d$ and $f_y^d$. In other words, the mobile manipulator controls the cart to achieve certain tasks through its output force. Therefore, manipulating the cart requires the motion planning and control of the mobile manipulator based on the decoupled model (5.3), and the motion and force planning of the cart based on its dynamic model (6.7). To track a given trajectory, the input forces applied onto the cart, $f_1$ and $f_2$, also need to be planned; the input force planning of the cart is equivalent to the output force planning of the mobile manipulator. The control input of the nonholonomic cart is determined by its dynamic model (6.7), the nonholonomic constraint (6.6) and its desired trajectory $\{x_c^d, y_c^d, \theta_c^d\}$. Due to the lack of a continuous time-invariant stabilizing feedback [9], output stabilization is considered in this dissertation.
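To illustrate the role of the planned forces $f_1$ and $f_2$, the sketch below integrates a reduced form of the cart model (6.7) in which the nonholonomic constraint (6.6) is enforced exactly. The cart mass and length follow the experiments in Section 6.4, while the inertia value, the lever-arm convention for $f_2$ and the integration step are assumptions made only for illustration.

```python
import numpy as np

def cart_step(state, f1, f2, m_c=45.0, I_c=10.0, L=0.89, dt=0.001):
    """One Euler step of a reduced cart model: the forward speed v1 is driven by f1,
    the heading rate v2 by the moment of f2, and the pose is integrated along the
    heading so that the no-side-slip constraint holds by construction."""
    x, y, th, v1, v2 = state
    dv1 = f1 / m_c             # f1 drives the forward motion
    dv2 = (L * f2) / I_c       # f2 turns the cart
    x += v1 * np.cos(th) * dt
    y += v1 * np.sin(th) * dt
    th += v2 * dt
    return np.array([x, y, th, v1 + dv1 * dt, v2 + dv2 * dt])

# Push for 2 s with a constant 5 N force, starting from the straight-line experiment pose.
state = np.array([0.0, 1.675, np.pi / 2, 0.0, 0.0])
for _ in range(2000):
    state = cart_step(state, f1=5.0, f2=0.0)
print(state)
```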
Choosing a manifold rather than a particular configuration as the desired system output, the system can be input-output linearized. By choosing $x_c$, $y_c$ as the system output, the system can be linearized with respect to the control inputs $f_1$ and $v_2$. This can be explained by the following derivations. From equations (6.3) and (6.4), it is easy to see that $v_1 = \dot{x}_c\cos\theta_c + \dot{y}_c\sin\theta_c$; here $v_1$ is the forward velocity along the $x'$ direction. Considering that the velocity along the $y'$ direction is $\dot{x}_c\sin\theta_c - \dot{y}_c\cos\theta_c = 0$, the following relations can be obtained:

$$v_1 = \dot{x}_c\cos\theta_c + \dot{y}_c\sin\theta_c, \qquad v_2 = \dot{\theta}_c.$$

Suppose the desired outputs of interest are $\{x_c, y_c\}$; the following input-output relation can then be obtained by differentiating equations (6.3) and (6.4):

$$\ddot{x}_c = \frac{1}{m_c}\cos\theta_c\, f_1 - v_1\sin\theta_c\, v_2, \qquad \ddot{y}_c = \frac{1}{m_c}\sin\theta_c\, f_1 + v_1\cos\theta_c\, v_2.$$

Considering $f_1$ and $v_2$ as the control inputs of the system, the input-output relation can be formulated in matrix form as $\{w_1, w_2\}^T = G\{f_1, v_2\}^T$, where

$$G = \begin{bmatrix} \frac{1}{m_c}\cos\theta_c & -v_1\sin\theta_c \\ \frac{1}{m_c}\sin\theta_c & v_1\cos\theta_c \end{bmatrix}.$$

Given a desired path of the cart, $x_c^d$, $y_c^d$, which satisfies the nonholonomic constraint, the controller can be designed as

$$w_1 = \ddot{x}_c^d + k_{px}(x_c^d - x_c) + k_{dx}(\dot{x}_c^d - \dot{x}_c), \qquad w_2 = \ddot{y}_c^d + k_{py}(y_c^d - y_c) + k_{dy}(\dot{y}_c^d - \dot{y}_c).$$

The force $f_1$ and the angular velocity $v_2$ can then be obtained from $\{f_1, v_2\}^T = G^{-1}\{w_1, w_2\}^T$. The physical meaning of this approach is that the control input $f_1$ generates the forward motion and $v_2$ controls the cart orientation such that $x_c^d$ and $y_c^d$ are tracked. However, the control input $v_2$ is an internal state controlled by $f_2$, the tangential force applied onto the cart. The control input $f_2$ can then be computed by the backstepping approach [35] based on the design of $v_2$. Defining $v_2 = \phi(x_c, y_c, \theta_c)$ and $z = v_2 - \phi$, equation (6.5) can be transformed into

$$\dot{z} = -\dot{\phi} + \frac{L}{I_c}f_2. \tag{6.8}$$

The control input $f_2$ can then simply be designed as

$$f_2 = \frac{I_c}{L}\big(\dot{\phi} - k_z(v_2 - \phi)\big). \tag{6.9}$$

It is worth noting that the desired cart configuration $\{x_c^d, y_c^d\}$ is the result of the trajectory planning. As shown in Figures 6.5 (a) and (b), the output force of the mobile manipulator corresponds to the control input force of the cart. Given the planned forces $f_1$ and $f_2$, the desired output forces of the mobile manipulator are

$$f_x^d = f_1\sin\theta_c - f_2\cos\theta_c, \qquad f_y^d = f_1\cos\theta_c + f_2\sin\theta_c.$$

To integrate the task planning and control, the decoupled force/position controller (6.2) can be used to track the desired trajectories.

6.3.3 Integration of Sensing, Task Planning and Control

To implement the force control of the cart, the actual configuration $\{x_c, y_c, \theta_c\}$ and the corresponding velocities need to be estimated. Since the cart has no sensors mounted on it, sensors on the mobile manipulator, such as the laser range finder, the encoders and the force/torque sensor, are used to estimate the configuration of the cart. To estimate the orientation of the cart in the moving frame attached to the mobile platform, the laser range finder is used. Figure 6.6 shows a configuration of the cart in the moving frame of the mobile platform. The laser sensor provides a polar range map of its environment in the form $(\alpha, r)$, where $\alpha$ is an angle in $[-\pi/2, \pi/2]$ which is discretized into 360 units; the range sensor provides a ranging resolution of 50 mm. Figure 6.7 (a) shows the raw data from the laser sensor. Obviously, the measurement of the cart is mixed with that of the surrounding environment: only a chunk of the data is useful and the rest should be filtered out. A sliding window $w = \{\alpha_1, \alpha_2\}$ is defined, and the data for $\alpha \notin w$ are discarded. To obtain the size of $w$, the encoder information of the mobile manipulator is fused with the laser data. Since the cart is held by the end effector, the approximate position $x_r$, $y_r$, as shown in Figure 6.6, can be obtained. Based on $x_r$, $y_r$ and the covariance of the measurement $r$, which is known, the sliding window size can be estimated. The data filtered by the sliding window are shown in Figure 6.7 (b). In this application, the cart position and orientation $(x_c, y_c, \theta_c)$ are the parameters to be estimated.
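A sketch of the sliding-window filtering described above is given below. The window-size rule and the numeric values are assumptions used only to illustrate how the encoder-predicted grasp position gates the laser returns.

```python
import numpy as np

def filter_cart_scan(alpha, r, x_r, y_r, sigma_r, half_width=0.35):
    """Keep only laser returns whose bearing falls in a window centred on the bearing
    of the grasp point (x_r, y_r) predicted from the arm encoders; the window grows
    with the range uncertainty sigma_r."""
    alpha = np.asarray(alpha, dtype=float)
    r = np.asarray(r, dtype=float)
    alpha_c = np.arctan2(y_r, x_r)                       # expected bearing of the cart handle
    w = half_width + 2.0 * sigma_r / max(np.hypot(x_r, y_r), 1e-6)
    keep = np.abs(alpha - alpha_c) <= w
    return alpha[keep], r[keep]

# 360 beams over [-pi/2, pi/2]; the cart handle is roughly straight ahead at 0.55 m.
alpha = np.linspace(-np.pi / 2, np.pi / 2, 360)
r = 2.0 + 0.1 * np.random.randn(360)
r[170:200] = 0.55
a_k, r_k = filter_cart_scan(alpha, r, x_r=0.55, y_r=0.0, sigma_r=0.05)
print(len(a_k))
```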
Assuming that the relative motion between the cart and the robot is slow, the cart position and orientation can be estimated by the standard Extended Kalman Filter (EKF) technique [31]. The cart dynamics are described by the vector function

$$\dot{X}_c = F(X_c, f) + \xi, \tag{6.10}$$

where $f$ is the control input of the cart, which can be measured by the force/torque sensor mounted on the end effector of the mobile manipulator, $X_c = \{x_c, \dot{x}_c, y_c, \dot{y}_c, \theta_c, \dot{\theta}_c\}$, and $\xi$ is a random vector with zero mean and covariance matrix $\sigma_\xi$. Considering the output vector $Z(t) = \{\alpha, r\}^T$, as shown in Figure 6.6, the output function is described as

$$Z(t) = G(X_c) + \zeta, \tag{6.11}$$

where $\zeta$ is a random vector with zero mean and covariance matrix $\sigma_\zeta$. It is worth noting that the nonholonomic constraint should be considered in (6.10) and that the measurement $(\alpha, r)$ should be transformed into the moving frame. By applying the standard EKF technique to (6.10) and (6.11), an estimate $\hat{X}_c$ of the cart state variables $X_c$ can be obtained. The dotted line in Figure 6.7 (b) shows an estimate of the cart configuration in the mobile frame $\Sigma_b$. It is worth noting that both the force/torque sensor information and the laser range finder information are used in the estimation of the cart configuration.

Figure 6.6: Configuration of the cart in the moving frame

To interact with carts of different weights and lengths, the cart parameters $m_c$, $I_c$ and the cart length $L$ can also be estimated based on the force/torque sensor and the accelerometer; for details see [64].

Figure 6.7: Laser range data ((a) raw data, (b) data filtered by the sliding window)
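The sketch below shows one predict/update cycle of an EKF of the form (6.10)-(6.11), driven by the measured pushing force and corrected by a single laser bearing/range measurement of the cart. The state ordering, the simplified process model (the robot is taken as the origin of the measurement frame) and the noise covariances are assumptions made for illustration, not the filter actually used in the experiments.

```python
import numpy as np

def ekf_step(x, P, f_meas, z, Q, R, dt, m_c=45.0, I_c=10.0, L=0.89):
    """One EKF cycle for the cart state x = (x_c, dx_c, y_c, dy_c, th_c, dth_c):
    predict with the force-driven model, then update with z = (bearing, range)
    of the cart relative to the robot (assumed at the origin here)."""
    f1, f2 = f_meas
    xc, dxc, yc, dyc, th, dth = x
    ax = (np.cos(th) * f1) / m_c
    ay = (np.sin(th) * f1) / m_c
    ath = (L * f2) / I_c
    x_pred = np.array([xc + dxc * dt, dxc + ax * dt,
                       yc + dyc * dt, dyc + ay * dt,
                       th + dth * dt, dth + ath * dt])
    F = np.eye(6)
    F[0, 1] = F[2, 3] = F[4, 5] = dt
    P = F @ P @ F.T + Q
    # measurement model: bearing and range of the cart position (x_c, y_c)
    xc, _, yc, _, _, _ = x_pred
    rho = np.hypot(xc, yc)
    h = np.array([np.arctan2(yc, xc), rho])
    H = np.zeros((2, 6))
    H[0, 0], H[0, 2] = -yc / rho**2, xc / rho**2
    H[1, 0], H[1, 2] = xc / rho, yc / rho
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (np.asarray(z) - h)
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new

x = np.array([0.6, 0.0, 0.0, 0.0, 0.0, 0.0])
P = np.eye(6) * 0.1
x, P = ekf_step(x, P, f_meas=(5.0, 0.0), z=(0.02, 0.62),
                Q=np.eye(6) * 1e-4, R=np.diag([1e-3, 2.5e-3]), dt=0.01)
print(x)
```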
6.4 Experimental Implementation and Results

The path planning and control approaches have been implemented on a mobile manipulator consisting of a Nomadic XR4000 mobile robot and a Puma 560 robot arm, as shown in Figure 1.3. For the nonholonomic cart, the mass is 45 kg and the cart length is 0.89 m. The gripper of the mobile manipulator held the handle of the cart during the manipulation.

6.4.1 Pushing Along a Straight Line

Figure 6.8: Pushing the cart along a straight line

Figure 6.8 shows the results of pushing a cart along a straight line using the mobile manipulator. In this experiment, the task was to push the cart along the y direction in frame $\Sigma$ from 1.675 m to 2.675 m. Figure 6.8 (a) shows the desired output force and the actual output force in the y direction, and Figure 6.8 (b) records the force tracking error. It is worth noting that the static friction force was also overcome in the first two seconds of the task. Figures 6.8 (c) and (d) show the trajectory of the cart and the cart orientation respectively. The position and the orientation of the cart were estimated by fusing the sensor information on the mobile manipulator; the laser range sensor and the encoders of the mobile platform and the robot arm were used in the estimation. In this experiment, the motion in the y direction was chosen as the motion reference to synchronize the motion planning of the cart and the mobile manipulator. This experiment demonstrated that the desired force and the desired trajectory of the cart were tracked with satisfactory precision.

For the same task, Figure 6.9 shows the experimental results when an unexpected obstacle blocked the mobile manipulator along the straight line. At about 7.5 s, the system is blocked by the obstacle and the motion reference stops evolving. As a result, the desired output force is set to zero and the cart stops moving. When the obstacle is removed at about 14 s, the motion reference starts evolving again, the desired output force is computed based on the desired motion of the cart, and the pushing task is resumed. It can be seen that the motions of the cart and the mobile manipulation system were synchronized by a common motion reference.

Figure 6.9: Pushing with unexpected obstacle

6.4.2 Turning the Cart at a Corner

Pushing the cart along a straight line is relatively easy, since the desired end effector position and output force are easy to plan. The second experiment considered a more complex task. The mobile manipulator first pushed the cart forward 0.4 m along the y direction in about 20 seconds, made a turn in about 35 seconds, and then pushed the cart forward again for 0.4 m along the x direction. Figure 6.10(a) shows the trajectories of the cart, $x_c$ and $y_c$, and the end effector trajectory, $p_x$ and $p_y$. Figure 6.10(b) is the cart orientation $\theta_c$ with respect to time.
At t = 258 or y6 = 1.9m, the cart slipped on the floor and the cart orientation was changed, the mobile manipulator still can manipulate the cart such that the desired trajectory was tracked. In this experiments, the motion of the cart in y direction, ye, was chosen as the motion reference. The experiments results has shown that the integrated task planning and control approach enables the mobile manipulator to perform complicated tasks with satisfactory performance. In all the experiments, both the desired cart trajectory and desired output force were tracked. 125 CHAPTER 7 CONCLUSIONS The major results of this work can be concluded in four parts, the motion reference project method and its application in mobile robots, robot arm singularity analysis and its application to singularityless tracking control and mobile manipulator coordi- nation, the unified model of mobile manipulators and an online redundancy resolution scheme, force / positon control along the same task direction and its application to ma- nipulation of moving objects. First, a new design method for non-time based tracking controller has been de- veloped. It converts a controller designed using traditional time-based approach to a new non-time based controller. This approach significantly simplifies the design procedure to achieve a non-time based tracking controller. The results provides an efficient and systematic approach to design non-time based tracking controller for general nonlinear dynamical systems. It is an important step in the development of a theoretical foundation for sensor-referenced intelligent control. The non-time based approach is also a key step in design the coordination of the mobile platform and the manipulator arm. In Chapter 3, this method is applied to tracking controllers of nonholonomic mobile platforms. The experimental results have clearly demonstrated the advantage of the non-time based tracking controller in the presence of unexpected obstacles. The singularity of manipulators and mobile manipulators are studied. For the ma- nipulator arm, a singularity—free robot motion controller which can work at or near singular configurations is presented. Furthermore, the stability of the hybrid con- troller has been theoretically proven. The primary contribution of the hybrid system approach is that the motion controller can satisfy different constraints in different workspace of the robot. The continuous dynamics have different state variable and the supervisory control is represented in Max-Plus algebra. The stability of such a 126 switching control system is analyzed by Multiple Lyapunov functions. For the mobile manipulator, the kinematic redundancy is used to avoid singular arm configurations, the redundancy resolution is therefore solved by the coordination scheme of the mobile manipulator. A unified dynamic model for the mobile manipulator is derived and nonlinear feedback is applied to linearize and decouple this model. The kinematic redundancy and dynamic properties of the platform and the robot arm have been considered in the unified model. Based on this model, mobile manipulator tasks such as path tracking control, force and position control can be easily designed and kinematic redundancy can be used to accomplished secondary tasks, such as controlling force and position along the same direction. An event-based coordination scheme is proposed for the cooperation of the platform and the robot arm. 
This online approach is suitable for a point-to-point task, tale-operation and a sequence of tasks. The mobile manipulator system is stable even in the case of appearance of unexpected obstacles and the manipulation capability can adapt to a best possible configuration during the operation. Based on the decoupled and linearized model, this dissertation further discusses the force / position control along the same task direction utilizing the kinematic redun- dancy of mobile manipulators. For tasks that require both force and position control along the same task direction, such as pushing a nonholonomic cart, an integrated task planning and control for mobile manipulators is proposed. The approach for ma- nipulating the nonholonomic cart integrates the planning and control of the mobile manipulator, the motion planning and force planning of the nonholonomic cart, and the estimation of the cart configuration based on the sensors of the mobile manipu- lators. The integrated task planning and control enables the robot to interact with moving objects and complete complicated tasks by the output force. The approaches discussed in this dissertation can be used for the motion planning and control of mo— 127 bile manipulators which can find applications in manufacturing, hospitals, homes and offices. Experimental results have demonstrated the advantage of the integrated task planning and control approach. 128 l1] [2] [3] [4] [5] [6] [7] l8] [9] BIBLIOGRAPHY Manuals of QNX Operating System. QNX Software Systems Ltd. R. J. Anderson and M. W. Spong. Hybrid impedance control of robotic manip- ulators. IEEE Journal on Robotics and Automation, 4(5):549—556, 1988. G. Antonelli, N. Sarkar, and S. Chiaverini. External force control for underwater vehicle manipulator systems. In Proceedings of IEEE Conference on Decision and Control, pages 2975 —2980, Phoenix, Arizona, Dec 1999. F. L. Baccelli, G. Cohen, G. J. Olsder, and J. Quadrat. Synchronization and Linearity, an Algebra for Discrete Event Systems. John Wiley & Sons, 1992. N. S. Bedrossian. Classification of singular configurations for redundant ma- nipulators. In Proceedings of IEEE International Conference on Robotics and Automation, pages 818 —823, 1990. A. K. Bejczy and S. Lee. Robot arm dynamic model reduction for control. In Proceedings of IEEE Conference on Decision and Control, pages 1466-1476, San Antonio, Texas, 1983. B. E. Bishop and M. W. Spong. Control of redundant manipulators using logic- based switching. In Proceedings of IEEE Conference on Decision and Control, pages 1488—1493, Tampa, Florida, 1998. M. S. Branicky. Multiple lyapunov functions and other analysis tools for switched and hybrid systems. IEEE Transactions on Automatic Control, 43(4):475~—482, 1998. R. W. Brockett. Asymptotic stability and feedback stabilization. Differential Geometric Control Theory by Brockett, R. W., Millman, R. S., and Sussmann, J. J., 27:181—208, 1983. 129 [10] [11] [12] [13] [14] [15] [16] [17] L. G. Bushnell, D. M. Tibury, and S. S. Sastry. Steering three-input nonholo- nomic systems: The fire truck example. The International Journal of Robotics Research, 14(4):366—381, 1995. F. Caccavale, S. Chiaverini, and B. Siciliano. Second-order kinematic control of robot manipulators with jacobian damped least-squares inverse: Theory and experiments. 2(3):188—194, 1997. P. H. Chang, K. C. Park, and S. Lee. An extension to operational space for kinematically redundant manipulators: Kinematics and dynamics. 