SUPERMEDIA ENHANCED INTERNET-BASED REAL-TIME TELEROBOTIC OPERATIONS

By

Imad Hanna Elhajj

A Dissertation

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Electrical and Computer Engineering

2002

ABSTRACT

Supermedia Enhanced Internet-Based Real-Time Telerobotic Operations

By Imad Hanna Elhajj

THE Internet has added a new dimension to human life. Once the extent of its financial possibilities was revealed, Internet-based applications and services expanded to cover every aspect of our lives. A person can virtually trade, learn, play, control and do many other actions over the Internet. However, all Internet-based activities are still referred to as virtual, since the experience is still far from being genuine. The main difference between the virtual and real worlds is the quality of the human experience. This experience is impeded when the Internet is used because of limited human sensory interaction with the remote object and environment.

The research carried out here is an attempt to bridge the gap between the real and virtual worlds. This bridge will create the link that human senses need in order to be remotely coupled with the environment. This research might be the closest technology will get to achieving teleportation in the near future. The basic idea is to feed back, in real time, the sensory information available from the remote machine to the human operator. This sensory information can relate to any of the human senses: tactile, heat, visual and auditory. However, coupling the human with the remote environment via the Internet has many challenges.
These challenges relate to human-machine interfacing, remote machine stability, operating efficiency and safety. What creates these challenges are the delay characteristics of the Internet. The delay experienced over the Internet is random and nondeterministic, with no simple stochastic model. Other difficulties include the existence of a human in the control loop and the non-determinism of the environment or the path.

This research presents a general method for the modeling, control and analysis of real-time bilateral teleoperation systems via the Internet. Petri Nets are suggested as a modeling, design and analysis tool, and event-based control theory is modified for use in the planning and control of bilateral teleoperation systems. The research shows that this method results in a stable system in spite of the existence of nondeterministic time delay. Moreover, the concepts of event-transparency and event-synchronization for event-based telerobotic control systems are developed and analyzed, and the implications of these concepts for system performance are studied. Furthermore, event-transparency and time-transparency are compared. Also presented is the design of event-based control teleoperation systems that satisfy stability, event-transparency and event-synchronization under random time delay conditions. This design method can be applied to a wide range of systems with different environments, human operators, control commands and supermedia (sensory information: video, audio, haptic, temperature and others) feedback. In addition, the concept of tele-coordination is developed and its implications for the performance of multi-robot coordination systems are illustrated.

Several different systems were developed, and all the experiments confirmed the theory presented. Supermedia enhanced real-time Internet-based robotic operations were carried out successfully in a stable, event-transparent and event-synchronized fashion.
Copyright by
IMAD HANNA ELHAJJ
2002

To my great and loving Family... Benedicat vos Deus.

ACKNOWLEDGEMENTS

I would like to acknowledge all the invaluable help from Dr. Ning Xi, without whom this would not have been possible. His mentoring over the years has been enlightening. This research is the result of numerous discussions with Dr. Xi, his keen insight and his support.

I would like to thank all my committee members: Dr. Hassan Khalil, Dr. Fathi Salam, Dr. Matt Mutka and Dr. Hairong Li. In addition to their invaluable time spent on the committee, their insightful comments and suggestions enhanced the technical soundness of this research.

I also would like to thank Dr. Yun Hui Liu, Dr. Wen J. Li and Dr. Wai Keung Ehng from the Chinese University of Hong Kong and Dr. Toshio Fukuda from Nagoya University for their assistance in the many experiments we did over the years. My apologies for keeping you up so many times till the early morning hours.

I am grateful to my friends and colleagues from the Robotics and Automation Laboratory at Michigan State University. Especially, I would like to thank Dr. Jindong Tan for his assistance in the implementation and testing of several of the systems, and for the many technically stimulating conversations. Also I would like to thank Mr. Amit Goradia, Miss Meng-Meng Yu and Mr. BooHeon Song for their great help in implementing many of the systems.

I am thankful to The National Science Foundation (NSF) and The Defense Advanced Research Projects Agency (DARPA) for providing the financial support for this research.

Last but not least, my thanks go to my family: my grandparents, my parents, my brother and my sister. Without their encouragement, support, understanding, sacrifices and love none of this would have been possible.

TABLE OF CONTENTS

1 INTRODUCTION 1
1.1 Applications and Motivations 3
1.2 Supermedia Enhancement 5
1.3 Challenges and Difficulties 7
1.4 Overview of Existing Methods and Approaches 8
1.4.1 Robotics Approach 9
1.4.2 Communication Approach 10
1.5 Limitations of Existing Solutions 11
1.6 Previous Internet Teleoperation and Their Limitations 12
1.7 Outline 15
2 THE INTERNET: REAL-TIME CONTROL MEDIA 17
2.1 Characteristics of Internet Communication 17
2.2 Effects of Random Time Delay on Real-Time Control 20
3 EVENT-BASED PLANNING AND CONTROL FOR REAL-TIME TELEOPERATION 22
3.1 Event-Based Real-Time Planning and Control 22
3.2 Stability Analysis 25
3.3 Event-Transparency Analysis 27
3.4 Event-Synchronization Analysis 36
4 MODELING OF EVENT-BASED REAL-TIME TELEOPERATION SYSTEMS 39
4.1 Modeling and Analysis 39
4.1.1 Teleoperation System: Single Operator and Single Robot 39
4.1.2 Tele-cooperation System: Multi Operators and Multi Robots 47
4.2 Petri Net Model 51
4.3 Comparison with Other Models and Approaches 60
5 DESIGN OF EVENT-BASED REAL-TIME TELEOPERATION SYSTEMS 63
5.1 Design of Stable and Event-Transparent Teleoperation Systems
5.2 Design of Event-Synchronous Teleoperation Systems
6 MULTI-ROBOT TELE-COORDINATION
6.1 Modeling and Analysis of Tele-coordination Systems
6.2 Control of Tele-coordination Systems
7 IMPLEMENTATION AND EXPERIMENTATION
7.1 General Hardware Architecture and Specifications
7.2 General Software Architecture and Specifications
7.3 Specification and Performance of Internet Connections
7.4 Mobile Robot
7.4.1 System Implementation
7.4.2 Experimental Results
7.5 Mobile Manipulator: Velocity Control
7.5.1 System Implementation
7.5.2 Experimental Results
7.6 Mobile Manipulator: Position Control
7.6.1 System Implementation
7.6.2 Experimental Results
7.7 Multi-Site Multi-Operator Tele-Cooperation
7.7.1 System Implementation
7.7.2 Experimental Results
7.8 Multi-Site Multi-Operator Tele-Coordination
7.8.1 System Implementation
7.8.2 Experimental Results 148
7.9 Tele-autonomous Control of Robot Formations 152
7.9.1 System Implementation 155
7.9.2 Experimental Results 157
7.10 Supermedia Enhanced Teleoperation 158
7.10.1 System Implementation 159
7.10.2 Experimental Results 161
7.11 Micro Manipulator 167
7.11.1 System Implementation 168
7.11.2 Experimental Results 169
7.12 Summary of Experimental Results 171
8 SUMMARY, CONCLUSIONS AND FUTURE WORK 172
8.1 Summary 172
8.2 Conclusions 174
8.3 Future Work 176
BIBLIOGRAPHY 178

LIST OF TABLES

4.1 The explanation of the various variables in the teleoperation systems. 40
4.2 Explanation of the various variables in Figure 4.3. 48
4.3 Explanation of the various variables in Figure 4.4. 49
7.1 Specifications of most of the hardware used. 88
LIST OF FIGURES

1.1 The general architecture of supermedia enhanced teleoperation systems.
2.1 An illustration of the Internet topology.
2.2 Inter-arrival times of groups of packets on the August 1989 Bellcore ethernet trace [39] [40].
3.1 The comparison between traditional time-based and event-based planning and control.
3.2 Illustration of the buffering effect of delay in time-based planning and control, and the elimination of this effect in event-based planning and control.
3.3 Sampling of signals under different conditions.
4.1 Block diagram of a traditional teleoperation system.
4.2 Detailed block diagram of some of the event-based teleoperation systems developed.
4.3 Detailed block diagram of a general multi-operator multi-robot tele-collaborative control system.
4.4 Block diagram of the multi-operator mobile manipulator teleoperation system implemented.
4.5 Petri Net model of the system shown in Figure 4.2.
4.6 The coverability tree of the model shown in Figure 4.5.
4.7 Petri Net model of the system shown in Figure 4.4.
4.8 Reduction rules used to transform the model into a simpler one: (a) Fusion of series places (b) Fusion of series transitions (c) Fusion of parallel places (d) Elimination of self-loop places.
4.9 The reduced form of the model shown in Figure 4.7.
4.10 The coverability tree of the reduced model.
4.11 Network representation of teleoperator. The different terms are as explained in Table 4.1.
4.12 Transformer.
5.1 Petri Net model of a typical non event-synchronous teleoperation system.
5.2 The equivalent reduced model of the system shown in Figure 5.1.
5.3 Petri Net model of the controlled net.
5.4 The coverability tree of the net shown in Figure 5.3.
5.5 The modified net required to satisfy fairness.
5.6 The coverability tree of the net shown in Figure 5.5.
5.7 The Petri Net model at different design stages for an event-synchronous multi-operator multi-robot teleoperation system.
5.8 The resultant net from reducing the net shown in Figure 5.7 and its coverability tree.
6.1 The general structure of a multi-robot tele-coordination system.
6.2 Two mobile manipulators moving an object.
7.1 The hardware architecture shared by most of the systems developed.
7.2 The remote temperature sensing and rendering system architecture.
7.3 The architecture of the temperature rendering device developed.
7.4 The software architecture shared by most of the systems developed.
7.5 The XR4000 mobile robot.
7.6 A top view of the robot that gives the distribution and numbering of the different sensors.
7.7 Flow chart for deciding velocity based on sensors.
7.8 The fraction of velocity that is deducted based on the distance d in meters to the closest obstacle.
7.9 The two safety regions around the robot according to the direction of motion. 98
7.10 Round trip delay in milliseconds for packets transmitted between the robot and the MSU station. 99
7.11 Round trip delay in milliseconds for packets transmitted between the robot and the Hong Kong station. 99
7.12 Round trip delay in milliseconds for packets transmitted between the robot and the Japan station. 100
7.13 The frequency of control and feedback signals between the robot and the Hong Kong station. 100
7.14 Round trip delay in milliseconds for packets transmitted between the robot and the Hong Kong station. 101
7.15 The general structure of the mobile teleoperation system developed. 102
7.16 The hardware architecture of the mobile teleoperation system. 103
7.17 Mobile server, Mobile/Camera Client, and Camera server general flow chart. 104
7.18 The behavior of the system under normal operation with relatively far obstacles. 106
7.19 The behavior of the system under normal operation. 107
7.20 Comparison between the desired velocities and the sum of the actual velocities and the forces felt. 108
7.21 Comparison between tracking error and force felt for normal operation. 109
7.22 The behavior of the system under 2 seconds of round trip delay. 110
7.23 Comparison between the desired velocities and the sum of the actual velocities and the forces felt under 2 seconds of round trip delay. 111
7.24 Comparison between tracking error and force felt for the 2 seconds delay operation. 111
7.25 The behavior of the system under random round trip delay, ranging from 1 to 4 seconds.
7.26 Comparison between the desired velocities and the sum of the actual velocities and the forces felt with simulated random delay.
7.27 Comparison between tracking error and force felt for random delay operation.
7.28 The behavior of the system during the control from Hong Kong.
7.29 The behavior of the system during the control from Hong Kong.
7.30 Comparison between the desired velocities and the sum of the actual velocities and the forces felt for the two Hong Kong experiments.
7.31 Comparison between tracking error and force felt for the two Hong Kong experiments.
7.32 The behavior of the system during operation from Hong Kong.
7.33 Comparison between the desired velocities and the sum of the actual velocities and the forces felt for teleoperation from Hong Kong.
7.34 Comparison between tracking error and force felt for teleoperation from Hong Kong.
7.35 The behavior of the system during operation from Hong Kong with event-based reference as distance.
7.36 Comparison between the desired velocities and the sum of the actual velocities and the forces felt with event-based reference as distance.
7.37 Comparison between tracking error and force felt for teleoperation from Hong Kong with event-based reference as distance.
7.38 The behavior of the system during operation from Japan.
7.39 Comparison between the desired velocities and the sum of the actual velocities and the forces felt for teleoperation from Japan.
7.40 Comparison between tracking error and force felt for teleoperation from Japan.
7.41 General structure of a mobile manipulator teleoperation system.
7.42 Hardware architecture of a mobile manipulator teleoperation system.
7.43 Velocity performance of the mobile manipulator teleoperation system.
7.44 Force performance of the mobile manipulator teleoperation system.
7.45 The architecture of a supermedia enhanced PUMA manipulator teleoperation system.
7.46 Comparison between the continuous force sensed by the robot and the discrete force fed back, both with respect to time.
7.47 Comparison between the force fed back from the robot and the force received by the master, both with respect to the event-based reference.
7.48 Comparison between the discrete force received by the phantom and the actual continuous force rendered by the phantom, both with respect to the event-based reference.
7.49 Comparison between the desired position and the actual position, both with respect to the event-based reference.
7.50 The structure of a multi-operator at multi-site collaborative teleoperation system.
7.51 Hardware architecture of the multi-operator at multi-site collaborative teleoperation system.
7.52 Multi-operator at multi-site collaborative teleoperation system behavior while being controlled from Hong Kong and Japan.
7.53 Multi-operator at multi-site collaborative teleoperation system behavior while being controlled from Hong Kong and Japan.
7.54 The experimental setup.
7.55 The architecture of a supermedia enhanced multi-mobile manipulator tele-coordination system.
7.56 The architecture of the multi-operator multi-robot tele-coordination system implemented.
7.57 Tele-coordination results of two mobile manipulators moving a metal rod.
7.58 Tele-coordination results of two mobile manipulators moving a metal rod while a human is applying direct forces to the rod.
7.59 Two mobile manipulators position tracking to the force applied by a human on the manipulated object.
7.60 The general architecture of a tele-autonomous formation control system. 153
7.61 The architecture of the tele-autonomous robot formation system developed.
7.62 Experimental results under tele-autonomous operation with obstacles.
7.63 The architecture of a supermedia enhanced teleoperation system.
7.64 The architecture of the supermedia enhanced teleoperation system developed.
7.65 Performance of the system while being controlled from Michigan with no considerable delay.
7.66 Performance of the system while being controlled from Hong Kong with random delay.
7.67 Performance of the system while being controlled from Hong Kong with random delay.
7.68 The sampling and rendering of the video frames and the force with respect to the event-based action reference.
7.69 The structure of a micro-manipulator teleoperation system.
7.70 Hardware architecture of the micro-manipulator teleoperation system.
7.71 Plots of the desired position increments and the force felt by the operator. 170
7.72 Comparison between the force felt and the one sensed. 171

CHAPTER 1

INTRODUCTION

SINCE the days of Karel Capek's R.U.R. (Rossum's Universal Robots) play, which was written in 1921, translated to English in 1923 and presented in London and New York, humans have been fascinated with robots. The word robot is derived from the Czech robota, meaning "servitude, forced labor". However, robotics did not experience rapid growth until the seventies and eighties, when enabling technologies became available.

Although robotic technology is young, it is growing at a fast pace. Roboticists claim that robots are analogous to computers, and they estimate that the use and availability of robots will become widespread. There will be robots in every house and business.
Naturally, as was the case with computers, the remote operation and control of robots is going to be a highly desirable feature. So eventually, similar to the Internet, there will be a robonet, where robots are connected to computers, other robots and even humans. For the time being, technology goes as far as allowing for the remote control of robots, referred to as teleoperation. Teleoperation takes several forms and can be done via any communication medium, wired or wireless. Recently the main focus has been on teleoperation via the Internet.

Motivated by the availability, wide coverage and low cost of the Internet, many researchers have focused on Internet-based teleoperation. However, all real-time Internet-based teleoperation systems face the same main hurdle: delay. Although delay is present in any communication network, it has distinctive characteristics over the Internet that make it a more difficult problem to deal with. For data transmission, delay is not as important an issue as it is for control transmission. Considering the Internet as an action super-highway, via which control and supermedia (video, audio, haptic, temperature and others) are being transmitted, time
Research even showed that robust stability and transparency can be conflicting design goals in teleoperation systems under time-based control [9]. In addition, synchronization in such systems has not been addressed. Most of the synchronization work found in the literature relate to multimedia synchronization in open 100p systems [15]-[20]. This dissertation presents a general framework for the modeling, analysis and control of supermedia enhanced real-time Internet-based teleoperation systems. This approach adopts event-based control to ensure the stability of bilateral teleoperation systems [6]. In addition, the transparency and synchronization of event-based systems are discussed. Event-transparency and event-synchronization concepts are introduced and their advantages are analyzed. Moreover, the modeling of real-time teleoperation systems is presented with Petri Net recommended as the tool for modeling, analysis and design. Design procedures to ensure, stability, event-transparency and event- synchronization are detailed. The advantage of these features is illustrated using the developed concept of multi-robot tele-coordination. Experimental results of the Internet based teleoperation of several developed sys- tems verify the analysis. In these experiments, supermedia is fed back from the environment in the form of force, video and/ or temperature. These are rendered to the operator in their original forms. It is shown that this approach results in a stable, event-transparent and event—synchronized systems regardless of time delay. The coming subsections will detail the applications and motivations for teleOp- eration in general and supermedia enhanced teleOperation. The challenges that face to 7. l {P.‘(i. “(:5 34:111.? P); Quay, L assistant ’ Tt‘zp-Ku‘. ' [f"‘"\ - “u“.g It; (r , . l plfllng. . “vibe g teleoperation and supermedia enhanced teleOperation are also presented. 
Then the detailed literature relating to all the aspects of this research are surveyed, pointing out their limitations. 1.1 Applications and Motivations Teleoperation, which is simply the remote control of a machine, is not a new concept. Because of its many benefits, such as increasing the reachability and safety of humans, it has been extensively researched and studied. The interest in teleoperation steams from this technology’s many applications some of which are: 1. Tele-medicine(remote checkup and surgery): Despite all the skeptics, this ap- plication is getting major attention from the research community and is expe- riencing significant advances [21]-[23]. The ultimate goal is to have a highly reliable and accurate system that can be trusted with human life. This will allow experts to treat and to operate on patients who are thousands of miles away. Uses range from trauma care and home care (elderly and disabled remote assistance) to physical therapy and specialized surgery. 2. Tele-manufacturing: The ability to use sophisticated and expensive manufac- turing facilities by several users around the world is very cost effective [24]. 3. Exploration (sea and space): This application became very popular during the NASA Mars Pathfinder Mission [25]. There are various obvious advantages for this application and the importance of teleoperation for space and deep sea exploration is clear, especially in the safety and cost reduction aspects [26] [27]. 4. Hazardous material manipulation: One of the first noticed applications of tele- operation is handling hazardous material; such as, radioactive, chemical or ex- plosive substances [28]. For example, teleoperation can be a natural answer for - T6294 (’5' e 00 -v . & K109. If 10. 11. chemical spill cleanup. . Teleoperated games (entertainment): Several teleoperated games are found on the Internet; for example, in Germany there is an interactive railroad that can be Operated remotely. 
The entertainment industry was one of the first to notice the importance Of networking thus strived to link players all over the world with networked games. Now there is the possibility to link several players with several machines or robots. Tele-service (monitoring and maintenance): Machines and complete factories can be monitored from hundreds of miles away. These can also be maintained and controlled remotely. Law enforcement: Several law enforcement agencies already own robots that can be teleOperated to replace humans in hazardous situations and environments. Search and recovery: multiple robots can be used to cover unaccessible areas in search and recovery efforts. Several robots were used for assistance in the recov- ery efforts at “Ground Zero” shortly after the World Trade Center catastrophe [29]. Tele—commerce (sales and demonstration): Customers have the ability to re- motely test a machine before buying it with no need for traveling or shipping. Education: Several robots are used online for educational purposes whether in a formal or informal setup [30]. Imagine! Teleoperation’s application is limited by the imagination. The In- ternet is a living proof that if you “build it, and they will come”; create the technology and people will imagine applications for it [31]. v. v-r ' ' ‘ I g ‘. LOO C. . ‘ 1 - I‘JIJI_IL . }1 ‘ a»; ., au“ La:.f’.“ .L ' “'1 v: Q. . ‘u-hdmh. T . and Eng»..- V ..- b‘ Ité‘ III 1‘ Regardless of the application, the human experience during teleoperation does not quiet compare to direct interaction because of the limited remote sensory capabilities. To enhance the experience, supermedia feedback is suggested in the next section. 1.2 Supermedia Enhancement From the point of view of an operator’s experience, teleoperation is far from being close to the direct Operation. This is due to the fact that the operator’s remote sensing capabilities are limited, which disconnects the Operator from the environment and the task. 
It was shown that Operators’ performance is improved in teleoperation when having haptic feedback [4] [32] [33]. “Of or relating to or proceeding from the sense of touch”, such or a slight alteration is the definition found for haptic. Derived from the Greek “haptikos” meaning “to touch”, haptics is divided into two major categories Of sensory information: tactile and kinesthetic. Tactile sensing includes temperature, skin curvature and stretch, vibration, slip, pressure and contact force. It gives a perception of surface textures and geometry. Kinesthetic information relates to the physical forces applied to and by the body. This includes sensing and awareness of body position and orientation, which is also known as proprioception. When haptic feedback is present, the operator is said to be kinethetically coupled to the environment, and the teleOperator is said to be controlled bilaterally. The most common form of haptic feedback used in teleOperation is force. However, there are many applications which require additional types of feedback for feasibility and safety. An example is classification and identification, where temperature is required to identify certain Objects and materials [34] [35]. Also in some medical applications the temperatUre change of the organ operated on is critical for safety. Therefore, supermedia enhanced teleoperation systems, similar to the one shown in Figure 1.1, are required. Supermedia is used to describe the collection of all the 5 ,-. Commands ,— l ....................... ‘~: ~. Internet . : ___________ 1. unmet--- ,3" Video. Haptic & Temperature?” Sensors: Forcefl‘orque Sensor , 0‘“ I I “In -, g f’ 4. if I? 9 1., ,__Intem9_t _________ . [uuumlh'teuniétm TcmpcratureSensor _..__-----i------_--..‘", Video. Hapuc & Temperature Hong Kong Figure 1.1: The general architecture of supermedia enhanced teleoperation systems. feedback streams, such as haptic, video, audio, temperature and others. 
Supermedia enhances efficiency, increases the Operation’s safety and provides telepresence, which is the human experience of existing in a remote location. Also experiencing supermedia feedback is an unusual and fun experience which makes it a very attractive feature in entertainment applications. Teleoperation in general, can be carried out using any kind of communication media. But since the Internet had matured into what it is today, interest has been in teleoperating via the Internet. The use Of the Internet is motivated by several reasons; in comparison to a dedicated link, the Internet is less expensive (free in many cases), more publicly available, has world wide coverage and does not require any special hardware. However, with all these advantages comes several disadvantages, which make teleoperation via the Internet very challenging. These challenges are the topic Of the next section. p m.- z. _. _ n- ' " "-H ”rial P ‘2' 'W.'f‘.‘ ’1‘. Gus”... ' " no '101" V P.64‘. 5“ C-~ 4,“ x . 3v 7:. ~- («..f;‘3.41 5. features t-f '. kit of ti.» up Kid; "v HQWPWI. in Of 5W; new: The in. In Its CHI-Ti" such as tel. . [truer a m: I Community f is any I delr . ""3 Can I... a Commumr Orbit df’a“ _. , . 1.3 Challenges and Difficulties For teleoperation systems to become widely acceptable, several safety and perfor- mance specifications have to be met. Stability, transparency and synchronization are among the main features that are desired in any teleoperation system. The teleop- eration system should be stable under all Operation conditions. Transparency, which relates to how realistic the operation is, should be Offered. Also synchronization should be present between the Operator and the remote machine and between the different supermedia streams. In addition, reliability, safety and efficiency should be features Of the teleOperation system. All these features have to be guaranteed regard- less of the characteristics of the communication network used. 
However, ensuring these properties in teleoperation systems, specifically Internet-based ones, is challenging. The main difficulties relate to the lack of a general design method for such systems. Also unique to teleoperation control systems is the existence of the human operator in the control loop, which adds to the complexity of the design and analysis process. In addition, teleoperation systems are clearly characterized by non-determinism in the environment or the path, which makes some assumptions limiting. However, by far the main difficulties are caused by the communication characteristics of such networks, especially the Internet. The Internet does not offer any guarantees for bandwidth or delay. The Internet, in its current form, was not designed and implemented with applications such as teleoperation in mind. Although many changes are being proposed to make the Internet a more diverse communication medium, it still won't be able to guarantee communication parameters. As with any communication link, the Internet introduces delay into the system. This delay can be due to several reasons: propagation, congestion, or low resources. For a communication link between the ground station and a space telerobot in low earth orbit, delay is expected to be as long as 2-8 seconds [1] [27] [36]-[38]. The same amount of delay can be expected for deep sea operations. As for the Internet, the delay is unpredictable, no upper bound can be specified because of packet loss, and it could have a very complex stochastic model. Several attempts have been made to model delay over the Internet, and several complex models have been derived [39]. In [39] a wavelet-based model was derived (wavelet analysis being well suited to statistically self-similar processes).
This delay was shown to be self-similar in a statistical sense [40]. In addition, packets sent over the Internet can be lost and can arrive out of order, which is why the TCP/IP communication protocol was used, since it ensures the arrival of all packets in order. More severely, Internet connections can be lost, which is why several design precautions have to be taken. All these communication features become significant difficulties when designing teleoperation systems that must enjoy certain performance features. For instance, delay can render the system unstable if time-based control is used. It was mentioned in several references that delays on the order of a tenth of a second destabilize the teleoperator [1]-[5] [9] [27] [32] [36]-[38] [41]-[45]. When haptic feedback is used, the system was stabilized only when the bandwidth was severely reduced. In addition to instability, desynchronization occurs between the master and slave, and the transparency of the system is lost. Several approaches have been suggested to solve the above mentioned problems, and these, with their limitations, will be the topic of sections 1.4 and 1.5.

1.4 Overview of Existing Methods and Approaches

What follows is a summary of the different methods and techniques used to overcome the problems faced in teleoperation in general. The different approaches will be presented in this section and section 1.5 will cover their limitations. The approaches are divided into two groups. One approach concentrates on the robotics aspect by
introducing control techniques and force/image feedback with prediction. The other approach concentrates on the communication aspect by using new communication protocols or by modifying the existing ones, by predicting delay, or by studying open loop multimedia communication. Note that no approach combined both aspects in the solution.

1.4.1 Robotics Approach

Most of the approaches in this category relate to control schemes and force/image feedback with prediction. Some of them are a combination of several approaches.

Control

The problem is handled by changes in the controller's modeling, design and algorithm. Several design techniques are used, some of which apply only to the specific system developed and others that are more general. In [46] [47] an observer for linear feedback control with delays is designed. [1] [27] use shared compliant control, where the control task is shared by both the human operator and the autonomous compliant control of the remote robot system. As for [36], time and position desynchronization was used, and in [2] the Virtual Internal Model (VIM) was proposed. [3] used a design method based on the H∞-optimal control and μ-synthesis framework. The work of Anderson and Spong in [4] and [32] introduced a controller derived from modeling the network as a lossless transmission line. Another method was based on stochastic control theory and a separation property [43].

Sensory Feedback

This approach includes force/torque feedback and video feedback. Any of these fed back elements can be either real or virtual. Therefore, in the virtual case, some of the techniques use prediction. The most common feedback in teleoperation is video, where cameras capture the task space and then these images are sent back to the operator [1] [27] [37] [38] [41].
The other sensory feedback is usually force or torque, where the force sensed by the robot is fed back to the operator [4] [5] [32] [44] [45]. Although these two techniques usually use real values, sometimes prediction is employed in parallel. For example, the position of the manipulator can be predicted using a simulation program and then displayed with the real image so that the operator can compare [38] [41] [42]. Predicted force can also be used, where the simulated value is sent to the master or just displayed on the monitor [42].

1.4.2 Communication Approach

The communication approaches can be categorized into three main groups: ones which alter the underlying network or communication protocol in order to satisfy certain performance conditions, ones which attempt to predict the delay or the performance of the network, and ones which are more related to open loop multimedia communication than to closed loop teleoperation. In the first category, approaches try mainly to replace the underlying communication protocol or at least modify it. These methods are difficult to implement and test, since the routing algorithm and communication protocol used by all the different components in the communication system would have to be modified. Note that none developed a method that does not require major system modification; for example, an application level solution, where the routine handling the delay problem runs at the application level. One of the approaches proposes using rate-based flow control, where the rate at which the user is allowed to transmit is determined by the switches on the virtual circuit between the source and destination [48] [49]. Another approach used a dynamic congestion control and error recovery algorithm over a heterogeneous Internet [50].
In the second category, many attempts are being made to predict the delay over the Internet specifically for teleoperation purposes. However, because of the complexity of the models, very high levels of accuracy are not attainable. Even highly accurate predictions are not acceptable if they do not guarantee 100% accuracy, because in teleoperation less than 100% stability is not acceptable. In addition, some of those approaches consider the delay to be the same in the forward and feedback communication branches, or simply predict the round trip delay, or do not scale or adapt to changes well [51] [52]. In the last category, the main work done is related to the synchronization of multimedia over the Internet [15]-[20]. This research considers open loop systems, where control does not exist. The main topics are the synchronization of different fed back streams such as video and audio. The methods developed in this category cannot be adapted as is because they do not consider control and stability issues.

1.5 Limitations of Existing Solutions

Most of the methods used share some disadvantages that will be discussed in this section. By far the most common limiting assumption is taking the delay as having an upper bound. An upper bound means taking a value for the delay beyond which the system would either become unstable or has to stop operating. For example, the analysis and implementation might assume that the delay is less than 3 seconds, and if the delay actually exceeds 3 seconds the system either becomes unstable or has to stop operating. Another limitation is not analyzing or testing with nondeterministic time delays. Also, some approaches assume that the delay in the communication loop is the same in both directions.
All of these assumptions are limiting when the Internet is involved, since no upper bound can be obtained because of the possibility of packet loss, and the delays are proven to be "random" and "non-deterministic" [39] [40]. Moreover, some methods are based on a specific model of the environment and/or the operator, which is a limiting factor considering the complexity of modeling a human and considering that teleoperation is usually in an environment unknown a priori. As for the predictive methods, they require significant computations and they do not function well with uncertainties in the environment. The proposed communication fixes are difficult to implement and test, since they require major changes in the existing systems.

1.6 Previous Internet Teleoperation and Their Limitations

This section introduces the previous work done specifically in Internet teleoperation and discusses its limitations. Several Internet based robots have been developed and studied; presented here are some examples that cover most of the categories. To clarify a misconception regarding Internet robots: many researchers consider Internet robots to be the ones which are web based, meaning the ones that are controlled via a web page. Many discussions have taken place (for example, at the IEEE International Conference on Robotics and Automation, Internet Robotics Workshop, 2000), and the general trend is to consider Internet robots as being the ones controlled over the Internet in any fashion and not only via a web page. Examining the literature, Internet-based robots can be divided into three categories according to the method used to send commands and receive haptic feedback, and according to the nature of feedback supplied. The categories are: teleprogrammed, telesimulated, and real real-time teleoperated. The first category, teleprogrammed Internet-based robots, requires the operator
to upload a plan or set of commands for the robot to execute. This uploaded program is executed by the robot either autonomously or semi-autonomously. This category can be further divided into "program and watch" robots and "program, watch and intervene" robots. Program and watch robots are ones that execute the program without allowing the operator to change the commands on the fly. Program, watch and intervene robots have the added option of the operator changing the program on the fly before the robot finishes executing it. Typically, teleprogrammed robotic systems include only visual feedback in the form of video. Some of the most popular robots in this category are:

- Tarn and Brady implemented a semi-autonomous telerobot, where the teleoperator supplies a trajectory for the robot to execute. Then the operator intervenes only in case of unexpected circumstances [53].

- Pai made an expensive experimental facility, the ACME, available via the Internet, where a user could conduct customized measurements and build accurate models from anywhere [54].

- Simmons introduced the autonomous mobile robot Xavier that can be commanded to go to different locations and do different tasks. The commands are high level commands and the only feedback is in the form of video [55].

The second category, telesimulated Internet-based robots, includes systems that feed forward commands in real-time but whose feedback is simulated. So the operator sends commands in real-time; however, the feedback supplied to the operator is simulated. This simulated or predicted feedback comes in one of two forms: either it is completely simulated based on a robot and environment model, or it is partially simulated based on a model and corrected by actual feedback. Generally, feedback is in the form of predicted display and/or predicted force.
- Chong developed a system where the operator controls a manipulator and is supplied with overlapping real and predicted display [56].

- Penin introduced a manipulator that can be teleoperated, where the operator is presented with predicted display and force [57].

- Several other systems exist that are based on predictive display and predictive force; however, most of these systems relate to space and underwater robots, which are controlled via a designated communication link and are not considered Internet-based robots.

The third category, real real-time teleoperated Internet-based robotic systems, feed forward commands in real-time and feed back real real-time sensory information. The feedback comes in several forms, the most typical of which are video and force.

- PumaPaint allows the operator to control a Puma 760 in order to make a painting. The control is in real-time and the feedback is only visual [58].

- KhepOnTheWeb is a mobile robot that can be controlled in real-time and from which video feedback can be obtained [59].

- WebPioneer is a system that allows the teleoperation of a Pioneer mobile robot via the web.

Other web robots have been introduced, and currently the NASA Space Telerobotics Program web site (http://ranier.oact.hq.nasa.gov/telerobotitspage/realrobots.html) lists more than 20 of them. These robots perform different tasks, ranging from manipulating objects and navigation to manipulating camera views. All these systems have several limitations and differ from the systems presented in this dissertation in several aspects. First and most important is that these systems do not have real-time feedback, meaning that the only feedback the operator has is visual, where either video or sensory information is sent back to be displayed.
This is why these systems can be considered open loop control systems, whereas in the systems discussed in this document, the loop is closed once haptic feedback is included, and therefore they are considered closed loop control systems. Other limitations are assumptions concerning time delay; no research was found dealing with real-time control with haptic feedback in the presence of nondeterministic time delay. It is worth noting that most of the research found in the literature and discussed in this dissertation for teleoperation under time delay relates to stability, with only a few works relating to transparency and none relating to synchronization.

1.7 Outline

Outlined here are the remaining parts of the dissertation:

1. Chapter 2 examines the Internet characteristics when viewed as a real-time control link. The Internet characteristics are studied and their effects on feedback systems are discussed. The "randomness" or "non-determinism" of time delay and the instability that it causes are presented.

2. Chapter 3 gives an introduction to event-based control, which uses an event-based and not a time reference. Then these systems are analyzed in terms of stability and in terms of event-transparency and event-synchronization, which are two concepts developed as part of this research.

3. Chapter 4 details the models of the different systems developed. It also proposes Petri Nets as a modeling, analysis and design tool for event-based teleoperation systems.

4. Chapter 5 includes the design procedures required to ensure stability, event-transparency and event-synchronization for teleoperation systems. A Petri Net design methodology resulting in event-synchronous systems is given.

5. Chapter 6 develops the concept of tele-coordination, where multiple robots are teleoperated by multiple operators at different remote locations. This chapter presents the design procedure required to achieve certain levels of coordination.
6. Chapter 7 provides the details of the implementation and experimentation of several systems developed. Details of the hardware and software used in the experiments are given. Also, the properties of the Internet connections used in these experiments are presented. In addition, a detailed discussion of the different results obtained is included.

7. Chapter 8 gives a brief summary and concluding remarks. It also includes recommendations for future work and research directions.

CHAPTER 2
THE INTERNET: REAL-TIME CONTROL MEDIA

During the past decade, two technologies, fed by the great advances in computation and networking, matured into giant interdisciplinary scientific fields. The Internet and robotics are two technologies that will greatly influence the human future. The Internet has already secured its place in our lives and proven its indispensable importance. Robotics is yet to achieve that; while major advances have been accomplished and many robotic applications have emerged, it is clear that robots are not yet all that many roboticists predict they will be. Nevertheless, this field is experiencing worldwide interest from scientists, researchers and industries. Given this potential, it is not surprising that many researchers are trying to merge these two phenomena and develop Internet-based robotic teleoperation. More interesting still is Internet-based bilateral teleoperation, where supermedia is fed back to the operator in order to increase efficiency and achieve telepresence. So, despite the many challenges it presents, the use of the Internet as a communication link for telerobotics is not surprising.
The Internet can connect any two machines anywhere in the world or in space, which is why it is a natural medium for real-time teleoperation. But once real-time control is considered, there are several conditions required to ensure the stability and robustness of the system. The problem is that the Internet does not satisfy many of these conditions. In the next section, the properties of the Internet from a real-time control perspective will be presented and their effects on the teleoperation system will be examined.

2.1 Characteristics of Internet Communication

First discussed are the services that the Internet provides for teleoperation. Considering the communication protocol TCP/IP, the Internet can provide a reliable communication link. Reliable means that the link does not lose packets nor rearrange them. Many protocols do not offer this, but when TCP/IP is used it implies that the packets are delivered in order and that no packets are lost (as long as no disconnection occurs). The use of TCP/IP is not limiting, since a similar result can be obtained by using any other communication protocol while having the application guarantee that no packets are lost or rearranged. What the Internet does not provide is a guarantee of service. In other words, the Internet does not provide a permanent communication link, since the connection may be lost at any time. Nor does it provide a guarantee of the quality of service: the Internet does not offer guarantees on communication parameters, such as bandwidth and delay. This lack of guarantees is a result of the Internet being designed and implemented as a general communication medium, where applications such as teleoperation were not considered. Also, the lack of guarantees is an intrinsic property of the Internet, which becomes clear when its topology is considered.

Figure 2.1: An illustration of the Internet topology.

Figure 2.1 depicts a typical Internet model, which consists of heterogeneous hardware and communication protocols. This model includes wired and wireless networks with different types of routers and communication links. So packets traveling between two hosts via the Internet can go through diverse types of routes offering completely different communication qualities. Even with the currently suggested changes in the Internet and the many advances in Internet protocols and topologies, it will always be a diverse network that does not offer assurances regarding the quality of service. So no matter how good the local area network is, once the communication is over the Internet the link is subject to all kinds of risks. Disconnection can occur at any time and indefinite random delay can be faced; there is no upper bound on the delay over the Internet because of the possibility of packet loss. Therefore, the inter-arrival times of packets cannot be estimated. For an example of this inter-arrival time, Figure 2.2 gives a graph of inter-arrival times of groups of packets on the August 1989 Bellcore ethernet trace [39] [40].

Figure 2.2: Inter-arrival times of groups of packets on the August 1989 Bellcore ethernet trace [39] [40] (vertical axis: inter-arrival time for 100 packets; horizontal axis: packet group number, 0 to 10000).

This figure gives an idea of the randomness of the delay over the Internet. Several studies have been made to model this delay and the traffic over networks. These studies have shown that network traffic often possesses long-range dependency (LRD) and can be modeled using fractal and multiplicative multifractal processes [39] [40] [60]-[63]. However, it is not possible to predict the delay based on these statistical models with 100% accuracy. These predictions are done with certain probabilities.
The accuracy provided is not sufficient for teleoperation applications, since it is not acceptable, for instance, to guarantee stability with a probability less than 1. Even delay prediction studies devised specifically for Internet-based teleoperation have their limitations in addition to the lack of accuracy. Some consider the delay to be the same in the forward and feedback communication branches, or simply predict the round trip delay, or do not scale or adapt to changes well [51] [52]. This existence and randomness of delay in Internet communication has significant effects on real-time control, and more so when supermedia feedback is involved.

2.2 Effects of Random Time Delay on Real-Time Control

The main effects of these communication characteristics are instability, loss of transparency and desynchronization. In 1966, the first work dealing with force feedback under time delay was done (Ferrell 1966). It was shown that delays on the order of a tenth of a second destabilize the teleoperator. It is worth noting that this is true for Internet teleoperation, where the delay faced ranges from 100 milliseconds to a few seconds, and for space and deep sea teleoperation, where the delay is more or less fixed. However, the non-determinism of delay over the Internet introduces additional difficulties. Therefore, teleoperation systems via the Internet will be unstable unless a fix is found. In addition, perfectly transparent teleoperation is not possible in practice, especially under delay conditions such as those encountered over the Internet [9]. As for desynchronization, it is the most intuitive effect of delay, since the operator would have no idea regarding the current robot state.
For example, with visual feedback, by the time the operator sees an image it would be an old image of the robot. Making decisions based on the old image would be risky and would cause the system to go to a completely different state. So while the operator thinks that the robot is in a certain location, it would have already moved to a new one, causing the two to be desynchronized in time and space. Also, if multiple supermedia streams are fed back, desynchronization between them will occur. This adds to the operator's confusion and increases the possibility of errors. These delay effects are pronounced in time-based controlled teleoperation systems; however, they are not present in the systems designed and analyzed in this dissertation. The reason for the immunity of the developed systems to delay is the use of event-based control. Event-based control for supermedia enhanced teleoperation systems is developed and analyzed in this dissertation. This control methodology results in stable, event-transparent and event-synchronized systems. The next chapter introduces event-based planning and control for teleoperation systems. The stability of such systems is also analyzed. In addition, the concepts of event-transparency and event-synchronization are developed and analyzed.

CHAPTER 3
EVENT-BASED PLANNING AND CONTROL FOR REAL-TIME TELEOPERATION

Event-based planning and control for real-time teleoperation systems is introduced and analyzed in this chapter. A comparison of this approach with the time-based control approach is made. The stability of event-based controlled systems under random delay conditions is analyzed. In addition, event-transparency is introduced and its implications for the performance of real-time control systems under random non-deterministic delay conditions are analyzed. Also, the concept of event-synchronization is developed.
3.1 Event-Based Real-Time Planning and Control

The instability, loss of transparency and desynchronization caused by time delay in time-based control systems are a result of using time as a reference by the different system components. In other words, the control and feedback signals are being sampled with respect to time. Therefore, intuitively, if a non-time based reference is used, these effects of delay would be eliminated or reduced significantly. In traditional control systems, the dynamics of the system are modeled by differential equations in which the reference is the time variable, t. In addition, the trajectory is usually a function of time. The general idea of non-time based control is to model the system and the trajectory as functions of a non-time based variable, which is called a motion reference or action reference. It is usually denoted by s and is also called the event-based action reference [64]-[66]. The planning and control of the time-based and the event-based schemes are shown in Figure 3.1.

Figure 3.1: The comparison between traditional time-based and event-based planning and control: (a) traditional control; (b) non-time based control, with an added Action Reference block.

The Action Reference block in Figure 3.1 acts like a clock for the system. It is a map from the output or state of the robotic system y to a scalar variable s. The robot's plan as well as the operator's commands can be described as functions of s. This reference is usually taken to be a physical output of the system such as distance or position. However, the event-based reference can also be a virtual one that does not correspond to any physical quantity. For example, it can be chosen to be the number of control cycles the system has performed.
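As a concrete illustration of a physical action reference, the map from output to s can be taken as cumulative distance traveled, which is non-decreasing in time by construction (a hypothetical sketch; the function name and planar-position interface are illustrative, not from the dissertation):

```python
import math

def distance_action_reference(positions):
    """Map a sequence of planar robot positions y = (x, y) to the scalar
    action reference s, taken here as cumulative distance traveled.

    Returns the value of s at each sample. Note that s stays constant while
    the robot pauses and still grows if it backtracks, so it never decreases,
    which is the property the stability analysis in this chapter relies on.
    """
    s_values = [0.0]
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        s_values.append(s_values[-1] + step)
    return s_values
```

Unlike a position coordinate, which can run backwards when the robot retraces its path, this reference only moves forward along the motion.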
Event-based planning and control, which was first introduced in [67], has been used in many studies and several robotic applications [65], and has been proven to be very efficient in dealing with uncertainties and delay. This planning and control method was extended to bilateral teleoperation applications in [6] [68]-[70]. These extensions introduced some modifications of the initial event-based theory in order for it to be applicable to real-time teleoperation systems. The first main consideration is the fact that in teleoperation there is no predefined path; the path is generated in real-time by the operator based on the feedback received. The other issue is that the operator has limited computational power, so the operation of the system under event-based control should be transparent to the operator. In addition, the control method has to be independent of the operator and environment models, and it should not require significant overhead.

Figure 3.2: Illustration of the buffering effect of delay in time-based planning and control, and the elimination of this effect in event-based planning and control.

Therefore, the event-based reference, s, has to be carefully chosen such that no overhead is required from the operator. In addition, this reference has to be designed in such a way as to eliminate the buffering effect of delay.
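The scale of this buffering effect can be put in rough numbers. In the sketch below, the 100 Hz command rate and 0.5 s one-way delay are hypothetical values chosen only for illustration:

```python
def in_flight_time_based(command_rate_hz, one_way_delay_s):
    """Under time-based control, commands are emitted at a fixed rate, so the
    number of stale commands buffered in the network grows with the delay:
    e.g. a 100 Hz stream over a 0.5 s one-way delay keeps about 50 commands
    in flight, all generated from outdated feedback."""
    return int(command_rate_hz * one_way_delay_s)

def in_flight_event_based():
    """Under the event-based handshake, a new command is only issued after
    the latest robot state arrives, so at most one command is in flight at
    any time, no matter how large or how random the delay is."""
    return 1
```

The point is the contrast: the time-based count grows linearly with the delay, while the event-based exchange pins it at one.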
This effect is shown in the time-based part of Figure 3.2, where it is clear that in the time-based approach the delay will cause many signal instances to be flowing within the network. So by the time a new command, V_{t+m}, that was generated as a result of a force, F_{t+k}, arrives at the destination, the state of the machine will have changed due to the commands that were already buffered in the network, V_{t+1}, V_{t+2}, ..., V_{t+m-1}. This implies that when the robot gets the command V_{t+m} at time t+m+n, it has already changed state and is being subjected to the force F_{t+n+m}; therefore, the command would be a wrong one, since it was a result of F_{t+k} and not F_{t+n+m}. This will cause instability, loss of transparency and desynchronization. In order to ensure that the right command is being received and executed, the buffering effect should be eliminated. The communication and control model should be similar to the one shown in the event-based part of Figure 3.2. As shown, the transmission of new commands and forces is not specified by time. A new command, V_{n+1}, will not be generated and sent until the most up-to-date state of the robot, F_n, is received. At the same time, a new force, F_{n+1}, will not be fed back until a new velocity, V_{n+1}, is received. This implies that no commands or forces accumulate within the network, and thus the buffering effect is eliminated. Therefore, the use of event-based planning and control provides the teleoperation system the capability to cope with uncertainties and delays in real-time, without the need to re-plan or re-synchronize [64]-[66].

3.2 Stability Analysis

Several methods have been used to analyze the stability of teleoperation systems. Scattering theory [4] [32] and wave variables [5] are among the main ones. The main concept used in their analysis is based on proving the passivity of the system. A passive system cannot create energy, and thus from a control point of view a passive system cannot go unstable [4].
The approach presented in this research is slightly different. The main difference is the use of event-based control. Concerning the stability of the system under event-based referencing, the following theorem was proven in [64]-[66]:

Theorem 1 If the original robot dynamic system (without remote human/autonomous controller) is asymptotically stable with time t as its action reference, and the new non-time action reference, s = H(y), is a (monotone increasing) non-decreasing function of time t, then the system is (asymptotically) stable with respect to the new action reference s.

The only requirement is that the robot is a stable system, which means that the original robot dynamic system (without remote human operator) is asymptotically stable with time, t, as its action reference. This allows the use of Theorem 1 and proves the (asymptotic) stability of the system with respect to the new action reference s, simply by proving that the new non-time action reference is a (monotone increasing) non-decreasing function of time, t. The advantage of this approach, which differentiates it from the other approaches in the literature, is that stability is proven independently of the specific human operator or the statistics of the time delay. It applies to a wide range of systems operating in unknown environments. Therefore, stability is proven for event-based controlled teleoperation systems regardless of the time delay and the specifics of the system components, as long as it is shown that the action reference is a non-decreasing function of time. As for transparency and synchronization, they are relative performance measures of teleoperation systems. The current technology does not allow for perfect transparency and synchronization in Internet-based teleoperation if time-based control is used. The main limitations, which are communication related, are bandwidth and time delay.
These limitations are augmented in the Internet since it does not offer any guarantees on bandwidth or delay, especially since the Internet in its current form was not designed and implemented with applications such as teleoperation in mind. Therefore, there is a need for teleoperation systems to offer consistent performance under the dynamic communication conditions encountered, which is the topic of the next section.

3.3 Event-Transparency Analysis

In the time domain, transparency is simply a measure of how closely the feel of interaction with the environment in teleoperation resembles the actual feel when the operator directly interacts with the environment. Formally, perfect time-transparency is satisfied under the condition

Z_t(t) = Z_e(t)    (3.1)

where Z_t(t) is the transmitted or "felt" impedance and Z_e(t) is the slave environment impedance. Impedance is defined as the ratio of force to velocity. Therefore,

Z_t(t) = F_h(t) / V_h(t),  Z_e(t) = F_e(t) / V_e(t)    (3.2)

where F_h and V_h are the force and velocity generated by the joystick respectively; similarly, F_e and V_e are the slave force and velocity respectively. It is intuitive that under time-based control perfect time-transparency in teleoperation is not possible because of delay [9]. However, in event-based control some type of consistency in the control can be achieved. This consistency is event-transparency and is defined as:

Definition 1 A control system is event-transparent if F_h(s)/V_h(s) = C F_e(s)/V_e(s) and F_e(s)_rd = F_e(s)_nd, where C is a constant, s is the action reference, and F_e(s)_rd and F_e(s)_nd are the supermedia sensed by the system under random delay and no delay conditions respectively.

It is clear that this generalized definition can apply to any control system, where the F's are the feedback and the V's are the control signals. In the special case of teleoperation systems, F is usually force or some type of supermedia and V is the operator's command.
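Definition 1 can be exercised numerically. In the toy sketch below (all names are assumptions of mine), the force log is keyed by the event index s rather than by arrival time, so a run with random per-event delays produces exactly the same log as an undelayed run, i.e. F_e(s)_rd = F_e(s)_nd:

```python
def run(delays):
    """Ten events s = 0..9; each force sample depends on the event index only."""
    force_log, t = {}, 0.0
    for s in range(10):
        t += delays[s]            # wall-clock time advances by the (random) delay
        force_log[s] = 2.0 * s    # F_e is indexed by s, so t never enters the log
    return force_log

no_delay = run([0.0] * 10)
random_delay = run([0.3, 1.2, 0.1, 2.5, 0.7, 0.2, 1.9, 0.4, 0.8, 1.1])
assert no_delay == random_delay   # F_e(s)_rd == F_e(s)_nd
```

Had the log been keyed by arrival time t instead, the two records would differ, which is precisely the time-based situation.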
To design event-transparent systems, first design the system to be an event-based control system; that is, the signals in the system are sampled uniformly with respect to a reference, s, other than time. In order to ensure stability of the teleoperation system, this action reference has to be a non-decreasing function of time [6]. Being an event-based control system gives

F_h(s) = α F_e(s),  V_h(s) = β V_e(s)  =>  F_h(s)/V_h(s) = (α/β) F_e(s)/V_e(s) = C F_e(s)/V_e(s)    (3.3)

where α, β and C are simply scaling factors. In addition, s has to ensure that F_e(s)_rd = F_e(s)_nd. The choice of s depends on the type of feedback being used. From the definition, it follows that if a system is event-transparent then, regardless of random time delay, the control received under random time delay as a function of the action reference, V_e(s)_rd, is equal to the control received by the robot under no time delay as a function of the action reference, V_e(s)_nd. This is deduced from the fact that the choice of s ensured that the haptics detected by the robot under time delay as a function of the event, F_e(s)_rd, are equal to the haptics detected by the robot under no time delay as a function of the event, F_e(s)_nd. This means that if the event-based reference is chosen in such a way as to make the forces equal in both cases, then the control will also be equal. Thus the behavior of the operator will be consistent with respect to the event-based reference and the system will be event-transparent. Therefore, the delay will have no effect on the control signal with respect to the event-based action reference. Obviously, in the time-based case this property does not hold. To illustrate the implications of event-transparency and to compare it with time-transparency, Figure 3.3 shows the sampling of the different signals under different conditions.

Figure 3.3: Sampling of signals under different conditions (the force detected F_e, the force rendered F_h, the control generated V_h, and the control received V_e, each plotted with respect to time and with respect to the event reference, for time-based and event-based control under no delay and under random delay).

In the first column are the signals in the time-based control case and in the second column are the signals in the event-based control case, both under no delay conditions. In the third column are the signals in the time-based control case and in the fourth column are the signals in the event-based control case, both under random delay conditions. The first and second rows are the force detected or sensed by the robot, F_e, with respect to the time and event references respectively. The third and fourth rows are the force rendered by the joystick, or applied on the operator, F_h, with respect to the time and event references respectively. The fifth and sixth rows are the operator's control or commands generated, V_h, with respect to the time and event references respectively. The seventh and eighth rows are the control received by the robot, V_e, with respect to the time and event references respectively. It is clear in the first column that the sampling is uniform with respect to time for all the signals; although not uniform with respect to the event-based reference, it is the same for all the signals. The opposite is true for the second column: the sampling is uniform with respect to the event-based reference and non-uniform, but the same, with respect to time.
This uniformity in the sampling of the transmitted and received signals (F_e and F_h, V_h and V_e), and in the sampling of the received feedback, F_h, and the generated control, V_h, ensures that the control being received by the robot corresponds to the feedback generated at the same reference (time- or event-based). This property translates into stability, transparency and synchronization for the system under no delay conditions. However, in the random delay case, time-based control loses this uniformity in the sampling, as seen in the third column, because the random delay causes non-uniformity in the arrival of the signals (F_h(t) and V_e(t)) with respect to time. This implies that the control received by the robot, V_e(t), does not correspond to the feedback generated, F_e(t), at the same time instant. Therefore, desynchronization occurs, causing instability and the loss of transparency. On the other hand, event-based control overcomes this problem by sampling the signals with respect to a reference or parameter other than time. As column four illustrates, F_h and V_h with respect to time and event are sampled at the same reference values. Also, examining time-based control with delay, shown in column three, it is clear that not all the signals are uniformly sampled with respect to any one reference, whereas in event-based control with delay, shown in column four, all the signals are uniformly sampled with respect to the event-based reference. This uniformity in sampling results in F_h(t) and V_h(t) being sampled at the same time instances, as seen in column four, which results in the consistency of control under random delay conditions.
This consistency can be formally stated with the following theorem.

Theorem 2 If the event-based action reference, s, of an event-based control system is chosen such that F_e(s)_rd = F_e(s)_nd, then V_e(s)_rd = V_e(s)_nd, where F_e(s)_rd and F_e(s)_nd are the supermedia sensed by the system under random delay and no delay conditions respectively, and where V_e(s)_rd and V_e(s)_nd are the control received under random time delay and no time delay conditions respectively.

Proof. First it will be shown that F_h(t) and V_h(t) are being sampled at the same time instances. To prove this, F_h and V_h with respect to the time and event references are analyzed for event-based control and time-based control under time delay conditions. In event-based control, assume that the deviations of the nonuniform samples from a set of uniform samples are represented by δ_{1n} and δ_{2n}, and the event sampling periods are S_1 and S_2. So the samples are F_h(nKS_1 - δ_{1n}) and V_h(nKS_2 - δ_{2n}), where K is a scaling factor. In order to determine the relationship between the sampling instances of the signals F_h(t) and V_h(t), the signals θ_1(s) and θ_2(s) are formed [71] [72]:

θ_1(s) = Σ_{n=-∞}^{∞} δ_{1n} sin ω(s - nKS_1) / (ω(s - nKS_1))    (3.4)

θ_2(s) = Σ_{n=-∞}^{∞} δ_{2n} sin ω(s - nKS_2) / (ω(s - nKS_2))    (3.5)

Sample values of θ_1(s) and θ_2(s) are δ_{1n} and δ_{2n}:

θ_1(nKS_1) = δ_{1n} and θ_2(nKS_2) = δ_{2n}    (3.6)

Let

t_1 = s - θ_1(s)    (3.7)

t_2 = s - θ_2(s)    (3.8)

It follows that

if s = nKS_1, then t_1 = nKS_1 - θ_1(nKS_1) = nKS_1 - δ_{1n}    (3.9)

if s = nKS_2, then t_2 = nKS_2 - θ_2(nKS_2) = nKS_2 - δ_{2n}    (3.10)

Therefore, the transformations in eq.(3.7) and eq.(3.8) transform the points nKS_1 and nKS_2 of the s-axis into the points nKS_1 - δ_{1n} and nKS_2 - δ_{2n} of the t-axis. Considering that the sampling with respect to the event, s, has the same period for all the signals, S_1 = S_2. Assuming that V_h is generated instantaneously once F_h is rendered, δ_{1n} = δ_{2n}. Then nKS_1 - δ_{1n} = nKS_2 - δ_{2n}, and the sampling instances of the signals F_h(t) and V_h(t) are identical.
A similar analysis can be made to show that the sampling instances of the signals F_e(t) and V_e(t) are identical. For the time-based control case, since the time and event sampling periods are not the same for either F_h(t) and V_h(t) or F_e(t) and V_e(t), similar conclusions cannot be made. This conclusion implies that, in the event-based control case, if the system is event-transparent then

F_e(s)_rd = F_e(s)_nd  =>  F_h(s)_rd = F_h(s)_nd    (3.11)

Assuming that the operator's reaction is consistent based on the force felt, eq.(3.11) gives

V_h(s)_rd = V_h(s)_nd  =>  V_e(s)_rd = V_e(s)_nd    (3.12)

Thus the system's control commands are the same regardless of random delay, as stated previously, implying that its performance with respect to the event-based action reference is transparent to delay and its randomness. Since the event-based action reference can depend on any system output which is a non-decreasing function of time [6], this property implies that consistent system behavior with respect to a certain system output can be achieved. However, special care should be taken in designing the event-based action reference: it has to ensure that F_e(s) is the same regardless of time delay. For example, F_e(s) can be a function of the distance the robot moved; therefore, the event-based reference, s, can be chosen to be the distance traveled. The relationship between event-transparency and time-transparency for event-based control systems can be described by the following theorem:

Theorem 3 An event-transparent system under no delay conditions preserves time-transparency.

Proof. To start, the signals F_e(t) and F_h(t) are examined under no delay conditions. Assume that the deviations of the nonuniform samples of these signals from uniform samples are δ_n. These deviations are a function of the system output and thus of the commands, V_e(s). In the general case, δ_n will also be a function of time delay.
However, this analysis is done for the no delay case; therefore, the δ_n are only a function of the system output and they are the same for both signals. Assuming that the sampling period with respect to the event-based reference is S, define the function

θ(s) = Σ_{n=-∞}^{∞} δ_n sin ω_1(s - nKS) / (ω_1(s - nKS))    (3.13)

Sample values of θ(s) are δ_n:

θ(nKS) = δ_n    (3.14)

The transformation between the s-axis and the t-axis is

t = s - θ(s)    (3.15)

It follows that

if s = nKS, then t = nKS - θ(nKS) = nKS - δ_n    (3.16)

Since for all m and n, s_m < s_n => t_m < t_n, and since two events cannot occur at the same time instant, eq.(3.15) is a strictly increasing function of s and its inverse exists:

s = γ(t)    (3.17)

Define the functions

g_h(s) ≡ F_h[s - θ(s)] = F_h(t)    (3.18)

g_e(s) ≡ F_e[s - θ(s)] = F_e(t)    (3.19)

=> g_h(nKS) = F_h(nKS - δ_n)    (3.20)

and g_e(nKS) = F_e(nKS - δ_n)    (3.21)

Hence, g_h(s) and g_e(s) are known at a sequence of equidistant points s = nKS. If these signals are band-limited by ω, then S can be chosen such that Shannon's sampling condition is satisfied,

ω_1 = π/(KS) ≥ ω    (3.22)

and then the following equations will hold:

g_h(s) = Σ_{n=-∞}^{∞} F_h(nKS - δ_n) sin ω_1(s - nKS) / (ω_1(s - nKS))    (3.23)

g_e(s) = Σ_{n=-∞}^{∞} F_e(nKS - δ_n) sin ω_1(s - nKS) / (ω_1(s - nKS))    (3.24)

and therefore,

F_h(t) = g_h[γ(t)] = Σ_{n=-∞}^{∞} F_h(nKS - δ_n) sin ω_1(γ(t) - nKS) / (ω_1(γ(t) - nKS))    (3.25)

F_e(t) = g_e[γ(t)] = Σ_{n=-∞}^{∞} F_e(nKS - δ_n) sin ω_1(γ(t) - nKS) / (ω_1(γ(t) - nKS))    (3.26)

From eq.(3.3) it follows that

F_h(nKS - δ_n) = α F_e(nKS - δ_n)    (3.27)

which implies that for the no delay case, an event-based control system results in

F_h(t) = α F_e(t)    (3.28)

Similarly,

V_h(t) = β V_e(t)    (3.29)

which results in

Z_t(t) = C Z_e(t)    (3.30)

Therefore, an event-transparent system under no delay conditions preserves time-transparency.

However, this is not true for the random delay case, because the δ_n would be a function of delay. Thus eq.(3.27) would not hold, because F_h and F_e are not deformed similarly.
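The interpolation formulas used in the proof reduce, for δ_n = 0, to the classical Shannon reconstruction. The sketch below (uniform samples; the names and the tolerance are my own choices) shows a band-limited signal being recovered between its sampling points, which is the mechanism that carries the per-sample equality (3.27) over to all t in (3.28):

```python
import math

def sinc(x):
    """Normalized interpolation kernel sin(pi*x)/(pi*x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, period, t):
    """Shannon interpolation from uniform samples f(n*period)."""
    return sum(f_n * sinc((t - n * period) / period)
               for n, f_n in enumerate(samples))

f0, period = 1.0, 0.1   # 1 Hz sinusoid sampled at 10 Hz, well above Nyquist
samples = [math.sin(2 * math.pi * f0 * n * period) for n in range(1000)]
t = 50.03               # an instant strictly between sampling points
approx = reconstruct(samples, period, t)
exact = math.sin(2 * math.pi * f0 * t)
assert abs(approx - exact) < 1e-2
```

Because the same kernel reconstructs both F_h and F_e from their samples, equality of the samples, F_h(nKS - δ_n) = αF_e(nKS - δ_n), forces equality of the interpolated signals everywhere, giving eq.(3.28).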
Note that all this analysis is done independently of what F and V are, which implies that it applies to any kind of feedback and control commands used. So event-based control systems that are event-transparent are able to preserve time-transparency under no delay and are consistent in their control. On the other hand, time-controlled systems are not able to ensure any consistency when faced with random delay.

3.4 Event-Synchronization Analysis

As for synchronization, research found in the literature has mainly focused on time synchronization in open loop systems [15]-[20]. However, the control systems presented in this research are closed loop and event-based. Therefore, there is a need to define synchronization in this context. Event-synchronization is defined as:

Definition 2 An event-synchronized system is one in which all the signals (control and feedback) in the system are always referencing the same event-based reference.

This definition is similar to the definition of time synchronization, but instead of having time as a reference, the event-based action reference is used. An important difference is that this definition also includes the control signal, which means that the control has to be synchronized with the feedback as well. This implies that an event-synchronized system ensures that the feedback sensed is a reflection of the system's most current state. From the definition it follows that systems have to be designed in such a way that all the supermedia streams being rendered have the same action reference stamps. However, this requirement might not be feasible, just as perfect time-synchronization of video and audio transmitted over the Internet is not. Therefore, in some cases, such as force and video synchronization, a certain tolerance should be permitted. Typically, a master supermedia stream would be chosen and synchronization would be carried out with respect to it.
For example, with force and video, force can be the master stream since it is more informative and requires less bandwidth. In this case a video frame will be displayed if its event-based stamp is within a certain predefined tolerance range of the force sample being rendered; otherwise, it is discarded. In addition, event-synchronized systems have to ensure that the control signal has the same action reference stamp as the feedback being rendered. Examining the event-based case in Figure 3.2, it is clear that the update of signals in both directions is not triggered by time. Also, since the buffering effect of delay is eliminated, the haptic force felt by the operator is the most up-to-date one; there could not have been any change in the system state in the meantime, because there are no commands flowing in the network. The same applies to the command received by the robot: it is the most up-to-date one, since there were no new haptics flowing in the network that could have generated new commands. This implies that the operator and the robot are always synchronized in event, regardless of time delay and its variation. In the case of multiple remote operators and/or multiple robots, event-synchronization also ensures that the operators are synchronized with each other and with the different robots. A fundamental difference between time-based synchronization and event-synchronization is that the latter results in content- or state-based synchronization. When synchronization is done based on the event stamp, which is a reflection of the state of the remote robot, the supermedia streams being rendered were sensed at relatively close system states. Time-based synchronization, in contrast, results in synchronization with respect to time regardless of the system state. To compare, consider the case of a slowly operating system.
In the time-based case, if a frame is slightly "old" it will be discarded although the state of the system did not change significantly; in the event-based case, however, the frame will still be displayed, since the event or state of the system did not change considerably. On the other hand, in the case of fast operation and movement, a relatively "new" frame will be rendered in the time-based case although the state of the system has changed drastically. The same frame will be discarded in the event-based case, since what it shows reflects an "old" state of the system because of the quick changes the system is going through. Therefore, event-synchronization is done based on the content or information being rendered, whereas time-based synchronization is done based on when, and not what, is sensed. The next chapter covers the modeling of event-based real-time teleoperation systems. The dynamic models of several general systems are given. In addition, Petri Nets are proposed as a modeling and analysis tool for teleoperation systems. At the end, a comparison with a popular modeling and analysis approach is included.

CHAPTER 4

MODELING OF EVENT-BASED REAL-TIME TELEOPERATION SYSTEMS

This chapter details two modeling techniques used for event-based real-time teleoperation systems. Two cases of the dynamic model are presented: the first is of a general teleoperation system with a single operator and a single robot; the second is of a general tele-cooperation system with multiple operators at multiple sites controlling multiple robots. These two cases are also modeled using a newly proposed modeling technique for event-based real-time teleoperation systems. This technique uses Petri Nets as a modeling tool because of their many attractive features, which will be discussed. In addition, a comparison is made between the models suggested in this research and other models used in the literature, concentrating on the one presented in [4].
4.1 Modeling and Analysis

In this section the dynamic models of two general event-based real-time teleoperation systems are given. The first is a model of a teleoperation system with a single operator and a single robot. The second is a model of a general tele-cooperation system with multiple operators at multiple sites controlling multiple robots.

4.1.1 Teleoperation System: Single Operator and Single Robot

The detailed model of a general teleoperation system with force feedback is discussed in this section. Figure 4.1 gives a general block diagram of a traditional teleoperation system, and Figure 4.2 presents a general schematic diagram of some of the systems developed in this research. The modeling of the blocks in Figure 4.1 and Figure 4.2 will be discussed in detail, and each term is explained in Table 4.1.

Figure 4.2: Detailed block diagram of some of the event-based teleoperation systems developed.

Table 4.1: The explanation of the various variables in the teleoperation systems.

As Figure 4.1 shows, the general teleoperation system consists of a human operator, a master (usually a joystick), a communication network, a slave (usually a robot), and the environment. The same applies to the systems developed as part of this research, as shown in Figure 4.2, but with some modifications, mostly in the connections. In what follows, each block in the system will be modeled, giving the details of each variable and the equations that relate them. The case considered is velocity control using a 3 degrees of freedom joystick. Note that the dynamic equations are in terms of the event-based reference, s, and that the robot is being controlled with 3 degrees of freedom, which makes the variables 3-term vectors.
Human Operator: This is the most difficult block to model, but a spring-like behavior, which was shown in [73] and used in several places in the literature [74], can be assumed. So, despite all the internal complexities, the wrist has a spring-like effect. In addition, the human compensates for certain machine instabilities, making the coupled human-machine system stable [75]-[78]. As force is felt, the operator will generate a new joystick position according to the following:

X_m(s + 1) = F_h(s) / K_h    (4.1)

where K_h is a constant and s is the event-based reference, s ∈ {1, 2, ...}. X_m and F_h are

X_m(s) = [X_mx(s), X_my(s), X_mθ(s)]^T,  F_h(s) = [F_hx(s), F_hy(s), F_hθ(s)]^T    (4.2)

As Table 4.1 indicates, F_h(s) is the applied force, which means the force that the operator feels. Thus, the x and y components are due to the force fed back and to the additional force required to get the joystick to the new location. Since force is not fed back in the θ direction, this component is just a result of getting the joystick to the next location. As seen in eq.(4.1), X_m(s + 1) is related to F_h(s), so X_m at reference s + 1 is generated by the previous force at reference s. This results in an event-based system where each event is triggered by the previous one.

Master (Joystick): The dynamics of the joystick are

M_m V̇_vm(s) = F_h(s) + τ_m(s)    (4.3)

where M_m is the mass of the joystick handle, V_vm is the velocity of the joystick movement, F_h is as described before, and τ_m is the feedback from the robot, which is the force rendered by the joystick. The result of these dynamics is the joystick reaching a new position X_m(s + 1). From this position the desired velocity V_m is derived according to

V_m(s) = K_m X_m(s)    (4.4)

where K_m is a scaling constant, X_m(s) is as before, and V_m(s) is the desired velocity of the robot.
The different vectors are

V_m(s) = [V_mx(s), V_my(s), V_mθ(s)]^T,  τ_m(s) = [τ_mx(s), τ_my(s), 0]^T    (4.5)

Eq.(4.5) shows that τ_mθ(s) = 0, since force is not fed back in the rotational direction.

Communication Block (Internet): As a result of event-based control, the communication link is simply a delay element that plays no role in the modeling of the system. It is assumed that the Internet simply supplies the link, and in case the connection is lost, the system will simply stop, awaiting the connection. Since the advance of time does not affect the system, and only the advance of the event-based reference s does, when the connection is lost the system will remain stable and will resume action after the connection is re-established. This makes the system very robust, since no initialization or resynchronization is required.

Environment: Interaction with the environment can be modeled in two ways, depending on the system being considered. The first case involves the control of a mobile robot, and the second case involves either a mobile manipulator, a manipulator or a micro controller. In the first case, the force is a virtual force based on the tracking error of the robot, whereas in the second case the force corresponds to actual physical force detected. For the first case, contact is not required to generate force. Instead, different sensors around the robot are used to detect obstacles. Based on the distance between the obstacles and the robot, the velocity is reduced, thereby creating a tracking error, which is considered as a virtual force. This reduction in the velocity is done according to a function of the distance, f(d), which gives a velocity V_in(s) to subtract from the desired velocity V_m(s). So the velocity set becomes

V_s(s) = V_m(s) - V_in(s)    (4.6)

where V_s(s) is the velocity set to the robot, V_in(s) is the effect of the environment, and V_m(s) is as before.
Clearly, the desired velocity that the robot receives is less than the actual one; this slowing down is used to generate the virtual force to be fed back. Note that no force is generated in the rotational direction, which is why V_inθ(s) = 0 and V_sθ(s) = V_mθ(s), as seen in eq.(4.7):

V_in(s) = [V_inx(s), V_iny(s), 0]^T,  V_s(s) = [V_sx(s), V_sy(s), V_mθ(s)]^T    (4.7)

For the second case, involving either a mobile manipulator, a manipulator or a micro controller, the force fed back corresponds to the physical force detected by the force/torque sensor. So the force is generated when actual contact is established with the environment.

Slave (Mobile robot): The first case of a slave is the mobile robot. The robot receives V_s(s); it will be commanded to move with that velocity but will actually have V_a(s) as its velocity. The robot will then calculate the velocity tracking error with respect to the original desired velocity V_m(s) and send that back to the master to be rendered as the virtual force due to the environment. So τ_m(s) and the dynamics of the robot are

τ_m(s) = K_s (V_m(s) - V_a(s))    (4.8)

M_s V̇_a(s) = F_e(s) + τ_s(s)    (4.9)

τ_s(s) = -γ V_a(s) + K V_vs(s) - α F_e(s),  V_vs(s) = V_s(s) - V_a(s)    (4.10)

V_a(s) = [V_ax(s), V_ay(s), V_mθ(s)]^T    (4.11)

where K_s, γ, K and α are constants, M_s is the mass of the robot, and F_e is the actual environment force, if any, which is usually assumed very small. τ_s and V_vs are robot internal controller terms. Eq.(4.8) shows that what the operator is feeling is actually the velocity tracking error, which could be a result of the robot getting close to an object. This implies that, from the direction and magnitude of the force, the operator feels the location and distance of the obstacle relative to the robot. The important point to note is that the operator will still feel force whenever the robot comes in contact with an object: actual contact that results in a decrease in V_a(s) will result in a proportional increase in τ_m(s).
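The virtual-force loop of eqs.(4.6) and (4.8) can be sketched in a few lines. Everything below is an illustrative assumption of mine, including the particular linear f(d) and the shorthand of taking the achieved velocity V_a equal to the set velocity V_s:

```python
def f_of_d(d, d_max=1.0):
    """Assumed velocity scaling: 1 far from obstacles, 0 at contact."""
    return max(0.0, min(1.0, d / d_max))

def velocity_set(v_m, d):
    """Eq.(4.6)-style reduction: V_s = V_m - V_in, with V_in = (1 - f(d)) * V_m."""
    return v_m - (1.0 - f_of_d(d)) * v_m

def virtual_force(v_m, v_a, k_s=2.0):
    """Eq.(4.8)-style feedback: tau_m = K_s * (V_m - V_a)."""
    return k_s * (v_m - v_a)

v_m = 0.5                         # desired forward velocity
v_s = velocity_set(v_m, d=0.25)   # an obstacle 0.25 m away slows the robot
tau = virtual_force(v_m, v_s)     # tracking error rendered as force
assert v_s < v_m and tau > 0.0    # a nearby obstacle produces a felt force
```

As the obstacle distance d shrinks, f(d) falls, the tracking error grows, and the operator feels a proportionally larger τ_m, matching the behavior described above.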
Also, V_aθ(s) = V_mθ(s), since τ_mθ = 0.

Slave (Manipulator): The second case of a slave is a manipulator arm. The dynamic model for the robot arm can be written as D(q)...

Figure 4.3: Detailed block diagram of a general multi-operator multi-robot tele-collaborative control system.

Table 4.2: Explanation of the various variables in Figure 4.3.

...that the operators feel. As seen in eq.(4.16), X_m(s + 1) and X_p(s + 1) are related to F_p(s) and F_m(s). So X_m(s + 1) and X_p(s + 1) at reference s + 1 are generated by the previous force at event s, as discussed previously.

Figure 4.4: Block diagram of the multi-operator mobile manipulator teleoperation system implemented.

Table 4.3: Explanation of the various variables in Figure 4.4 (F_m, F_p ∈ R^3: applied force; X_m, X_p ∈ R^3: joystick position; V_m, V_p ∈ R^3: desired velocity; V_a ∈ R^3: actual velocity; V_in ∈ R^3: virtual contact; V_s ∈ R^3: velocity set).

Master (Joystick): The dynamics of the joysticks are

M_m V̇_vm(s) = F_p(s) + F_Vm(s)    (4.19)

M_p V̇_vp(s) = F_m(s) + F_Vp(s)    (4.20)

F_Vm(s) = C_m V_p(s)    (4.21)

F_Vp(s) = C_p V_m(s)    (4.22)

where M_m and M_p are the masses of the joysticks' handles, V_vm and V_vp are the velocities of the joysticks' movement, and F_m and F_p are as described earlier. F_Vm and F_Vp are the forces played by the joysticks, which are simply the velocities V_p and V_m fed back from the robots, scaled by the constants C_m and C_p respectively. This implies that the operators feel each other's intended velocities, so that they can collaborate more efficiently.
The result of these dynamics is that the joysticks move to new positions X_m(s + 1) and X_p(s + 1). From these positions the desired velocities V_m and V_p are derived according to

V_m(s) = K_m1 X_m(s),  V_p(s) = K_p1 X_p(s)    (4.23)

where K_m1 and K_p1 are scaling constants, and X_m(s) and X_p(s) are as before. V_m(s) and V_p(s) are the desired velocities of the mobile robot and the PUMA respectively. The velocity vectors are

V_m(s) = [V_mx(s), V_my(s), V_mθ(s)]^T,  V_p(s) = [V_px(s), V_py(s), V_pθ(s)]^T    (4.24)

Communication Block (Internet): The same discussion that was given in the previous section applies here.

Environment: Sensors on the mobile robot are used to detect objects. Based on the distance between the object and the robot, the velocity is reduced. This is calculated according to a function of the distance from the object, 0 ≤ f(d) ≤ 1, which is multiplied by V_m(s) to give a velocity value V_in(s). V_in(s) is subtracted from the desired velocity V_m(s) to give the velocity set for the robot, V_s(s):

V_s(s) = V_m(s) - V_in(s)    (4.25)

As a result, the robot gets a velocity from the server that is less than the one desired by the operator:

V_in(s) = [V_inx(s), V_iny(s), 0]^T,  V_s(s) = [V_sx(s), V_sy(s), V_mθ(s)]^T    (4.26)

Slave 1 (Mobile robot): The dynamic equations modeling the mobile robot were presented in the previous section.

Slave 2 (PUMA manipulator): The dynamic equations modeling the manipulator were presented in the previous section.

4.2 Petri Net Model

One of the difficulties of designing and implementing real-time teleoperation systems is the lack of a general modeling technique. There is a need for a model that can capture the concurrence, non-determinism and logical behavior of such systems. This model should be easily analyzed to study the underlying system performance. Properties such as stability and synchronization cannot be easily studied using the current models.
For this purpose, Petri Nets are suggested as an efficient way to model the concurrence and complexity of Internet-based teleoperation systems [83]-[90]. Petri Nets have several attractive features, which stem from their ability to describe and study systems that are concurrent, asynchronous, distributed, parallel, non-deterministic and/or stochastic. Thus, they are an adequate and efficient tool for studying Internet-based telerobotic systems. Although Petri Nets have been used before for modeling and studying robotic systems, that use was limited to manufacturing and scheduling problems [84]. In addition, based on this modeling technique, different properties of the system, some of which can be linked to physical properties, can be studied using Petri Net analysis tools. Another attractive feature of Petri Nets is the ability to set up state equations, algebraic equations and other mathematical models governing the behavior of the system [83].

Figure 4.5: Petri Net model of the system shown in Figure 4.2.

To take advantage of these features, Petri Nets are used to model event-based teleoperation systems. The general teleoperation system shown in Figure 4.2 is modeled in Figure 4.5 using a Petri Net, which is a directed graph that has an initial state (marking) [83]. As seen, this graph consists of two kinds of nodes: places and transitions, where arcs run either from a place to a transition or vice versa. Places represent conditions and system state, such as being "ready to send". Transitions represent events and actions, such as "play force".
Tokens, which are represented by dots in places, correspond to signals flowing in the system, and their collection reflects the state of the system. To capture the dynamic behavior of the system, the state is changed according to the following transition (firing) rule [83]:

- A transition, such as "send velocity", is enabled if each input place, in this case "ready to send", has a token in it.
- An enabled transition may or may not fire (depending on whether or not the event actually takes place).
- A firing of an enabled transition, such as "send force", removes a token from each input place, in this case "velocity executed", and adds a token to each output place, in this case "buffer full" and "wait for velocity".

This description clearly implies that Petri Nets are an adequate tool to model event-based systems, such as the one presented in [70], since in event-based systems each firing enables the next one and no event can occur out of order. As for the dynamics of the different system components, they are represented by transitions. For example, "play force" in the mobile client represents M_m V̇_mm(t) = F_p(t) + F_vm(t).

Another strength of Petri Nets is their support for the analysis of many important properties of the systems they model. These properties are divided into properties that depend on the initial marking (behavioral properties) and properties that are independent of the initial marking (structural properties) [83][85]. In this study the interest is in the following behavioral properties:

Boundedness: The model is k-bounded, or simply bounded, if the number of tokens in each place does not exceed a finite number k for any marking reachable from the initial marking. It is said to be safe if it is 1-bounded [83]. This implies that there is no accumulation of tokens in places at any time, which in turn implies that the system is stable [87].
Liveness: The model is said to be live if, no matter what marking has been reached from the initial state, it is possible to ultimately fire any transition of the net by progressing through some further firing sequence. This property guarantees deadlock-free operation [83]. It implies that, despite the non-determinism in the system, there is no case that would cause the system to stop operating normally.

Several analysis methods exist, which may be classified into three groups: the coverability (reachability) tree method, the matrix-equation approach, and reduction or decomposition techniques [83]. The first method involves the enumeration of all reachable markings or their coverable markings. It can be applied to all classes of nets, but is limited to small nets due to state-space explosion. The other two techniques are powerful, but they can only be applied to special subclasses of Petri Nets.

Figure 4.6: The coverability tree of the model shown in Figure 4.5.

The general teleoperation system coverability tree is shown in Figure 4.6. From this tree it can be deduced that the system is bounded and live. Boundedness is deduced from the fact that all the nodes in the tree contain only ones and zeros, which implies that the system is not only bounded but also safe [83]; in other words, the number of tokens in any place never exceeds one. In addition, the coverability tree conveys that the system is deadlock-free, since all the markings in the tree have enabled transitions [83]. So the system is both bounded (safe) and live. Since the system is safe, each place in the system can hold at most one token at any point in time.
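The firing rule and the coverability-tree checks above can be sketched together in code. This is an illustrative sketch only: the three-place cycle below is a hypothetical stand-in for the net of Figure 4.5, small enough to verify by hand, and "liveness" is checked here only in the weaker sense of deadlock-freedom used in the text.

```python
# Minimal Petri Net with the quoted firing rule, plus a breadth-first
# enumeration of reachable markings (for a bounded net the coverability
# tree reduces to the reachability graph).  The net is hypothetical.

from collections import deque

TRANSITIONS = {                        # name -> (input places, output places)
    "send velocity":    (("ready to send",),  ("wait for force",)),
    "execute velocity": (("wait for force",), ("force sensed",)),
    "play force":       (("force sensed",),   ("ready to send",)),
}
M0 = {"ready to send": 1, "wait for force": 0, "force sensed": 0}

def enabled(m, t):
    """A transition is enabled iff every input place holds a token."""
    return all(m[p] >= 1 for p in TRANSITIONS[t][0])

def fire(m, t):
    """Firing moves one token from each input place to each output place."""
    m2 = dict(m)
    for p in TRANSITIONS[t][0]:
        m2[p] -= 1
    for p in TRANSITIONS[t][1]:
        m2[p] += 1
    return m2

def reachable(m0):
    seen, queue, out = {tuple(sorted(m0.items()))}, deque([m0]), [m0]
    while queue:
        m = queue.popleft()
        for t in TRANSITIONS:
            if enabled(m, t):
                m2 = fire(m, t)
                key = tuple(sorted(m2.items()))
                if key not in seen:
                    seen.add(key)
                    queue.append(m2)
                    out.append(m2)
    return out

markings = reachable(M0)
safe = all(v <= 1 for m in markings for v in m.values())
deadlock_free = all(any(enabled(m, t) for t in TRANSITIONS) for m in markings)
print(len(markings), safe, deadlock_free)   # 3 True True
```

Reading the reachability set directly gives the same conclusions drawn from Figure 4.6: every marking contains only zeros and ones (safe), and every marking enables some transition (no deadlock).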
Figure 4.7: Petri Net model of the system shown in Figure 4.4.

Figure 4.8: Reduction rules used to transform the model into a simpler one: (a) fusion of series places, (b) fusion of series transitions, (c) fusion of parallel places, (d) elimination of self-loop places.

And since the passage of a token into a place represents the advance of the event-based reference, s, for that place, and since it is clear from the model that a place will not receive another token until all the other places have received a token, the system is event-synchronized. This will be formally stated, and a detailed design procedure given, in the next chapter.

It is worth noting that in systems where delays are on the order of seconds, synchronization becomes as important an issue as stability. Large random time delays would make any form of feedback, whether video or force, worthless if the different system parts are not synchronized, since receiving feedback that does not reflect the most up-to-date state of the robot would result in the wrong control commands being sent. But the existence of random time delay makes time synchronization impossible to achieve. That is why event-synchronization was presented in [70].
Event-synchronization ensures that the feedback received corresponds to the most up-to-date state of the system, and thus that the right control signal is sent. As for the tele-cooperation system with multiple operators and multiple robots shown in Figure 4.4, it can be modeled using Petri Nets as seen in Figure 4.7.

Figure 4.9: The reduced form of the model shown in Figure 4.7.

Figure 4.10: The coverability tree of the reduced model.

To facilitate the analysis of the Internet-based multi-operator multi-site collaborative teleoperation system, while preserving the properties of boundedness and liveness, the system model is reduced to the one shown in Figure 4.9 [83]. The reduction rules used for this purpose are shown in Figure 4.8. It is worth mentioning that the model could have been further simplified, but the one presented is simple enough to develop a reasonable-size coverability tree.

The reduced system's coverability tree is shown in Figure 4.10. From this tree it can be deduced that the system is bounded and live. Moreover, this coverability tree conveys that the system is deadlock-free, since all the markings in the tree have enabled transitions [83]. Also, following an analysis similar to the one done for the general teleoperation system, it can be deduced that this system is also event-synchronized. This link between Petri Net properties and event-synchronization is formally developed based on the definition given earlier:

Definition 3 An event-synchronized system is one in which all the signals in the system are always referencing the same event-based reference.
For the control and the feedback signals no tolerance is allowed; therefore, they should have the same event-based reference at any instant. This is required so that the feedback being rendered is guaranteed to be a reflection of the most up-to-date state of the system. This definition translates into two requirements: first, no two instances of the same signal can coexist anywhere in the network; second, every feedback instant or sample should generate one and only one command response, and vice versa.

These two event-synchronization requirements can be translated into Petri Net properties. First, there is a need to ensure that the Petri Net will not deadlock, which formally means the net has to be shown to be "live" [83]. Second, it is required not to have two instances of the same signal at any point, that is, no accumulation of signals at any point within the system. This translates into having either one token or none in any place in the net, for which the Petri Net has to be "safe" [83]. This, however, does not guarantee the absence of instances of the same signal in other places in the network. For this to be true, and to satisfy the second condition of event-synchronization, the Petri Net model has to be "fair" with a reproduction vector x, where x = [1, 1, ..., 1] [91]. A fair net with such a reproduction vector guarantees that all transitions fire equally many times, which implies that once a transition fires it cannot fire again until all the transitions in the network have fired. So the system executes each step once and only once in each communication cycle, which implies that one and only one control signal will be generated for each feedback sample, and vice versa.
Formally, these conditions and their implications can be stated as:

Theorem 4 A system is event-synchronous if and only if the Petri Net describing the system's communication protocol is live, safe and fair with a reproduction vector x_{n×1}, where x = [1, 1, ..., 1] and n is the number of transitions in the Petri Net.

Proof. Necessary. Assume a Petri Net is not safe ⇒ there exists at least one place that has more than one token at the same time, and since tokens represent signals in the system ⇒ there exist two instances of the same signal at the same time in the system, having different event-based references ⇒ the system is not event-synchronous.

Sufficient. Given that the net is live, deadlock cannot occur. In addition, if a Petri Net is safe it is 1-bounded, which means that the number of tokens in each place does not exceed 1 for any marking reachable from the initial marking, M_0 [83]. For the communication this implies that there can only be one instance of a signal at any point in the system. However, this does not guarantee that other instances do not exist at other points. This is accomplished by requiring the system to be fair with a reproduction vector x_{n×1}, where x = [1, 1, ..., 1] and n is the number of transitions in the Petri Net. A reproduction vector, x, is a minimal nonzero T-invariant, where a T-invariant is an n-vector x ≥ 0 such that A^T x = 0. A is the incidence matrix of the Petri Net, where A = [a_ij] is an n×m matrix of integers whose typical entry is given by

a_ij = a_ij^+ - a_ij^-

where a_ij^+ is the number of arcs from transition i to its output place j and a_ij^- is the number of arcs to transition i from its input place j [91]. An n-vector is a T-invariant if and only if there exists a marking M_0 and a firing sequence σ from M_0 back to M_0
This simply means that a reproduction vector corresponds to the minimum number of times each transition has to fire for the initial marking to recur. Since a fair network has at most one reproduction vector it implies that arm is unique. Given that x = [1, 1, ...1], then each transition has to happen once for the initial marking to recur. This also means that a transition cannot fire more than once before all the other transitions have fired because they are in a fair relation [91]. So a new instant of a signal will not be generated unless the earlier one has been processed. Therefore, this ensures the second requirement for event—synchronization. O The Petri Net design procedure that results in event-synchronous systems is de— veloped in the next chapter. 4.3 Comparison with Other Models and Approaches There are some common methods used in the literature to model and analyze tele- Operation systems, the most popular ones are networks, wave variables and dynamic equations. The modeling and analysis approach examined here is based on network modeling, which was used by Anderson in [4] [32]. Anderson bases his analysis on the fact that for the system to be stable it should be passive. Using the analogy between mechanical and electrical systems he represented a teleoperator as a network, as shown in Figure 4.11. In this model, the master, cOrnmunication block, and slave are represented by two-port, as for the operator and environment they are represented by one-port. An n-port is characterized by the relationship between effort F (force, voltage) and flow 1) (velocity, current). This 60 f\ A (N A 1 \f H K; 1| 3 l[ \f 1 Human Master Commun- Slave Environ- Operator F Fmd ication F5 Fe ment 1 n l I n I I A J l A l U V U \f Figure 4.11: Network representation of teleoperator. The different terms are as ex- plained in Table 4.1. V1 V2 —> —> | | + + F1 “ F2 Figure 4.12: Transformer. 
The relationship for a one-port is specified by its impedance Z(s) according to

F(s) = Z(s)v(s)    (4.27)

where F(s) and v(s) are the Laplace transforms of F(t) and v(t). As for the two-port, the relationship is

[ F_1(s)  ]   [ h_11(s)  h_12(s) ] [ v_1(s) ]        [ v_1(s) ]
[ -v_2(s) ] = [ h_21(s)  h_22(s) ] [ F_2(s) ] = H(s) [ F_2(s) ]    (4.28)

where H(s) is the hybrid matrix and F_1, F_2, v_1 and v_2 are as defined in Figure 4.12.

For the system to be stable, its different components have to be passive. That is, they can dissipate energy but cannot increase the total energy of the system they are part of. This is why Anderson modeled all the components of the system using passive circuit elements. Then, to achieve a stable system under time delay, he chose a control law that makes the communication block behave identically to a two-port lossless transmission line, which is a passive element. The derived control law for the communication circuit is

F_md(t) = F_s(t - T) - v_sd(t - T) + v_m(t)    (4.29)
v_sd(t) = v_m(t - T) - F_s(t) + F_md(t - T)    (4.30)

where T is the delay and the other terms are explained in Table 4.1.

Comparing this approach and model to the ones developed in this research, several differences are found. The stability analysis in this case relies on the specific model used, whereas the stability analysis done for event-based control systems is independent of the specific system developed. Moreover, the model assumes a constant delay that is the same in both directions of the communication, whereas no assumptions regarding the delay are made in the analysis and modeling developed in this document. In addition, the human model is assumed to be passive, which is not the case for the research conducted here. Similar limitations and differences apply to the other modeling and analysis techniques proposed in the literature.

The next chapter presents the design procedures required to develop stable, event-transparent and event-synchronous real-time teleoperation systems.
CHAPTER 5

DESIGN OF EVENT-BASED REAL-TIME TELEOPERATION SYSTEMS

For real-time teleoperation systems to become commercially attractive they have to be safe and efficient. In other words, some performance measures, such as stability, event-transparency and event-synchronization, have to be guaranteed. This chapter gives the design procedures required to accomplish this guarantee.

5.1 Design of Stable and Event-Transparent Teleoperation Systems

To achieve stability, the system has to be designed as an event-based control and planning system. That is, the signals in the system have to be sampled with respect to an action reference other than time. Equivalently, this implies that the system components reference a non-time-based action reference, referred to as the event-based action reference. For these systems to be stable, the event-based reference has to be a non-decreasing function of time, as discussed in Section 3.2.

As for event-transparency, additional conditions have to be satisfied. Note that being an event-based control system gives

F_h(s) = αF_e(s) and V_h(s) = βV_e(s)    (5.1)

where F_h and V_h are the supermedia and commands generated by the joystick respectively. Similarly, F_e and V_e are the slave supermedia and control respectively, α and β are simply scaling factors, and s is the action reference. However, this condition is not enough for the system to be event-transparent. That requires the event-based reference, s, to be chosen in a way that ensures F_e(s)_rd = F_e(s)_nd, where F_e(s)_rd and F_e(s)_nd are the supermedia sensed by the system under random-delay and no-delay conditions respectively. Additionally, the event-based sampling period, S, has to satisfy the following condition discussed in Section 3.3:

π/(KS) > ω    (5.2)

where K is the scaling between the event-based reference and time, and the supermedia detected is band-limited by ω. The major difficulties in selecting s are the many uncertainties in the system.
The main uncertainties are the path (trajectory) and the environment. The specific choice of s will depend on the type of feedback being used, the workspace size (macro or micro) and the performance requirements. Typically, the event s is taken to be a function of the physical output; for example, the distance to the origin, the angle or the absolute position.

However, the event-based reference can also be a virtual one that does not correspond to any physical quantity. For example, it can be chosen to be the number of control cycles the system has performed. This can be achieved by using event-based planning and control, with the event taken as the command number. So if the operator is currently sending the nth command, then the event is n. This choice of the reference is intuitive and does not require the operator to know its value, since the communication procedure will ensure the specific behavior of the system regardless of the operator's commands.

To describe event-based control and planning from a signals perspective, the communication of control and feedback signals is described here. This discussion applies to all event-based systems regardless of the specific reference used. The sequence starts with the operator placing the joystick in a certain position that corresponds to a command. This position can be translated into a position or velocity command. The behavior of the joystick is similar to the gas pedal in a car: moving away from the center generates a larger value for the command. The command vector is then sent to the robot. This is the point where the choice of a specific reference alters the behavior slightly. For the first case, where the reference is chosen to be the sequence number, once the command arrives it is immediately forwarded to the obstacle avoidance routine. If a command is not received for some time, the robot will time out and wait for a new command.
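This first case can be made concrete with a small sketch. The queue stands in for the network connection, and all names, the timeout value and the commands are illustrative assumptions, not details from the dissertation's implementation.

```python
# Sketch of the first reference choice: the event is the command sequence
# number n, and the robot times out when no command arrives in time.

import queue

def robot_server(commands, timeout_s=0.05, cycles=3):
    n = 0                      # event-based reference: commands processed
    log = []
    for _ in range(cycles):
        try:
            cmd = commands.get(timeout=timeout_s)
        except queue.Empty:    # timed out: keep waiting for a new command
            log.append("timeout")
            continue
        n += 1                 # the event advances with each command
        log.append(f"event {n}: forwarded {cmd} to obstacle avoidance")
    return n, log

commands = queue.Queue()
commands.put((0.3, 0.0, 0.0))  # two velocity commands from the operator
commands.put((0.3, 0.0, 0.1))
n, log = robot_server(commands)
print(n, log[-1])              # 2 timeout
```

Because the event is just a counter, the operator never needs to know its value; the server advances it as commands are consumed.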
As for the second case, where the reference is chosen to be a physical output, the command is not forwarded to the obstacle avoidance routine until the start of the next period. For example, if the reference is taken as the distance the robot has traveled, with a sampling period of two centimeters, then the new command will not be processed until the robot moves the two centimeters. If a command is not received before the robot has already moved the amount of the sampling period, the robot waits for the new command, and in that case the command is processed immediately on arrival.

Once the obstacle avoidance routine receives the command, it makes decisions based on the results obtained from the proximity sensing units. The decision might be to execute the command as is, to reduce its value and execute, or not to move at all. The modified command is then sent to the local low-level controller of the robot or robots. Once this command is handed over to the controller, the supermedia to be fed back is sensed and sent to the remote site. At the remote site, each feedback type is given to the corresponding rendering device. For example, the joystick local controller receives the force and the temperature rendering device receives the temperature. These devices render the supermedia, and then the whole sequence starts again.

In this operation none of these steps can be executed out of order; each event is triggered by the end of the previous one. What should be clear here is that during all this time, although the operator can move the joystick, the commands will not be sampled by the joystick until feedback is received and rendered. This ensures that every force the operator feels is the most recent one and that it corresponds to the current state of the robot. In addition, this procedure guarantees that the new command is the next one to be executed and no other commands are pending.
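The command/feedback cycle just described can be summarized as a sketch. Each step is triggered by the completion of the previous one, so no event can occur out of order; the function bodies are illustrative stand-ins for the real joystick, obstacle avoidance routine, controller and rendering devices.

```python
# Sketch of one event-based communication cycle: command sampling,
# obstacle avoidance, execution, supermedia sensing and rendering.
# All function bodies are hypothetical placeholders.

def read_joystick():          return (0.4, 0.0, 0.1)   # sampled command
def obstacle_avoidance(cmd):  return cmd               # may scale or zero it
def execute(cmd):             pass                     # low-level controller
def sense_supermedia():       return {"force": 0.2, "temperature": 36.0}
def render(feedback):         pass                     # joystick, temp display

s = 0                         # event-based reference: completed cycles
for _ in range(3):
    cmd = obstacle_avoidance(read_joystick())  # sampled only after the
    execute(cmd)                               # previous feedback was
    feedback = sense_supermedia()              # rendered
    render(feedback)
    s += 1
print(s)                      # 3: three complete, ordered cycles
```

Because the joystick is sampled only after rendering completes, every force felt by the operator corresponds to the state produced by the previously executed command.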
The next section gives a formal design procedure, based on Petri Nets, that results in event-synchronous systems.

5.2 Design of Event-Synchronous Teleoperation Systems

Petri Nets have proven to be a very beneficial tool for the modeling and analysis of different types of systems. Event-based control systems are one type where Petri Nets can be used to model the communication and control logic. Both continuous and discrete aspects can be captured. More importantly, Petri Nets can be used to design and analyze different characteristics of the system. Event-synchronization is one of the performance measures that can be designed and tested using Petri Nets. To illustrate this, the following recursive design algorithm is proposed:

1. Make the system live.
2. Make the system safe.
3. Make the system fair with reproduction vector x_{n×1}, where x = [1, 1, ..., 1].
4. Check if the system is still live; if not, go to step 1.
5. Check if the system is still safe; if not, go to step 2.

Figure 5.2: The equivalent reduced model of the system shown in Figure 5.1.

To demonstrate this design method the following representative examples are considered. The first example is a single-operator single-robot system, and the second is a multi-operator multi-robot system. Figure 5.1 shows the Petri Net model of a teleoperation system that is not event-synchronous. From the model it is clear that commands and feedback are sampled and transmitted independently of each other. This model can be reduced, to simplify the design and analysis, to the model shown in Figure 5.2.

It is clear that this system is not safe, since transition q3 can fire infinitely many times before transition q4 fires even once, accumulating an infinite number of tokens in place p5. To satisfy safety and part of event-synchronization, places p2 and p5 cannot have more than one token simultaneously.
This condition can be written as

μ_2 + μ_5 ≤ 1    (5.3)

The model shown in Figure 5.2 can be modified to satisfy the condition in eq.(5.3) using Petri Net design procedures [92]-[94]. To modify the original net, referred to as the "process net", it is required to design a "controller net", which is made up of the process net's transitions and a separate set of places. Each condition to be satisfied requires one additional place in the controlled net, called the "slack". The purpose of this slack is to receive the excess tokens from the places in the condition, thus ensuring that it is satisfied. The result is a "controlled system" or "controlled net" with a composite change matrix D made up of both the process net composite change matrix D_p and the controller net composite change matrix D_c, where the composite change matrix is the transpose of the incidence matrix described earlier. The composite change matrix of the process Petri Net shown in Figure 5.2 is

      [  0   0   0   0 ]
      [  1  -1   0   0 ]
D_p = [  0   0   0   0 ]    (5.4)
      [  0   0   0   0 ]
      [  0   0   1  -1 ]
      [  0   0   0   0 ]

with initial marking

μ_p0 = [1  0  1  1  0  1]^T    (5.5)

It is required to control this net so that the condition in eq.(5.3) is satisfied. This condition or constraint is of the general form

L μ_p ≤ b    (5.6)

where μ_p is the marking vector of the Petri Net, L is an n_c × m matrix (m being the number of places), b is an n_c × 1 vector and n_c is the number of constraints of the type shown in eq.(5.3) [92]. The "slack" or controller places mentioned earlier transform the inequality constraints into equality conditions of the form

L μ_p + μ_c = b    (5.7)

where μ_c is an n_c × 1 vector which represents the marking of the controller places. Therefore, the specific case shown in eq.(5.3) can be written as

μ_2 + μ_5 + μ_s = 1    (5.8)

where in this case L = [0 1 0 0 1 0], b = 1 and μ_c = μ_s. The constraint described by eq.(5.8) represents a place invariant and must satisfy the place invariant equation

[L  I] · D = 0 ⇔    (5.9)
L·D_p + D_c = 0 ⇔    (5.10)
D_c = -L·D_p    (5.11)

where I is an n_c × n_c identity matrix [92].
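Eqs. (5.11) and (5.12) can be evaluated numerically as a quick check. The sketch below assumes the matrices of eqs. (5.4) and (5.5) as reconstructed here; NumPy is used only for the matrix products.

```python
# Sketch: synthesizing the controller ("slack") place for the constraint
# mu_2 + mu_5 <= 1 via D_c = -L.D_p (eq. (5.11)) and mu_c0 = b - L.mu_p0
# (eq. (5.12)).  D_p and mu_p0 follow eqs. (5.4)-(5.5).

import numpy as np

Dp = np.array([[ 0,  0,  0,  0],      # p1
               [ 1, -1,  0,  0],      # p2
               [ 0,  0,  0,  0],      # p3
               [ 0,  0,  0,  0],      # p4
               [ 0,  0,  1, -1],      # p5
               [ 0,  0,  0,  0]])     # p6
mu_p0 = np.array([1, 0, 1, 1, 0, 1])

L = np.array([[0, 1, 0, 0, 1, 0]])    # selects mu_2 + mu_5
b = np.array([1])

Dc = -L @ Dp                          # controller net change matrix
mu_c0 = b - L @ mu_p0                 # initial marking of the slack place
print(Dc.tolist(), int(mu_c0[0]))     # [[-1, 1, -1, 1]] 1
```

The computed row [-1, 1, -1, 1] is exactly the slack-place row appended to the composite change matrix of the controlled net, and the slack place starts with one token.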
In this case the controller net composite change matrix is D_c = [-1  1  -1  1]. In addition, the initial marking of the controller net can be calculated using eq.(5.7) and the initial marking vectors:

L μ_p0 + μ_c0 = b ⇔ μ_c0 = b - L μ_p0    (5.12)

which implies in this case that μ_s0 = 1. This results in a controlled net, as shown in Figure 5.3, having the following composite change matrix and initial marking:

                  [  0   0   0   0 ]
                  [  1  -1   0   0 ]
    [ D_p ]       [  0   0   0   0 ]
D = [ D_c ]   =   [  0   0   0   0 ]    (5.13)
                  [  0   0   1  -1 ]
                  [  0   0   0   0 ]
                  [ -1   1  -1   1 ]

μ_0 = [μ_p0; μ_c0] = [1  0  1  1  0  1  1]^T    (5.14)

Figure 5.3: Petri Net model of the controlled net.

Figure 5.4: The coverability tree of the net shown in Figure 5.3.

The dotted part of the net represents the "controller" net, which is required for the "controlled" net to satisfy the constraint of eq.(5.3). Since this model is bounded, the coverability tree can be used to study its liveness and safety [83]. The coverability tree of the net shown in Figure 5.3 is given in Figure 5.4. From this coverability tree it is clear that the system is live, since there are no deadlocked markings, and that the system is safe, since all the markings have only zeros and ones in them [83].

Figure 5.5: The modified net required to satisfy fairness.

To check if the system is fair with reproduction vector x_{n×1}, where x = [1, 1, ..., 1], the necessary condition used is that, for a fair net, if x is a nonzero n-vector of non-negative integers such that A^T x ≥ 0, then every entry of x must be positive, where A is the incidence matrix. To test this, the incidence matrix of the system shown in Figure 5.3 is generated:

    [ 0   1   0   0   0   0  -1 ]
A = [ 0  -1   0   0   0   0   1 ]    (5.15)
    [ 0   0   0   0   1   0  -1 ]
    [ 0   0   0   0  -1   0   1 ]

It is easy to see that x = [1 1 0 0]^T and x = [0 0 1 1]^T result in A^T x = 0, so x = [1 1 1 1]^T is not a reproduction vector. Also, not every entry of these vectors is positive, so the net is not even fair.
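This fairness failure can be checked numerically. The sketch assumes the incidence matrix of eq. (5.15) as reconstructed here and uses NumPy for the products and the rank.

```python
# Checking eq. (5.15): both [1,1,0,0] and [0,0,1,1] are T-invariants
# (A^T x = 0) with zero entries, so the net of Figure 5.3 is not fair.
# Its incidence matrix also has rank 2 rather than n - 1 = 3.

import numpy as np

A = np.array([[0,  1, 0, 0,  0, 0, -1],    # q1
              [0, -1, 0, 0,  0, 0,  1],    # q2
              [0,  0, 0, 0,  1, 0, -1],    # q3
              [0,  0, 0, 0, -1, 0,  1]])   # q4

for x in ([1, 1, 0, 0], [0, 0, 1, 1]):
    print((A.T @ np.array(x) == 0).all())  # True: a T-invariant with zeros
print(int(np.linalg.matrix_rank(A)))       # 2
```

The two independent T-invariants mean commands and feedback can cycle independently, which is precisely the lack of fairness the text identifies.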
If the net were fair with a reproduction vector, the incidence matrix A would have to have a rank of n - 1, where n is the number of transitions. This is not the case for this incidence matrix, which has a rank of 2. In order to satisfy all these conditions and render the net fair, it is modified to the net shown in Figure 5.5. The change proposed is to split place 7 into two places, thus increasing the dimension of A.

Figure 5.6: The coverability tree of the net shown in Figure 5.5.

To study the fairness of the system shown in Figure 5.5, its incidence matrix is constructed:

    [ 0   1   0   0   0   0  -1   0 ]
A = [ 0  -1   0   0   0   0   0   1 ]    (5.16)
    [ 0   0   0   0   1   0   0  -1 ]
    [ 0   0   0   0  -1   0   1   0 ]

From this incidence matrix it is possible to solve for the reproduction vector x = [x_1 x_2 x_3 x_4]^T using A^T x = 0. This results in the following system of equations:

x_1 - x_2 = x_3 - x_4 = -x_1 + x_4 = x_2 - x_3 = 0    (5.17)
⇒ x_1 = x_2 = x_3 = x_4

Figure 5.7: The Petri Net model at different design stages for an event-synchronous multi-operator multi-robot teleoperation system.
To further check, the rank of A is n — 1 = 3 and the reproduction vector is unique which are two properties of fair nets [91]. Checking liveness and safety properties is done using the coverability tree, which is Shown in Figure 5.6. Since all the markings contain only ones and zeros then the net, is safe, and since no marking is labeled “dead” then the net is live. These steps illustrated using a Petri Net design procedure to obtain an event- SB’Iichronous single robot single operator teleoperation system. This system can be fllrther developed to include multi-operators and multi-robots. The model of such a system at different design stages is shown in Figure 5.7. The solid line parts model the original stand alone system, the dotted parts, which include transitions 9 and 10 and places 17, 18 and 19, are added at the first design 74 l, l,0,0.0 91 ( ) (0,110.10) 92 (0.0.0 0.1.1) (0,0,1 ,l,l,0) 94 (1.0.1.0.o.0)°1d Figure 5.8: The resultant net from reducing the net shown in Figure 5.7 and its coverability tree. stage and the bold dotted parts, which include place 20, are added at the second and final design stage. Obviously the two stand alone systems are not event-synchronous since transi- tions in each fire irrespective of the other system’s state and thus they are not fair. Therefore, there is a need for the two servers to communicate in order to convey state information. For this purpose transitions 9 and 10 and places 17, 18 and 19 are added to the net in order to send state information from one robot to the other. However, this is not sufficient because server 1 can still execute more times than server 2 in Which case place 17 would have more than one token. This implies that the system is Still not fair and even not safe. For this to be remedied server 2 is also required 130 send its state information, which is done by adding place 20. 
These additions compose a two-way handshake between the robots, which guarantees liveness, safety and fairness, and thus event-synchronization. To illustrate this, the controlled net in Figure 5.7 is analyzed. First, it is reduced to the net shown in Figure 5.8, with the coverability tree shown in the same figure. Since the coverability tree has no "dead" states, and since its markings contain only ones and zeros, the net is live and safe.

To study the fairness of the system shown in Figure 5.7, its incidence matrix A is constructed (eq. 5.18): the 10 x 20 matrix whose rows correspond to transitions 1-10 and whose columns correspond to places 1-20, built from the controlled net in the same way as the matrix of eq.(5.16). From this incidence matrix it is possible to solve for the reproduction vector x = [x1 x2 x3 x4 x5 x6 x7 x8 x9 x10]^T using A^T x = 0. This results in the following system of equations:

    x1 - x2 = x3 - x4 = -x1 + x4 = x2 - x3 = 0
    x5 - x6 = x7 - x8 = -x5 + x8 = x6 - x7 = 0
    x9 - x10 = -x2 + x9 = -x6 + x10 = -x2 + x5 = 0           (5.19)
    => x1 = x2 = x3 = x4 = x5 = x6 = x7 = x8 = x9 = x10

To check for fairness, x is considered as an n-vector of non-negative integers with x ≠ 0, which implies that x1 = x2 = ... = x10 = k, where k is a positive integer. So every entry of x must be positive, and since the net is bounded for any initial marking, the net is fair [91]. The reproduction vector is the minimal vector satisfying A^T x = 0; so for this system, k = 1 and x = [1 1 1 1 1 1 1 1 1 1]^T. Therefore, the controlled system is safe, live and fair, which implies it is event-synchronous.

The design procedures discussed in this chapter constitute a general methodology for implementing real-time event-based teleoperation systems.
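The coverability-tree checks used throughout this chapter (safety: every reachable marking contains only ones and zeros; liveness: no "dead" marking) can be sketched as an exhaustive reachability search. The net below is a hypothetical two-place cycle standing in for the reduced net of Figure 5.8, not the actual net:

```python
def reachable_markings(pre, post, m0):
    """Enumerate all markings reachable from m0 by exhaustively firing
    enabled transitions (pre[t]/post[t]: tokens consumed/produced per place)."""
    seen, frontier = {m0}, [m0]
    while frontier:
        m = frontier.pop()
        for t in range(len(pre)):
            if all(m[p] >= pre[t][p] for p in range(len(m))):      # t enabled
                m2 = tuple(m[p] - pre[t][p] + post[t][p] for p in range(len(m)))
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
    return seen

def is_safe(markings):
    """Safe: every reachable marking holds at most one token per place."""
    return all(all(v <= 1 for v in m) for m in markings)

def is_live(markings, pre):
    """No dead marking: every reachable marking enables some transition."""
    return all(any(all(m[p] >= pre[t][p] for p in range(len(m)))
                   for t in range(len(pre))) for m in markings)

# Hypothetical two-place cycle net standing in for the reduced net of Figure 5.8.
pre  = [(1, 0), (0, 1)]     # q1 consumes from p1, q2 from p2
post = [(0, 1), (1, 0)]     # q1 produces into p2, q2 into p1
R = reachable_markings(pre, post, (1, 0))
print(sorted(R), is_safe(R), is_live(R, pre))   # [(0, 1), (1, 0)] True True
```

For bounded nets the reachability set is finite, so this exhaustive search terminates and performs exactly the safety and liveness tests read off the coverability tree in the text.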
This framework applies to any real-time teleoperation system regardless of the robot, the environment model, the human operator and the specific network used. Importantly, based on this design, the system stability, event-transparency and event-synchronization are guaranteed regardless of the delay encountered.

The importance of these properties is demonstrated in the next chapter, which derives the conditions required to achieve safe and reliable multi-robot tele-coordination. The concepts of coordination for multiple robots and of the coordination index, which is a quantitative measure of coordination quality, are introduced. In addition, it is proved that event-transparency and event-synchronization are the requirements for achieving any level of tele-coordination feasible under no-delay conditions.

CHAPTER 6

MULTI-ROBOT TELE-COORDINATION

Cooperating robots have been the topic of several research efforts ranging from autonomous manipulation to remote control [95]-[105]. In many scenarios, cooperation offers additional safety, efficiency and feasibility. For instance, an object might be too heavy or too fragile to be lifted by one robot only. Combined with the Internet, tele-cooperation provides the means for multiple experts at multiple sites to cooperate using multiple robots. However, not all multi-robot cooperation systems, such as the one shown in Figure 6.1, are considered coordinating systems. Coordination refers more specifically to the case where two robots are in contact with the same object or with each other. This mechanical contact results in close coupling between the robots and thus requires higher precision, stricter synchronization and a higher level of coordination. Coordinating robots, and more specifically tele-coordinating robots, where the robots are not autonomous but teleoperated, pose numerous technical difficulties. These difficulties relate to the synchronization of operation, coordination strategies and constraints on the manipulated object.
When it comes to tele-coordination, especially Internet-based, there are several additional challenges facing the achievement of synchronous operation, efficient coordination and safe manipulation. These challenges result from the randomness of the time delay encountered over the Internet and the uncertainties in the environment, the object manipulated and the specific task path/trajectory.

Traditionally, multi-manipulator coordination can be divided into centralized control [95] [96], where a single controller controls all the manipulators, and decentralized control [97] [98], where each robot is controlled by its own controller. The decentralized approach overcomes the computational burden imposed on the central controller. Multi-robot tele-coordination can be categorized similarly, with some approaches being a combination of both.

Figure 6.1: The general structure of a multi-robot tele-coordination system.

Tele-cooperation research found in the literature shares some or all of the following limitations: non-deterministic time delay is not considered, the control is not bilateral, the robots are not coordinating, there is no objective measure of the coordination, and physically separated operators are not considered [99]-[102]. To overcome the challenges faced and the limitations found in the literature, event-based planning and control is proposed for multi-robot coordination systems.

6.1 Modeling and Analysis of Tele-coordination Systems

First, a measure of the tele-coordination level is established. In autonomous coordination, force has been the main measure of coordination. This has not been true for the case of tele-coordination. Tele-coordinating with multiple robots under certain force conditions is more difficult because of the uncertainties in the task and environment.
Considering multiple robots that are teleoperated to coordinate in moving an object, the smaller the internal force felt by the manipulated object, the better the coordination. This translates into having the least opposition possible between the robots. To relate the internal force and the opposition between robots, consider the case of an object manipulated by two robots as seen in Figure 6.2 [103] [104]. The grasp is assumed to be rigid and the applied forces, f1 and f2 ∈ R^3, are represented in a coordinate system fixed in the object frame and whose x-axis is along the object.

Figure 6.2: Two mobile manipulators moving an object.

Let e12 be the unit vector along the link from grasp 1 to grasp 2. Let f be the 6x1 force vector

    f = [f1^T f2^T]^T                                        (6.1)

When the object is in static equilibrium, the internal force, t, which is along the link between the grasps, and the applied forces, f, are related by f = Et, where E = [-e12^T e12^T]^T = [-1 0 0 1 0 0]^T. When the object is not in static equilibrium, the applied forces can be decomposed into two parts: a part which results in motion, f_e, and a part which results in tensions or internal forces, f_in. Therefore,

    f = f_in + f_e                                           (6.2)

where f_in = Et. Solving for t results in

    t = E^+ (f - f_e)                                        (6.3)

where E^+ is a left inverse [103]:

    E^+ = (E^T E)^{-1} E^T                                   (6.4)

For the two-robot case, E^+ = (1/2)[-1 0 0 1 0 0]. By definition f_e does not result in any tensions, so E^+ f_e = 0 and the internal forces are given by

    t = E^+ f                                                (6.5)

The general case of multiple robots is analyzed similarly and is discussed in [103]. From eq.(6.5) it is clear that

    t = (1/2)(-x1 + x2)                                      (6.6)

where x1 and x2 are the x components of f1 and f2 respectively. From eq.(6.6) it can be inferred that

    ||t|| <= (1/2)(max(x1) + max(x2))                        (6.7)

For coordination it is required to have a certain limit on the object's internal force, which can be achieved by limiting x1 and x2 as seen in eq.(6.7). Since max(x1) = max(||f1||) and max(x2) = max(||f2||), this limit can be accomplished by limiting the applied forces.
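The internal-force relations of eqs.(6.1)-(6.6) can be verified numerically; the force values below are hypothetical:

```python
import numpy as np

# Grasp matrix E for the two-robot case (x-axis along the object), eqs.(6.1)-(6.5).
E = np.array([[-1.0, 0.0, 0.0, 1.0, 0.0, 0.0]]).T      # 6x1

# Left inverse E+ = (E^T E)^{-1} E^T  (eq. 6.4) = (1/2)[-1 0 0 1 0 0]
E_plus = np.linalg.inv(E.T @ E) @ E.T
print(E_plus)                                          # [[-0.5 0. 0. 0.5 0. 0.]]

f = np.array([-3.0, 0.5, 0.0, 5.0, -0.5, 0.0])         # hypothetical stacked [f1; f2]
t = (E_plus @ f).item()                                # internal force, eq.(6.5)
print(t)                                               # (1/2)(-x1 + x2) = 4.0
```

With x1 = -3 and x2 = 5, the internal force t = (1/2)(3 + 5) = 4, agreeing with eq.(6.6).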
In addition, since the magnitude of a vector is the same in any frame, the applied forces can be limited in each robot's hand frame. However, in order not to limit the acceleration, only f_in needs to be limited and not the total f. Therefore, the maximum external force each robot senses, f_e, not including force due to object weight and acceleration, can be considered as an objective measure of the coordination quality or level. The smaller the external force sensed, the better the coordination, which in the ideal case translates into zero forces being detected. However, this ideal case is not feasible even in autonomous operation, because of disturbances and errors. An additional intuitive condition is that the task should be accomplished, because there is no value in considering coordination quality if the task is not accomplished. This concept can be formally stated as,

Definition 4 (Coordination): n robots are coordinating with tolerance ε if max(|f_ei(s)|) <= ε and the task is accomplished, where s is the action reference, ε ∈ R^+ ∪ {0} is called the coordination index and f_ei(s) ∈ R^3 is the external force sensed by robot i (i = 1, 2, ..., n). f_ei(s) is given by f_ei(s) = F_i(s) - m_i a_i(s) for the x and y components and by f_ei(s) = F_i(s) - m_i a_i(s) - m_i g for the z component. F_i(s) ∈ R^3 is the total force detected by robot i, m_i is the mass carried by robot i, a_i(s) ∈ R^3 is the acceleration of robot i and g is the gravitational constant. If ε = 0 the robots are said to be executing ideal coordination.

Therefore, establishing a certain level of coordination while guaranteeing task completion is a matter of limiting the force each robot is subjected to. This is similar to limiting the force each robot can execute. The force limited is the external force exerted on the robot, excluding the weight of the object manipulated and the inertial forces. Note that it is assumed the object is held by the robots with a rigid grasp.
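Definition 4 can be checked directly from sensor readings. A sketch (the readings and tolerance below are hypothetical, and task completion is not modeled):

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def external_force(F, m, a):
    """f_ei(s) of Definition 4: total sensed force minus the inertial term,
    with the carried weight removed from the z component."""
    fe = np.asarray(F, dtype=float) - m * np.asarray(a, dtype=float)
    fe[2] -= m * G
    return fe

def coordinating(forces, eps):
    """n robots coordinate with tolerance eps if max |f_ei(s)| <= eps over
    all robots and components (task completion must be checked separately)."""
    return max(np.max(np.abs(fe)) for fe in forces) <= eps

# Hypothetical readings for two robots at one action reference s.
fe1 = external_force(F=[0.4, 0.1, 20.0], m=2.0, a=[0.1, 0.0, 0.0])
fe2 = external_force(F=[-0.3, 0.0, 20.0], m=2.0, a=[0.1, 0.0, 0.0])
print(coordinating([fe1, fe2], eps=0.5))
```

Here each robot carries 2 kg, so 19.62 N of the 20 N sensed on z is the carried weight; the residual external forces stay within the tolerance.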
This implies that there is no relative motion between the robots' grippers and the object. Also, the weight carried by each robot is assumed to be constant and no weight redistribution occurs. This is not a limiting condition for objects with uniformly distributed mass. As for objects that have non-uniformly distributed mass, a real-time re-calibration algorithm should be devised based on the initial weight carried, the initial posture, and changes in the posture.

To accomplish any specified coordination index feasible under no-delay conditions, the robots have to satisfy certain specifications, as described in the next theorem.

Theorem 5: If n robots are event-transparent and event-synchronous, then they can be teleoperated under random delay conditions to coordinate to any tolerance ε which is feasible under no-delay conditions, where ε is the required coordination index.

Proof. Let V(s) be the path of the manipulated object required to accomplish the task. Corresponding to this path, under no-delay conditions, each robot will have a path referred to as V_ndi(s). V_ndi(s) is the path of robot i required to accomplish the task under no-delay tele-coordination conditions, with coordination index ε_nd,

    => max(|f_ndei(s)|) <= ε_nd

where f_ndei(s) is the external force sensed by robot i under no-delay conditions and given by

    f_ndei(s) = [ F_ndxi(s) - m_i a_ndxi(s)
                  F_ndyi(s) - m_i a_ndyi(s)
                  F_ndzi(s) - m_i a_ndzi(s) - m_i g ]        (6.8)

where F_ndi(s) is the total force detected by robot i under no-delay conditions, m_i is the mass carried by robot i, a_ndi(s) is the acceleration of robot i under no-delay conditions and g is the gravitational constant.
Since the robots are event-transparent [106],

    => f_rdei(s) = f_ndei(s)

where f_rdei(s) is the external force sensed by robot i under random delay conditions and given by

    f_rdei(s) = [ F_rdxi(s) - m_i a_rdxi(s)
                  F_rdyi(s) - m_i a_rdyi(s)
                  F_rdzi(s) - m_i a_rdzi(s) - m_i g ]        (6.9)

where F_rdi(s) is the total force detected by robot i under random delay conditions, m_i is the mass carried by robot i, a_rdi(s) is the acceleration of robot i under random delay conditions and g is the gravitational constant.

    => max(|f_rdei(s)|) = max(|f_ndei(s)|) <= ε_nd           (6.10)

Also, event-transparency implies that V_rdi(s) = V_ndi(s) [106], where V_rdi(s) is the path of robot i under nondeterministic delay tele-coordination conditions. Since the system is event-synchronous, all the robots are using the same reference s, and thus all V_rdi(s) are executed synchronously with respect to this reference. Therefore, the n robot paths V_rdi(s) (i = 1, ..., n) will result in the same object path, V(s). So the task is accomplished with a coordination index deduced from eq.(6.10) and equal to ε_nd. □

Corollary 6.1.1: n event-transparent and event-synchronous robots can ideally coordinate (ε = 0).

This is possible as long as ideal coordination is feasible, which might not be realistic in most scenarios. Even if humans are coordinating to move an object, ideal coordination is very difficult to achieve, because it would result in indefinite waiting. One of the humans has to move initially, and since perfect synchronization is not possible, this will cause some force on the other human's hand. The control strategy required to achieve coordination with a certain tolerance level is the topic of the next section.

6.2 Control of Tele-coordination Systems

A control strategy is required to attain coordination with a certain tolerance. This is accomplished for small tolerance by the following set of control laws:

- For all i, stop moving if max(|f_ei(s)|) >= ε.
- In case max(|f_ei(s)|) >= ε for any i, then for all i, if V_i(s) is in the direction of reducing f_ei, execute it; else discard it.

- Execute V_i(s) only when all V_i(s) are received, for all i.

The first law guarantees that a robot will not cause its force to exceed the required coordination index. This is sufficient in the case where only two robots are coordinating, because what one robot detects is the same as what the other detects, but in the opposite direction. However, in the case of more than two robots, each robot has to take into consideration the forces sensed by all the coordinating robots, because it is possible for only one robot to reach the coordination index but not the others. This is accomplished using the second law, which requires all the robots to operate only in the decreasing-force direction, even if only one robot has reached the limit. For this to be accomplished, each robot has to communicate the force it senses to the other robots, or at least inform the other robots that it has reached its force limit. This way, the other robots will not execute new commands unless they are in the decreasing direction of the force, which can be simply calculated using a dot product between the force sensed and the command desired. Explicitly communicating force information is not required in the two-robot case, where it is sufficient for each robot to base its movement decisions on its own state. These two laws guarantee that

    max(|f_ndei(s) - f_rdei(s)|) <= 2ε

Since ε is considered to be relatively small,

    => f_rdei(s) -> f_ndei(s)

Based on event-transparency, => V_rdi(s) -> V_ndi(s). This implies that each robot executes similar paths with respect to the event-based reference under no-delay and random delay conditions. However, in order for the object to follow its desired path too, the robots have to be event-synchronized. This is accomplished by the third law, which requires all the robots to utilize the same reference.
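The three control laws can be sketched as follows. The helper names are hypothetical, and a positive dot product between the commanded motion and the sensed force is taken as the "force-decreasing direction" (one interpretation of the dot-product test described above):

```python
import numpy as np

def may_execute(cmd, fe, eps, any_at_limit):
    """Laws 1 and 2 (sketch): a robot whose external force has reached the
    coordination index eps stops unless the command reduces the force; once
    any robot is at the limit, all robots accept only force-reducing commands.
    A positive dot product with the sensed force is taken as force-reducing."""
    if np.max(np.abs(fe)) >= eps or any_at_limit:
        return float(np.dot(cmd, fe)) > 0.0
    return True

def synchronized_step(commands):
    """Law 3 (sketch): execute only once the commands V_i(s) of all robots,
    for the same event reference s, have been received."""
    return all(c is not None for c in commands)

fe = np.array([1.0, 0.0, 0.0])   # hypothetical external force at the limit
print(may_execute(np.array([0.5, 0.0, 0.0]), fe, eps=1.0, any_at_limit=True))    # True
print(may_execute(np.array([-0.5, 0.0, 0.0]), fe, eps=1.0, any_at_limit=True))   # False
```

Moving along the sensed force relieves the opposition between the robots, so only the first command is allowed once the limit is reached.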
This control strategy results in the operator controlling the robot as long as the force detected is less than the coordination index. Once this limit is reached by one of the robots, they allow only movement that decreases this force.

The next chapter deals with the implementation and experimentation. Implementation details of several developed systems are given, and the experimental results and their implications are discussed.

CHAPTER 7

IMPLEMENTATION AND EXPERIMENTATION

To illustrate the generality of the design methodology and the theory developed, several different teleoperation systems were implemented and tested. This chapter presents the implementation details of these systems along with their experimental results. Most of these systems share a common hardware and software architecture and specifications, which are discussed in Section 7.1 and Section 7.2. Section 7.3 gives the specification and performance of the Internet connections utilized during the different experiments. The sections following these discuss the specific systems and their experimental results. Section 7.4 discusses the teleoperation of a mobile robot, giving the implementation details and several experimental results. Section 7.5 and Section 7.6 present a mobile manipulator teleoperation system using velocity and position control respectively. Section 7.7 and Section 7.8 cover the implementation and experimentation of multi-site multi-operator tele-cooperation and tele-coordination systems respectively. Section 7.9 gives the implementation and experimentation of a tele-autonomous robotic formation system. Section 7.10 details a supermedia enhanced teleoperation system with its experimental results. Section 7.11 illustrates the generality of the developed approach by describing the implementation and experimentation of a micro-manipulator teleoperation system.
7.1 General Hardware Architecture and Specifications

The diversity of the hardware used forced a careful study of integration and interfacing. As seen in Figure 7.1, which shows the general hardware architecture, and Table 7.1, which gives the specifications of the hardware components, the system consists of different operating systems and different configurations. So the problem of interconnection had to be studied carefully.

Figure 7.1: The hardware architecture shared by most of the systems developed.

Device         | Specification
---------------|--------------
Joysticks      | Microsoft SideWinder Force Feedback Pro; 3 degrees of freedom (dof) position, 2 dof force rendering
Haptic devices | Phantom 1.5 and Phantom Desktop; 6 dof position, 3 dof force rendering
Local PCs      | Pentiums running WIN98, WINXP, WINNT, WIN2000; interface to the joystick via the USB port, to the haptic devices via acquisition cards, and to the temperature rendering device via the serial port
Access point   | Proxim RangeLan2 radio access point; connects the robots to the Internet
Mobile robots  | Nomadic XR4000, 2 Pentium PCs on board running Linux and QNX; interface to the Internet via wireless ethernet; equipped with multiple types of sensors: infrared, ultrasonic, laser, bumper and contact/non-contact temperature sensors
Manipulators   | PUMA 560; equipped with force/torque sensors
Cameras        | Sony EVI-D30 cameras and X10 wireless camera
Temperature sensing and rendering system | Designed and implemented in house

Table 7.1: Specifications of most of the hardware used.

The hardware used in most of the experiments included all or part of the following:

- Joysticks: Programmable force feedback joysticks, which are used to obtain commands and render force.
The joysticks connect to the local PC, which runs the WIN98 or WINXP operating system, via the USB or the game port.

- Haptic Devices: Phantom devices, which are used to obtain position commands and render force. These devices interface to the local PC, which runs WINNT or WIN2000, via acquisition cards.

- Local PCs: Used to interface to and control the joysticks, haptic devices and temperature rendering device. Also used to communicate over the Internet with the robots. In addition, they are used to display real-time video streams.

- Access point: Creates the link between the Internet and the robots via wireless communication.

- Mobile Robots: The XR4000 mobile robots used are equipped with several sensors (infrared, ultrasonic, laser and bumper), which are used to detect and avoid obstacles, and they are equipped with contact and non-contact temperature sensors. The robots execute commands depending on the surrounding environment and based on an obstacle avoidance algorithm discussed later in this chapter.

- Manipulators: The PUMA560 manipulators used are equipped with 6 dof force/torque sensors.

- Cameras: The robots have a camera mounted on-board to allow visual feedback to the operator via the Internet. Other cameras are mounted overhead to provide a global view of the environment. These cameras can be controlled with the joystick to alter pan, tilt and zoom.

Figure 7.2: The remote temperature sensing and rendering system architecture.

Figure 7.3: The architecture of the temperature rendering device developed.

- Temperature Sensing and Rendering System: The remote temperature sensing and rendering system architecture is shown in Figure 7.2.
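The closed-loop behavior of the rendering device in Figure 7.3 (heating or cooling selected by switching voltage polarity, with a contact sensor closing the loop) can be sketched as a simple control law. The deadband, gain and 12 V bound below are assumptions, not the actual controller parameters:

```python
def drive_signal(target_c, actual_c, deadband=0.5, gain=2.0, v_max=12.0):
    """Closed-loop command for the thermoelectric device (sketch, assuming a
    simple proportional law; the text specifies only that polarity selects
    heating vs. cooling and that a contact sensor closes the loop).

    Returns (polarity, voltage): polarity +1 = heat, -1 = cool, 0 = off.
    """
    error = target_c - actual_c
    if abs(error) <= deadband:
        return 0, 0.0                        # close enough: both relay circuits off
    polarity = 1 if error > 0 else -1        # relay selection (heat vs. cool)
    voltage = min(v_max, gain * abs(error))  # bounded drive voltage
    return polarity, voltage

print(drive_signal(40.0, 25.0))   # (1, 12.0)  heating at full drive
print(drive_signal(10.0, 25.0))   # (-1, 12.0) cooling at full drive
print(drive_signal(25.2, 25.0))   # (0, 0.0)   within the deadband
```

In the real device this command would be written over RS232 to the controller driving the two solid-state relays and the Peltier element.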
The temperature is sensed in real-time using contact or infrared non-contact temperature sensors. These sensors interface to the robot via the Iserver, which is an Internet-embedded device. The robot then relays the temperature information to the rendering device via the Internet.

A temperature rendering device was developed to integrate within the Internet-based teleoperation system. This device, which has the architecture shown in Figure 7.3, is based on thermoelectric technology. Thermoelectric coolers/heaters are solid-state heat pumps that operate on the Peltier effect. The Peltier effect states that there is a heating or cooling effect when electric current passes through two conductors: a voltage applied to the free ends of two dissimilar materials creates a temperature difference. Therefore, the temperature on one end can be controlled by controlling the voltage and its polarity across the device. The controller interfaces with the computer via the serial port and controls the voltage across the device. The polarity is controlled based on whether cooling or heating is required, by switching between two circuits. A contact sensor is mounted on the device in order to feed back the actual temperature and attain closed-loop control.

7.2 General Software Architecture and Specifications

This section covers the general software architecture, which is shown in Figure 7.4, detailing the specifications of most of the routines developed, which were programmed using C++ and Microsoft Visual C++. In addition, the obstacle avoidance routine, which was used in most of the developed systems, is examined, and the virtual force generation algorithm is presented. The main difficulty in the software development was to have different operating systems (Linux, QNX, WIN98, WINNT, WIN2000, WINXP) communicate with each other. Here came the role of the communication protocol TCP/IP, which is supported by most operating systems. So the Internet was not only acting as a communication link but also as a translator.

In Internet communication there are usually two types of processes, servers and clients. Servers, as the name suggests, provide a service to clients. Typically, servers would run on the robot and clients would run on the remote machines. In the implementations done, TCP was used for communication between the clients and servers, and UDP was used for video transmission. TCP had to be used to guarantee the arrival of packets, since TCP handles retransmission of lost packets. If this were not the case, then lost packets would result in a deadlock, where either the client is waiting
So the Internet was not only acting as a communication link but also as a translator. In Internet communication usually there are two types of processes, servers and clients. Servers, as the name suggests, provide a service to clients. Typically, servers would run on the robot and clients would run on the remote machines. In the imple- mentations done, TCP was used for communication between the clients and servers and UDP was used for video transmission. TCP had to be used to guarantee the arrival of packets since TCP handles retransmission of lost packets. If this was not the case, then lost packets will result in a deadlock, where either the client is waiting 91 Via R5232 as Figure 7.4: The software architecture shared by most of the systems developed. Sonar Infrared Cameras Temperature , for the server or vise versa because of the synchronization between them. This is han- dled by TCP at a lower level than the application and is transparent to the operator. Also, haptic and temperature information does not require a large bandwidth and while TCP is acknowledging the reception of a packet the processes can go on with the execution; therefore, the overall performance of the system is not affected. In case TCP is not available on a certain operating system this will have to be handled at the application level. The following describes the specifications, operation and interaction of the servers, clients and general routines developed and shown in Figure 7.4. Note that this is a general architecture and was altered to fit the specific implementations. Therefore, some of these routines were not used in all systems or they might have been combined in one process or divided into several. These routines and their specifications are: 0 Data Acquisition Routines: These routines run on the PCs controlling the mo- bile and the manipulator. They are used to acquire data from the force/ torque sensors, laser, sonar, infrared and temperature sensors. 
This data is then used by other routines for feedback and closed-loop control. These routines interface to the different sensors via acquisition cards, serial ports or ethernet cards.

The following routines run on the PC controlling the manipulator:

- PUMA Motion Planner and Controller: This controls the arm based on either velocity or position commands, depending on the system used.

- Manipulator Server: This server receives commands from a client. In addition, it forwards commands to the PUMA motion planner and controller, and forwards commands to the mobile server in case the mobile manipulator is used. The communication with the clients and the mobile server is done using TCP/IP. This server also feeds back force information to the clients.

The following routines run on the PC controlling the mobile robot:

- Mobile/Temperature/Camera Server: This server receives commands either from a client or from the manipulator server using TCP/IP. Depending on the system, this server either controls the camera or forwards commands to the mobile motion planner and controller. It also feeds back force and temperature information.

- Mobile Motion Planner and Controller: This controls the mobile robot based on either velocity or position commands, depending on the system considered.

- Obstacle Avoidance: This routine runs on the mobile robot and will be discussed in detail in the following subsection.

- Video Server/Video Conferencing Tool (VIC): The video server tags each frame with the event reference, to be used by the video client for event-synchronization. These frames are then sent to the client via UDP using the Video Conferencing Tool (VIC). This video conferencing application was developed by the Network Research Group at the Lawrence Berkeley National Laboratory in collaboration with the University of California, Berkeley. VIC was used to transmit video feedback from the on-board and the overhead cameras to the operator.
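The event-reference tagging just described can be sketched as follows; the matching policy (render a frame only with the feedback carrying the same reference, dropping unmatched items) is an illustrative assumption:

```python
# Event-synchronizing supermedia streams (sketch): each feedback item is
# tagged with the event reference s by its server; the client renders a
# video frame only together with the force sample carrying the same s.

def synchronize(frames, forces):
    """frames and forces are lists of (s, payload) pairs tagged with the
    event reference s. Returns the triples rendered together; items with
    no counterpart (lost or late) are discarded."""
    force_by_s = dict(forces)
    return [(s, frame, force_by_s[s]) for s, frame in frames if s in force_by_s]

frames = [(1, "frame1"), (2, "frame2"), (4, "frame4")]  # frame for s=3 lost or late
forces = [(1, 0.0), (2, 0.1), (3, 0.4), (4, 0.2)]       # hypothetical force samples
print(synchronize(frames, forces))
# [(1, 'frame1', 0.0), (2, 'frame2', 0.1), (4, 'frame4', 0.2)]
```

Because frames travel over lossy UDP while force travels over TCP, keying both streams on the event reference rather than on wall-clock time is what keeps the supermedia feedback synchronized despite random delay.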
The following routines run on the remote PCs connected to the joystick and to the temperature rendering device:

- Video Client/Video Conferencing Tool (VIC): This client is used to event-synchronize frames, as received by VIC, with force and temperature.

- Mobile/Manipulator/Camera/Temperature Client: This client is used to interface with the MS joystick. It can be used to command the mobile or mobile manipulator velocity or to command the camera. It communicates with the Mobile/Camera server using TCP/IP. It also receives the force and temperature from the servers and forwards them to the joystick controller and the temperature device controller respectively.

- Joystick Controller: This is responsible for controlling the joystick and rendering the force received.

- Temperature Device Controller: This is responsible for controlling the temperature on the rendering device via RS232.

- Manipulator and Temperature Client: This client is used to interface with the Phantom haptic device. It is used to command the manipulator position and receive force and temperature feedback using TCP/IP.

- Phantom Controller: This is responsible for controlling the Phantom and rendering the force received.

The different routines running on the same machine communicate via shared memory. Note that several of these routines are in practice either combined or split into different processes. In addition, not all of them are used in all the systems developed.

Figure 7.5: The XR4000 mobile robot.

Obstacle Avoidance and Virtual Force Generation

Obstacle avoidance is an important safety feature in teleoperation systems. Because of delay, teleoperated robots need to have a certain level of intelligence. Since what the operator sees is not the most recent robot state, collisions might occur. These collisions can be avoided by employing an obstacle avoidance algorithm on the robot. A detailed explanation of the sensors available and their distribution on the robot is given.
Then their use for obstacle avoidance and virtual force generation is discussed. As seen in Figure 7.5, the XR4000 is a barrel-shaped mobile robot. It is equipped with four types of sensors:

- Tactile Sensors (Bumper): These provide information about physical contact with the environment. The Nomad XR4000 has 48 bi-level tactile sensors that surround its top and bottom perimeters. Additionally, it has 4 door bumpers on each door that sense contact between the top and the bottom perimeters. The sensors are divided into 6 sets, 2 per each of the 3 doors. Figure 7.6 gives the distribution and numbering of these sensors [107].

Figure 7.6: A top view of the robot that gives the distribution and numbering of the different sensors (sets 0-5: the top and bottom sets on the front, left and right doors, sensors #0-7 in each set).

- Infrared Proximity Sensors: These give range information to nearby objects (typically less than 30 to 50 cm away). The range is determined by emitting infrared energy using high-current LEDs and sensing the amount of returned energy with infrared photodiodes. Note that the returned energy is a function of the object's reflectivity, which is why these sensors perform poorly in distance measurements. The robot has 48 of these sensors, distributed just like the tactile sensors in 6 sets, and they have the same numbering shown in Figure 7.6 [107].

- Sonar Proximity Sensors: These provide range information to objects that are relatively far away (between 15 and 700 cm away). The distance is calculated by multiplying the speed of sound by the time of flight of a short ultrasonic pulse traveling to and from a nearby object. The robot has 48 of these sensors, distributed just like the others in 6 sets, and they have the same numbering shown in Figure 7.6 [107].
- Laser Proximity Sensor: This is a Sensus 550 "time of flight" laser range-finding system. It provides 180 degrees of distance information in a plane at 0.5-degree increments with a scanning rate of 20 Hz. This sensor is located inside the robot and fires from an opening in the front door.

Obstacle avoidance is accomplished according to the flow chart in Figure 7.7. First, all the bumper sensors are checked, which gives 360-degree coverage around the robot. If any is hit, the motion is stopped and a flag is set and sent back to the joystick, where a sudden jerking force is played to convey that a bump has occurred. If none is hit, the infrared sensors, 90 degrees from both sides of the desired motion, are checked. If any detects an object closer than d_c, which is a predefined programmable critical distance, the motion is stopped. If neither case is true, the ultrasonic sensors are checked along with the laser. Again, 90 degrees from both sides of the desired velocity are checked. Based on the one that gives the closest distance, d, the velocity is set to V_n, which is given in eq.(4.6), where V_n = f(d)V_m. So the velocity set is a fraction of the desired one, based on the predefined programmable function f(d).

Figure 7.7: Flow chart for deciding velocity based on sensors.

f(d) can be any function of distance, depending on the safety requirements. In most implementations f(d) is as shown in Figure 7.8. According to this specific function, the robot will slow down linearly when an object is detected between 0.5 meter and 1 meter, and it will stop if an object is detected closer than 0.5 meter.

Figure 7.8: The fraction of velocity that is deducted based on the distance d in meters to the closest obstacle.
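The function f(d) of Figure 7.8 and the flow chart of Figure 7.7 can be sketched as follows (the value of the critical distance d_c is hypothetical, and the ramp is assumed to run from 0 at 0.5 m to 1 at 1 m):

```python
def f(d):
    """Velocity fraction versus distance d (meters) to the closest obstacle,
    per Figure 7.8 (assuming the ramp runs linearly from 0 at 0.5 m to 1 at 1 m)."""
    if d <= 0.5:
        return 0.0
    if d >= 1.0:
        return 1.0
    return (d - 0.5) / 0.5

def safe_velocity(v_desired, bumper_hit, d_infrared, d_range, d_c=0.3):
    """Flow chart of Figure 7.7 (sketch). d_c = 0.3 m is a hypothetical value
    for the programmable critical distance; d_range is the closest sonar/laser
    reading within 90 degrees of the motion direction."""
    if bumper_hit or d_infrared < d_c:
        return 0.0                        # stop motion (a flag is also set on a bump)
    return f(d_range) * v_desired         # V_n = f(d) V_m

print(safe_velocity(1.0, False, 0.6, 0.75))   # 0.5
print(safe_velocity(1.0, False, 0.2, 2.0))    # 0.0 (infrared closer than d_c)
```

Any monotone f(d) can be substituted to make the slow-down region more or less conservative without changing the surrounding logic.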
As seen in Figure 7.9, two regions will exist around the robot: one that causes the robot to slow down and another that causes it to stop completely.

Figure 7.9: The two safety regions around the robot according to the direction of motion.

In case virtual force is desired, it is generated by calculating the difference between the actual velocity of the robot and the desired one. This difference is fed back to the joystick, where it is rendered. So the higher the tracking error, the higher the force, which enables the operator to deduce the distance to obstacles in the direction of motion. This gives the benefit of sensing an obstacle without contact and before the visual feedback conveys it, since force feedback is faster than visual feedback. However, not all the systems developed included this virtual force, since in some of them actual force was used.

7.3 Specification and Performance of Internet Connections

Concerning the networks that were used for experimentation in this project, there were three main communication setups: 1) a station in the Robotics and Automation Laboratory at Michigan State University was used to control the robot, 2) a machine in the Robot Control Laboratory at the Department of Automation and Computer-Aided Engineering at the Chinese University of Hong Kong¹ was used, and 3) a machine in the Center for Cooperative Research in Advanced Science and Technology at Nagoya University² was used.

¹The distance between East Lansing, where the robot was, and Hong Kong is about 7824 miles.

Figure 7.10: Round trip delay in milliseconds for packets transmitted between the robot and the MSU station.

Figure 7.11: Round trip delay in milliseconds for packets transmitted between the robot and the Hong Kong station.

The robot uses wireless Ethernet, which has a transmission rate of 1.6 Mbps.
The remote machines either use 10Base-T Ethernet, which has a transmission speed of 10 Mbps, or ISDN, which ranges from 56 Kbps to 128 Kbps.

²The distance between East Lansing, where the robot was, and Nagoya, Japan, is about 6464 miles.

Figure 7.12: Round trip delay in milliseconds for packets transmitted between the robot and the Japan station.

Figure 7.13: The frequency of control and feedback signals between the robot and the Hong Kong station.

The round trip delay between the robot and the teleoperating stations at MSU, Hong Kong and Japan is plotted in Figure 7.10, Figure 7.11 and Figure 7.12, respectively. As for Figure 7.13, it shows the frequency of operation; that is, the number of times per second that the robot receives a command and the operator feels force feedback during actual real-time control.

Figure 7.14: Round trip delay in milliseconds for packets transmitted between the robot and the Hong Kong station.

From these plots it is clear that the delay is random and cannot be easily predicted; therefore, any assumption made about the time delay will be substantially limiting. Figure 7.11 and Figure 7.14 give plots of the round trip delay in milliseconds during different days. These figures show that the delays change according to several factors: the time of day, the day of the week and the time of year at which the experiment is done. This means that the delays cannot be predicted, which makes real-time control over the Internet significantly more difficult.

7.4 Mobile Robot

The general form of a mobile teleoperation system is seen in Figure 7.15. In this system the operator sends velocity commands and the remote machine feeds back force and video information.
This section presents the implementation of an event-based teleoperation system for a mobile robot. In addition, the experimental results of several scenarios are discussed.

Figure 7.15: The general structure of the mobile teleoperation system developed.

7.4.1 System Implementation

The implementation is divided into two parts: hardware and software. The hardware section details the different components used, their specifications and their interconnections. As for the software section, the different modules and their interactions will be examined.

Hardware

The hardware architecture of this system is shown in Figure 7.16. The joystick used is the MS SideWinder Force Feedback Pro, and the mobile robot is a Nomadic XR4000. The components used are similar to the ones discussed in the general case.

Figure 7.16: The hardware architecture of the mobile teleoperation system.

Software

The main software developed can be divided into three main parts: mobile server, camera server and mobile/camera client.

Mobile Server: In this system, the main service is moving the robot and sending feedback. As seen in Figure 7.17, which is a flow chart of the mobile server, the mobile/camera client and the camera server, this is mainly what the mobile server does. However, the server does not execute the request blindly; it first checks the proximity sensors and, based on that, a decision is made according to the obstacle avoidance algorithm flow chart shown in Figure 7.7 and discussed previously. After the velocity to be set is decided, it is sent to the local controller for execution. Then the server checks the actual velocity of the robot, subtracts it from the original desired velocity Vm, and sends the difference back to the client as force feedback.
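One server cycle as just described can be sketched as follows (an illustration, not the actual server code; the two callables stand in for the robot's controller interface, and safe_fraction represents the output of the obstacle avoidance algorithm):

```python
def mobile_server_cycle(v_m, safe_fraction, set_velocity, get_actual_velocity):
    """One command/feedback cycle of the mobile server: execute the
    obstacle-avoidance-scaled velocity, then return the tracking error
    (desired minus actual), which is sent back to the client as force."""
    set_velocity(v_m * safe_fraction)
    return v_m - get_actual_velocity()
```

With no nearby obstacle (safe_fraction = 1) and perfect tracking the returned force is zero; as obstacles slow the robot down, the force grows with the tracking error.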
In the case where the action reference is chosen to be the control sequence number, the server executes each velocity command for 250 milliseconds; after that, if no new command has been received, it times out and stops moving while waiting for the next command. On the other hand, in the case where the action reference is chosen to be the distance traveled by the robot, the commands are not updated until the robot moves the distance period. If a new command is not received by the time the robot finishes the distance period, the robot stops moving and waits for the next command.

Figure 7.17: Mobile server, mobile/camera client, and camera server general flow chart.

Camera Server: This server is much simpler than the motion server; its job is simply to interface to the camera. This program provides the ability to control the on-board camera using VISCA commands. The operator sends pan, tilt and zoom commands, and the server relays these to the camera. This gives the operator the ability to change the field of view to allow close coupling with the environment. When the client sends camera commands (pan, tilt and zoom), the camera server sends these commands to the camera through the serial port and sends back to the client a confirmation of the request.

Mobile/Camera Client: This is the program that runs on the local machine and communicates with both servers and the joystick. The client sends commands to one of the two servers depending on whether a designated button is pressed.
If the button is not pressed, the joystick position is translated into velocity commands and sent to the mobile server. On the other hand, if the button is pressed, the position of the joystick is translated into pan and tilt commands for the camera, and two other buttons specify whether zoom in or zoom out is requested. In addition, based on which state the client is in, it either sends the force command back to the joystick once feedback is received or simply waits for an acknowledgment. The communication with the joystick is done with DirectX technology, and that with the servers is done over the Internet. In addition, VIC was used to transmit video feedback from the on-board and overhead cameras to the operator.

7.4.2 Experimental Results

Several experiments were done with this system, where two setups were used for experimentation. One is where the robot is controlled over the same LAN, and the other is where the robot is controlled over the Internet, either from Hong Kong or from Japan. The event-based reference was taken as either the control sequence number or the distance the robot traveled.

Teleoperation Over the Same LAN

There are three scenarios for the control over the same network: the first case is normal operation, the second case is a simulated 2 seconds of round trip delay, and the third case is simulated random time delay. For all these cases, the event-based reference is taken to be the control sequence number. First, to test the stability of the system, it was operated in a friendly environment with relatively far obstacles and the velocity tracking was examined.

Figure 7.18: The behavior of the system under normal operation with relatively far obstacles.

In this experiment, the operator tries to keep away from obstacles (about 1 meter away), but with no other constraints.
That is, there is no predefined path and the operator is free to move around randomly. Figure 7.18 shows a plot of the operation of the robot with most of the obstacles being more than 1 meter away. The first row shows the desired x and y velocities, the second row shows the actual x and y velocities, and the last row gives the distance detected at each event-based reference. All of these plots are with respect to the event-based reference, not time. It is clear that close tracking of the desired velocities is achieved, except when the robot gets to obstacles that are closer than a meter. So when no obstacles are found, the system shows close tracking performance and is stable. In addition, the system is event-synchronized, since the force rendered at a certain reference corresponds to the distance detected at that same reference value.

Figure 7.19: The behavior of the system under normal operation.

When the environment becomes more hostile, that is, when there are several close objects, the behavior of the system changes since obstacle avoidance now takes place. In all the following experiments the operator is not constrained; complete freedom is given to move around in any direction or fashion and to get as close to obstacles as desired. To understand the system operation under these conditions, consider Figure 7.19, which shows the results for the normal operation case. The first row gives a plot of time versus s, the event-based reference. It is clear that s is a non-decreasing function of time, which is crucial in ensuring the system stability. The other plot in the first row is the desired rotational velocity. The second row shows the desired velocities in the x and y directions, which are called Vm. The third row displays the actual velocities in both directions, which are called Va. The fourth row gives the forces that are rendered by the joystick in both directions, which are referred to as Tm. The last row displays the closest distance detected to obstacles in the environment.

Figure 7.20: Comparison between the desired velocities and the sum of the actual velocities and the forces felt.

Looking at the actual velocities in the third row, it is clear that they change according to the distance detected; for example, as the distance gets closer the actual velocity decreases. When this occurs, the fourth row shows that the force increases, as expected. Thus a close object causes the robot tracking error to increase, thereby conveying to the operator that an obstacle exists. Also note that whenever the robot gets to the critical distance of 0.5 meter, the actual velocity becomes zero, which applies to both the x and y directions. When the robot gets further away from obstacles, the actual velocity starts tracking the desired one again, thereby reducing the tracking error and therefore the force.

Figure 7.21: Comparison between tracking error and force felt for normal operation.
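The force rendering just described, where the joystick force follows the velocity tracking error and is smoothed before being played, can be sketched as follows (a minimal illustration; the first-order filter form and the alpha value are assumptions — the text only states that the rendered force is a smoothed, low-pass version of the error):

```python
def render_forces(v_desired, v_actual, alpha=0.2):
    """Force rendered per event: the tracking error (Vm - Va) passed
    through a first-order low-pass filter so that sudden error jumps
    are felt as smooth force changes. alpha is the smoothing factor."""
    forces, f = [], 0.0
    for vm, va in zip(v_desired, v_actual):
        f = alpha * (vm - va) + (1.0 - alpha) * f
        forces.append(f)
    return forces
```

A sudden stop (actual velocity dropping to zero) thus produces a force that rises gradually rather than a jolt, which is exactly why the rendered force and the raw tracking error do not match point for point.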
Considering the sum of the actual velocity and the force, it should be equal to the desired velocity. This is what Figure 7.20 shows: the first row presents the desired velocities and the second row shows the sum of the forces and the actual velocities. It is clear that the plots are similar although not identical. The difference is due to the fact that the forces rendered are a filtered version of the tracking error, as seen in Figure 7.21. This figure shows the tracking error and the actual force felt, which is clearly a low-pass-filtered version of the error. This filtering is done to smooth the force felt and eliminate abrupt changes. That is why the plots in Figure 7.20 are not identical: whenever the tracking error changes suddenly, the force changes slowly; hence the differences in these plots occur at the points where the actual velocity is directly reduced to zero, causing a big sudden increase in the tracking error. In addition, the system is event-synchronized, since the forces reflect the current distance conditions.

As for the second case, where an additional round trip delay of 2 seconds is induced in the system, the results are shown in Figure 7.22. This figure's layout is similar to the one shown in Figure 7.19.

Figure 7.22: The behavior of the system under 2 seconds of round trip delay.

Figure 7.23: Comparison between the desired velocities and the sum of the actual velocities and the forces felt under 2 seconds of round trip delay.

These plots show that the system has a similar behavior
to the case where no additional delay exists. The actual velocity tracks the desired one until an obstacle becomes close; then the actual velocity starts deviating from the desired one, increasing the tracking error. This increase raises the force felt. The opposite happens when the robot moves away from an obstacle: the tracking becomes closer, and the error, and thus the force, decreases. Again, adding the force and the actual velocity gives a result similar to the desired velocity, as seen in Figure 7.23. The cause of the differences is again the filtering that was mentioned before and seen in Figure 7.24.

Figure 7.24: Comparison between tracking error and force felt for the 2 seconds delay operation.

The third case includes simulated random time delay, ranging from 1 to 4 seconds. The results of one such experiment are shown in Figure 7.25, which has a layout similar to Figure 7.19. These results are similar to the results of the previous two cases. The tracking error, and hence the force, is a function of the distance detected: the closer the distance, the more force is felt and the slower the robot moves, until it comes to a complete stop. Then, as the distance increases, the force decreases and the robot follows the commands more closely. A plot of the sum of the forces and the actual velocities, together with the desired velocities, is shown in Figure 7.26. Here too, the errors are due to the filtering illustrated in Figure 7.27. These results show that the robot is stable and that the operator is event-synchronized with the robot, since the force is felt at the same event reference at which the obstacle is detected. More importantly, this stability and event-synchronization are unaffected by the time delay, which can be constant or random.
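The random delay used in this third case can be reproduced with a small sketch like the following (illustrative only; the uniform distribution, the seed, and the re-sorting to model packet overtaking are all assumptions — the text only states that the simulated round trip delay ranged from 1 to 4 seconds):

```python
import random

def delayed_arrivals(send_times, low=1.0, high=4.0, seed=42):
    """Add a random delay in [low, high] seconds to each command's send
    time and return the arrival times in order; sorting models a later
    packet overtaking an earlier one under nondeterministic delay."""
    rng = random.Random(seed)
    return sorted(t + rng.uniform(low, high) for t in send_times)
```

Feeding commands through such a model lets an event-based controller be exercised under nondeterministic delay without a real network.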
Figure 7.25: The behavior of the system under random round trip delay, ranging from 1 to 4 seconds.

Figure 7.26: Comparison between the desired velocities and the sum of the actual velocities and the forces felt with simulated random delay.

Figure 7.27: Comparison between tracking error and force felt for random delay operation.

Teleoperation from Hong Kong

As for the experiments done with the Robot Control Lab in Hong Kong, the control frequency³ experienced was between 1.4 and 1.5 control cycles per second. Some of the results of these experiments are shown in Figure 7.28 and Figure 7.29. These plots are similar in layout to the ones in Figure 7.19. In all of the following experiments, the operator is not constrained; complete freedom is given to move around in any direction or fashion, and the operator is allowed to get as close to obstacles as desired. The first plot in the first row is the time since the beginning of the connection versus the event-based reference, s, which is taken to be the control sequence number. It is clear that s is a non-decreasing function of time. The second plot in the first row is the desired velocity in the θ direction. The second row shows the desired velocities in the x and y directions. Under these, the actual velocities and the forces in the x and y directions are given. The last row gives the distances to the closest obstacles detected. Observing these figures, it is clear that the forces increase as the robot gets closer to objects and decrease as it gets further away.
Meanwhile, the opposite happens to the actual velocity: it becomes smaller than the desired one as the robot approaches an object, and it starts tracking the desired velocity again as the robot pulls away, which reflects the stability of the system. In addition, the actual velocity and the force rendered respond to the distance detected at almost the same event reference as this distance, which shows the event-synchronization between the operator and the robot. As discussed before, the sum of the actual velocity and the force should approach the desired velocity. This is shown in Figure 7.30, where the first and third rows show the desired velocity and the second and fourth rows show the sum of the force and the actual velocity. As expected, the two are not identical although they are very close. This is a result of the force being a low-pass-filtered version of the velocity tracking errors.

³Control frequency is the number of times per second that the control commands are sent and the feedback received; this gives a measure of events occurring per second.

Figure 7.28: The behavior of the system during the control from Hong Kong.
This filtering smooths the forces, as seen in Figure 7.31, which gives the tracking error in the first and third rows and the force rendered in the second and fourth rows.

Figure 7.29: The behavior of the system during the control from Hong Kong.

Figure 7.30: Comparison between the desired velocities and the sum of the actual velocities and the forces felt for the two Hong Kong experiments.

Figure 7.31: Comparison between tracking error and force felt for the two Hong Kong experiments.

To test the sensitivity of the system to the time of day and the network load, and to test for event-transparency, the experiments were repeated several times on different days. Figure 7.32, which has a layout similar to Figure 7.28, gives a set of data collected on a different day. In this figure the same tracking and obstacle avoidance behavior as in the previous cases is observed. As for Figure 7.33, the similarity between the desired velocity and the sum of the force with the actual velocity is clear. Figure 7.34 simply shows the tracking error and the actual force rendered, just to illustrate the filtering process. Since similar control responses were obtained regardless of the delay, a consistency is clear. This consistency is nothing but event-transparency.

Another design of this system was the one where the event-based reference was taken to be the distance the robot traveled. The results of teleoperating such a system are shown in Figure 7.35, which has a layout similar to Figure 7.28. It is clear that the performance of the system is similar to the previous cases in terms of stability and event-synchronization. As for Figure 7.36, it shows the similarity between the desired velocity and the sum of the force with the actual velocity. Figure 7.37 plots the tracking error and the actual force rendered, just to illustrate the filtering process.

Figure 7.32: The behavior of the system during operation from Hong Kong.

Figure 7.33: Comparison between the desired velocities and the sum of the actual velocities and the forces felt for teleoperation from Hong Kong.
Figure 7.34: Comparison between tracking error and force felt for teleoperation from Hong Kong.

Figure 7.35: The behavior of the system during operation from Hong Kong with event-based reference as distance.

Figure 7.36: Comparison between the desired velocities and the sum of the actual velocities and the forces felt with event-based reference as distance.

Figure 7.37: Comparison between tracking error and force felt for teleoperation from Hong Kong with event-based reference as distance.

Teleoperation from Japan

The same mobile robot system was teleoperated from Japan. In this case the event-based reference was taken to be the control sequence number. The results of this experiment are shown in Figure 7.38, which has a layout similar to Figure 7.28. As seen, the performance is similar to all the previous cases. The robot tracks the desired velocity as long as there are no close obstacles; once obstacles are detected, the robot slows down and the force rendered increases. This shows that the system is stable.
In addition, the response of the system is very fast with respect to the event-based reference, which reflects event-synchronization. Figure 7.39, which is similar to Figure 7.36, shows that the sum of the force and the actual velocity is very close to the desired velocity. As for Figure 7.40, it gives the tracking error in the first row and the force rendered in the second row, where it is clear that the force is simply a filtered version of the error.

All these experiments done with the mobile robot reveal several facts regarding the performance of event-based control and planning. It is clear that the system is stable under random delay conditions and under different choices of the event-based reference. Also, the system is event-synchronized, since the force rendered is a reflection of the most recent state of the robot. In addition, the system exhibits similar performance between the no-delay case, where it was operated over the local network, and the random delay case, whether simulated or real. Therefore, the system is event-transparent despite random delay. Note that this performance was obtained regardless of the operator, the environment or the delay.

Figure 7.38: The behavior of the system during operation from Japan.

Figure 7.39: Comparison between the desired velocities and the sum of the actual velocities and the forces felt for teleoperation from Japan.
Figure 7.40: Comparison between tracking error and force felt for teleoperation from Japan.

7.5 Mobile Manipulator: Velocity Control

The capability of mobile manipulation is key to many robotic applications. A mobile manipulator generally consists of a mobile platform and a robot arm. The research on mobile manipulators focuses on motion planning, modeling and control, and force control. Although the teleoperation of mobile robots and of manipulators has been discussed in the literature, the teleoperation of mobile manipulators is relatively new and more difficult. The general form of a mobile manipulator teleoperation system is depicted in Figure 7.41. In this system the operator sends velocity commands and the remote robot feeds back force and video information. This section presents the implementation of an event-based mobile manipulator teleoperation system along with experimental results.

Figure 7.41: General structure of a mobile manipulator teleoperation system.

7.5.1 System Implementation

The implementation discussed will cover both the hardware and the software details of the system developed.

Hardware

The hardware architecture of the system developed is shown in Figure 7.42. The joystick used is a programmable Microsoft SideWinder Force Feedback Pro with 3 degrees of freedom. The mobile robot is a Nomadic XR4000 and the manipulator is a PUMA 560. In addition, other components are used similar to the ones discussed in the general case.

Figure 7.42: Hardware architecture of a mobile manipulator teleoperation system.
Software

The software developed can be divided into three main parts: client, manipulator (PUMA) server and mobile server.

Client: After connecting to the PUMA server, the client sends velocity commands, Vm, to the mobile manipulator and relays force commands back to the joystick once feedback is received. Communication with the joystick is achieved with MS DirectX technology, and that with the server over the Internet using TCP.

PUMA Server: The PUMA server acts as a server for the client. Once it receives the velocity command from the client via the wireless Ethernet, it determines what part should be executed by the PUMA and what part should be executed by the mobile base. It then sends the required torque to the PUMA joint motors. At the same time, it forwards the velocity required of the mobile base to the mobile server. Communication between the two servers is done using TCP via two Ethernet cards connected to each other on the on-board PCs. In addition, this server uses the force/torque sensor connected to the gripper of the manipulator to detect forces applied on the arm and forwards these to the client to be played back to the operator. Besides the communication, the PUMA server is also responsible for the control of the robot hardware.

Mobile Server: This server receives velocity commands from the PUMA server and sends these commands to the mobile base motors. At the same time, it achieves obstacle avoidance by checking the sensors (sonar and laser) and stopping in case an obstacle is detected closer than 0.5 meter. In addition, the operator receives video feedback from the environment using VIC (Video Conferencing Tool), which was used to transmit video from the on-board and overhead cameras to the operator.

7.5.2 Experimental Results

Concerning the experiments done with this implementation, they are divided into two types.
The first type is where an operator controls the mobile manipulator locally, and the other type is where an operator in Hong Kong controls the robot via the Internet. Since both experiments gave similar results, and since the second scenario is more interesting, only results from the testing done with Hong Kong are included here.

Figure 7.43: Velocity performance of the mobile manipulator teleoperation system. (First row: desired velocities in x and y; second row: actual velocities; all plotted against the event-based reference.)

The experimental procedure had the operator in Hong Kong move the robot in the x and y directions randomly. Meanwhile, a person subjected the robot gripper to forces in the x and y directions randomly. The event-based reference was taken to be the control sequence number. The average frequency of communication was 3.1 Hz, which implies that on average there were 3.1 events/sec. The main interest was in how closely the actual velocity of the robot tip followed the desired velocity specified by the operator, and in how close the force felt by the operator was to the one detected by the force/torque sensor. In other words, how were the operator's intentions executed by the mobile manipulator and how was the environment felt by the operator, despite time delay and time delay variance. The results are shown in Figure 7.43 and Figure 7.44. In Figure 7.43 the first row gives the operator's desired velocity in the x and y directions with respect to the event-based reference. The second row is the actual velocity of the arm tip in the x and y directions with respect to the event-based reference.
Figure 7.44: Force performance of the mobile manipulator teleoperation system. (First row: force applied on the robot; second row: force felt by the operator; both plotted against the event-based reference.)

The first thing to note is that the desired velocity is a step function, since it is a sampled version of the continuous motion of the joystick, where the sampling rate is a function of the time delay faced and the advance of the event-based reference. More importantly, it is clear that the actual velocity is almost a continuous version of the desired one and is tracking it almost perfectly, which implies the stability of the system. As for Figure 7.44, the first row is the force detected by the force/torque sensor and the second row is the force felt by the operator with respect to the event-based reference. The force felt is clearly a step function that almost tracks the actual one.

It is clear from these results that the system is stable, since the actual behavior of the robot tracks the desired one and the force felt by the operator is very close to the actual one the robot is detecting. In addition, the system is event-synchronized, since the change in the rendered force occurs at almost the same reference as the change in the detected force.

7.6 Mobile Manipulator: Position Control

As discussed in the previous section, mobile manipulators have several uses and advantages. However, using velocity control as shown in the previous section does not allow for the use of all the degrees of freedom of the arm and lacks high accuracy. To achieve this, the Phantom haptic device is used as a master instead of the MS Force Feedback joystick. In this case, the operator supplies position increment commands and the robot feeds back the actual force resulting from the interaction with the environment. The general architecture of such a system is shown in Figure 7.45.
This system was tested using tasks such as pick-up and manipulation. The objective of this system is to illustrate the generality of the approach and to show that the system can use control commands other than velocity. This illustrates that the design is independent of the specific commands used. This system also renders tasks requiring high levels of accuracy feasible.

Figure 7.45: The architecture of a supermedia enhanced PUMA manipulator teleoperation system. (Remote site: Hong Kong/Japan; local site: Michigan State. Position commands travel one way over the Internet; force feedback travels the other.)

7.6.1 System Implementation

This section gives the details of the hardware and software utilized and implemented as part of this system.

Hardware

The hardware used is similar to the one shown in Figure 7.42. The only difference from the system discussed in the previous section is the use of a Phantom haptic device instead of the MS SideWinder Force Feedback Pro joystick.

Software

Similar to the system discussed in the previous section, the software developed in this case can be divided into three main parts: client, manipulator (PUMA) server and mobile server.

Client: After connecting to the PUMA server, the client sends position increment commands to the mobile manipulator and relays force commands back to the Phantom once feedback is received. The operator also has the capability of opening and closing the gripper.

Puma Server: The puma server acts as a server for the client. Once it receives the position increment commands from the client via the wireless Ethernet, it determines which part should be executed by the Puma and which part should be executed by the mobile base. It then sends the required torque to the Puma joint motors. At the same time it forwards the position required by the mobile base to the mobile server. Communication between the two servers is done using TCP via two Ethernet cards connected to each other on the on-board PCs.
In addition, this server uses the force/torque sensor connected to the gripper of the manipulator to detect forces applied on the arm and forwards these to the client to be rendered to the operator. In addition to the communication, the Puma server is also responsible for the control of the robot hardware.

Mobile Server: This server receives position commands from the Puma server and sends these commands to the mobile motors. At the same time it achieves obstacle avoidance by checking the sensors (sonar and laser) and stopping in case an obstacle is detected closer than 0.5 meter.

In addition, the operator receives video feedback from the environment using VIC (Video Conferencing Tool). VIC was used to transmit video feedback from the on-board and the overhead cameras to the operator.

7.6.2 Experimental Results

Several experiments were carried out from Hong Kong via the Internet. The operator commanded the manipulator using the Phantom haptic device, which was also used to render force information fed back from the environment. Since the control is event-based, there is a need to investigate the integrity of the force signal being sampled with respect to time. That is, there is a need to compare the fed-back force signal, which is sampled with respect to the event-based reference, with the continuous force sensed with respect to time. For this reason, Figure 7.46 gives in the first row the x, y and z components of the continuous force sensed with respect to time, and in the second row the x, y and z components of the discrete force fed back with respect to time. It is clear that the signals are very similar, with the fed-back signal shown in the second row being a low-pass filtered version of the continuous signal shown in the first row. In addition, to check the integrity of the received force signal compared to the fed-back signal, Figure 7.47 is provided.
Figure 7.46: Comparison between the continuous force sensed by the robot and the discrete force fed back, both with respect to time.

The first row plots the x, y and z components of the force signal fed back, and the second row plots the x, y and z components of the force signal received by the master, all with respect to the event-based reference. As seen, except for small errors due to rounding, the two signals are almost identical.

Since the force received is discrete, there is a need to smooth it out before rendering. To this end the force planner, which runs at a higher frequency than the communication loop, adjusts the desired force in small increments so that no sudden jerks are felt. This is illustrated in Figure 7.48, which compares the force received with the actual force rendered by the master, shown in rows one and two respectively. As expected, the actual force rendered is simply a smoothed version of the force received. These figures show that the rendered force is almost identical to the received force, which is almost identical to the fed-back force, which is almost identical to the actual force.
Figure 7.47: Comparison between the force fed back from the robot and the force received by the master, both with respect to the event-based reference.

Therefore, it is clear that the force sensed by the robot is sampled, transmitted, received and rendered uncorrupted.

As for the manipulator control, the comparison between the desired and actual positions is given in Figure 7.49. In this figure, the first row shows the x, y and z components of the desired position and the second row shows the x, y and z components of the actual position, all with respect to the event-based reference. For all the components the signals are very similar, with most of the errors being due to rounding and inaccurate position sensing. This shows that the system is stable under event-based planning and control regardless of the delay encountered.

Figure 7.48: Comparison between the discrete force received by the Phantom and the actual continuous force rendered by the Phantom, both with respect to time.
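The incremental smoothing performed by the force planner can be sketched as below. This is a hypothetical reconstruction: the step size and names are assumptions; the text specifies only that the planner runs faster than the communication loop and walks the rendered force toward the received force in small increments.

```python
# Sketch of the force planner's smoothing: move the rendered force toward
# the most recently received discrete force in bounded increments, so the
# operator feels no sudden jerks. The 10 mN step size is an assumption.

MAX_STEP_MN = 10.0  # assumed largest change per planner cycle, in mN

def smooth_step(rendered, received):
    """Move the rendered force one bounded increment toward the received force."""
    delta = max(-MAX_STEP_MN, min(MAX_STEP_MN, received - rendered))
    return rendered + delta

# A received jump from 0 to 35 mN is rendered over several planner cycles.
f, trace = 0.0, []
for _ in range(4):
    f = smooth_step(f, 35.0)
    trace.append(f)
print(trace)  # -> [10.0, 20.0, 30.0, 35.0]
```

Because the planner cycles several times per communication event, the staircase of received forces becomes the smooth ramp seen in the second row of Figure 7.48.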
Figure 7.49: Comparison between the desired position and the actual position, both with respect to the event-based reference.

7.7 Multi-Site Multi-Operator Tele-Cooperation

Most of the work relating to cooperation in robotics can be divided into two categories: human-robot and multi-robot. Human-robot cooperation occurs when a human and a machine work simultaneously to achieve a task that is impossible for the human or the robot to carry out alone. This approach tries to combine human intelligence with the robot's physical strength [108] [109]. Multi-robot cooperation refers to instances where two or more robots work together to finish an operation that might be difficult for one robot to perform, or an operation which, by its nature, might require group work [110]-[112].

Figure 7.50: The structure of a multi-operator at multi-site collaborative teleoperation system. (Site 1 sends manipulator commands and Site 2 sends mobile commands over the Internet; force feedback is returned to each site.)

Multi-operator at multi-site collaborative teleoperation is a combination of teleoperation and cooperation, where several operators at different remote sites collaboratively control a robot or several robots to achieve a certain task [113]. The motivation behind tele-cooperation is the existence of tasks that require several different areas of expertise to collaborate concurrently. In cases where these experts are at different locations, real-time collaborative teleoperation via the Internet is the answer. A special case of a tele-cooperation system is depicted in Figure 7.50, where two operators collaborate to bilaterally control a mobile manipulator.
This section details the implementation of an event-based multi-operator multi-robot teleoperation system along with experimental results.

7.7.1 System Implementation

Hardware and software details will be covered, followed by experimental results.

Hardware

Figure 7.51 presents the hardware architecture of a multi-operator multi-robot collaborative teleoperation system in which one operator controls the mobile base and the other controls the manipulator. The specifications of the different hardware components are similar to the ones discussed in the previous sections.

These components interact and communicate in the following manner. After each operator connects to a robot, the following sequence of events is repeated until one of the operators disconnects. The operators move their joysticks to a certain position, which corresponds to a velocity vector. The PC of Operator2, who is controlling the manipulator, generates the desired velocity vector, Vp, and sends it to the manipulator for execution. Meanwhile, the PC of Operator1, who is controlling the mobile base, generates the desired velocity vector, Vm, and sends it to the mobile base for execution. Once the on-board PC (AR2), which is connected to the manipulator, receives Vp, it sends this desired velocity to the joint motor controller for execution. Then AR2 waits for the other on-board PC (AR1), which is controlling the mobile base, to be ready to carry out the feedback exchange. In parallel, once AR1 receives Vm it sends this desired velocity to the wheel motor controller for execution. The controller does not execute Vm blindly but engages an obstacle avoidance algorithm. Meanwhile, AR1 waits for AR2 to exchange feedback: AR2 forwards Vp to AR1 and AR1 forwards Vm to AR2. After this exchange is done, AR2 and AR1 feed back Vm and Vp to Operator2 and Operator1 respectively.
This implies that the operators are aware of each other's intentions, which makes collaboration more efficient and safer.

Figure 7.51: Hardware architecture of the multi-operator at multi-site collaborative teleoperation system.

Software

The software developed can be divided into five main parts: mobile server, puma server, puma controller, mobile client and puma client.

Mobile Server: Its service is moving the robot and receiving feedback. However, the server does not execute requests blindly; it first checks the sensors and, based on their input, makes a decision according to an obstacle avoidance algorithm. Once the mobile server decides which velocity to set and sends it to the motors, it waits for the puma server to send the feedback Vp. It then sends Vm to the puma server and Vp to the mobile client.

Puma Server: This server is responsible for receiving the puma velocity commands, Vp, from Operator2. It forwards these commands to the puma controller. After that, the puma server sends Vp to the mobile server and receives Vm from it. Vm is then forwarded to the puma client.

Puma Controller: This controller receives the desired velocity, Vp, and controls the puma joint motors. The puma controller is designed at the task level. A singularity-free hybrid motion controller is used to avoid the singularities of the robot arm [79].

Mobile Client: This client sends commands, Vm, to the mobile server and relays force commands back to the joystick once feedback is received. Communication with the joystick is achieved with MS DirectX technology, and communication with the server is over the Internet.

Puma Client: This program sends velocity commands, Vp, to the puma server and relays force commands back to the joystick once feedback is received. Communication with the joystick is achieved with DirectX technology and with the server over the Internet.
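The cross-feedback exchange carried out by the two on-board PCs can be sketched as follows. This is a simplified illustration with hypothetical names; the text specifies only that AR2 forwards Vp to AR1, AR1 forwards Vm to AR2, and each operator then feels the other's intention.

```python
# Sketch of the intention exchange: AR1 holds the mobile command Vm, AR2
# holds the manipulator command Vp; after the swap, each operator's client
# renders the *other* robot's desired velocity as force feedback.

def exchange_intentions(vm, vp):
    """Swap intentions between the two on-board PCs."""
    feedback_to_operator1 = vp  # AR1 forwards Vp to the mobile client
    feedback_to_operator2 = vm  # AR2 forwards Vm to the puma client
    return feedback_to_operator1, feedback_to_operator2

print(exchange_intentions((0.1, 0.0), (0.0, 0.2)))
# -> ((0.0, 0.2), (0.1, 0.0)): each operator senses the other's intention
```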
Although force feedback is very helpful, it does not eliminate the need for visual feedback, which was supplied to the operators from an overhead camera using VIC.

7.7.2 Experimental Results

This section describes two of the experiments that were done, in which the mobile manipulator (Robotics and Automation Lab, Michigan State University), Operator1 (Robot Control Lab, Chinese University of Hong Kong) and Operator2 (Nagoya University, Japan) were connected via the Internet. The delay experienced between these sites is random with no specific pattern or model. These two experiments encompass the two most popular modes of collaboration: master-slave, in which one operator leads the task, and master-master, in which the two operators help each other without having a leader.

In the master-slave operation, one of the operators is requested to follow the force felt. This implies that the slave will eventually track the motion of the master. The results of one such experiment are seen in Figure 7.52, where the operator in Japan is controlling the mobile base (slave) and the operator in Hong Kong is operating the puma (master). The results show that the desired velocity of the mobile base tracks that of the puma in real-time, which reflects stability.

Figure 7.52: Multi-operator at multi-site collaborative teleoperation system behavior while being controlled from Hong Kong and Japan.

The top row of Figure 7.52 shows a plot of time versus s, the event-based reference. It is clear that s is a non-decreasing function of time, which is crucial for system stability.
The other plot in the top row shows the desired rotational velocity. The second row shows the desired velocities of the mobile base in the x and y directions, Vm. The third row plots the desired velocities of the puma in the x and y directions, Vp. The plots of Vm and Vp also correspond to the forces fed back to Operator2 and Operator1 respectively. The last row is the error between the mobile and the puma desired velocities in both directions. The main points to note are the synchronization and the fast response. It is clear that both robots are event-synchronized, since the shift in direction occurs at almost the same s value. The fast response is clear from the sharp decrease in the error between the velocities of the two robots.

Figure 7.53: Multi-operator at multi-site collaborative teleoperation system behavior while being controlled from Hong Kong and Japan.

The results of a similar experiment are shown in Figure 7.53. In this experiment the operator in Japan is controlling the puma (slave) and the operator in Hong Kong is operating the mobile base (master). The layout of this figure and the results it shows regarding stability and event-synchronization are similar to Figure 7.52.

As for master-master collaboration, the two operators work together to achieve a certain task without having a leader. Each operator is free to follow or not, based on what is decided to be more appropriate. Since both operators are collaborating to achieve the same job, most of the time the commands are going to be similar.
The only differences are going to occur when there exist multiple paths to the destination. In this situation one of the operators has to compromise in case a conflict is detected. A basis for this compromise might be to follow the mobile base while far from the destination and to follow the puma when close to the destination. An example of this mode was tested, where the operators' task was to navigate to a table and touch a bottle placed on it. The experiment was accomplished successfully, even with low video feedback quality and despite the visual limitations.

7.8 Multi-Site Multi-Operator Tele-Coordination

This section details the implementation and experimental results of a multi-site multi-operator tele-coordination system. The experiments were conducted using two mobile manipulators, as seen in Figure 7.54 [114], and according to the general architecture shown in Figure 7.55. In this setup, each operator sends position increment commands and receives force and video feedback.

Figure 7.54: The experimental setup.

Figure 7.55: The architecture of a supermedia enhanced multi-mobile manipulator tele-coordination system. (Two remote sites, Hong Kong/Japan, each send position commands and receive force feedback over the Internet.)

The robots and the remote stations communicate via the Internet with no assumptions or knowledge regarding the delay characteristics. This system was tested between Michigan State and Hong Kong. This setup created the worst-case differential in the delay faced, since the operator at Michigan State faces relatively less delay than the operator in Hong Kong. This allowed for an evaluation of the efficiency and performance of the event-synchronization algorithm implemented, which proved to increase the efficiency of the coordination.

7.8.1 System Implementation

The hardware and software details will be discussed, followed by the experimental results.

Hardware

The general architecture of the system, which resembles the model shown in Figure 5.7, is shown in Figure 7.56. The two operators use Phantom haptic devices to generate 3-dimensional position increments for the mobile manipulators. Once the mobile manipulators receive a command, they exchange the event reference in order to maintain synchronization. Then, if the force detected is equal to or greater than the maximum allowed, that is, the coordination index, the direction of desired motion is compared to the direction of the force. If the desired motion is in the force-reducing direction the command is executed; otherwise it is discarded. Meanwhile, the 3-dimensional force detected is fed back to the operator to be rendered using the Phantom device. The various hardware components have specifications similar to the ones discussed in previous sections.

Figure 7.56: The architecture of the multi-operator multi-robot tele-coordination system implemented. (Operators in Hong Kong and at Michigan State use Phantom devices; each mobile manipulator carries an on-board PC, a force sensor and joint motors.)

Software

Most of the software used in this experiment is similar to the one described in Section 7.6. The only exception is that the manipulator server monitors the force detected and, in case it is larger than the coordination index, the motion is stopped. New commands will only be executed if they are in the force-decreasing direction. Another difference is the added synchronizing server running on each of the robots. In this implementation, instead of the clients connecting to the PUMA server directly, they connect to a synchronizing server. This server receives the commands from the client, waits for the other server to receive the commands from the other client, and then forwards the commands to the PUMA server.
The synchronizing servers execute a two-way handshake in order to synchronize at each command arrival. In turn, the PUMA servers feed the force back to the synchronizing servers, which forward it to the corresponding clients. In addition, visual feedback was supplied from two different view angles.

7.8.2 Experimental Results

Several experiments were conducted via the Internet, in which the mobile manipulators picked up and manipulated a metal rod. Once the rod was picked up, the two robots calibrated their force sensors in order to deduce the weight of the object carried by each of them. This weight was then used to deduct the weight of the object and the inertial forces from the measurements. Three configurations were tested: the first had the two robots holding the rod from the same side, the second had them holding it from opposite sides, and the third included a human locally applying forces to the rod. During the first two scenarios the operators had a predefined pick-and-place task to accomplish, and in the third scenario the operators were simply required to follow the forces applied locally by the human.

Figure 7.57 shows the results using the second scenario, with the robots on opposite sides of the rod.

Figure 7.57: Tele-coordination results of two mobile manipulators moving a metal rod.
The first column of figures depicts the data from the mobile manipulator controlled from Hong Kong, and the second column depicts the data from the one controlled from Michigan State University. The x, y and z components are plotted with respect to the event-based reference in rows 1, 2 and 3 respectively. The gray line in these plots indicates whether the controller allows motion at that event-based reference. A high value of the waveform corresponds to the case where the force detected is less than the coordination index, or the desired motion is in the direction of force reduction, and thus motion is permitted. A low value of the waveform corresponds to the case where the force detected is equal to or larger than the coordination index and the desired motion is in the direction of increasing force, and thus motion is not permitted. The dark line in these plots shows the distance traveled at that particular event.

Figure 7.58: Tele-coordination results of two mobile manipulators moving a metal rod while a human is applying direct forces to the rod.

The task was accomplished successfully and, as expected, there is no motion at events where the controller decision is low, corresponding to no permitted motion. In other words, changes in position occur only during the controller's high state.
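The controller decision plotted as the gray waveform can be sketched as a simple gate. This is an illustrative reconstruction: the names, the 5 N coordination index, and the one-dimensional sign convention (yielding to the sensed force is taken as the force-reducing direction) are all assumptions, not the dissertation's code.

```python
# Sketch of the coordination-index gate behind the gray "controller decision"
# waveform. Values and the sign convention are assumptions: here a positive
# force means the rod pushes the gripper in +x, so moving in +x reduces it.

COORDINATION_INDEX_N = 5.0  # assumed maximum allowed force, in newtons

def motion_permitted(force, increment):
    """High (True): force below the index, or desired motion reduces the force.
    Low (False): force at/above the index and motion would increase it."""
    if abs(force) < COORDINATION_INDEX_N:
        return True
    return force * increment > 0  # moving with (yielding to) the force reduces it

print(motion_permitted(2.0, -0.01))  # -> True: below the coordination index
print(motion_permitted(6.0, -0.01))  # -> False: would push against the force
print(motion_permitted(6.0, +0.01))  # -> True: yields to the force
```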
The only discrepancies are due to measurement errors and noise, and they do not exceed a few millimeters. So tele-coordination was accomplished with a prespecified coordination index.

Figure 7.58 shows the results using the third scenario, with the robots on the same side of the rod and a human applying forces to the rod. The layout and results are similar to the ones presented in Figure 7.57.

Figure 7.59 shows the results from an experiment done according to the third scenario, where the operators were required to follow the force applied by a local human on the object being manipulated.

Figure 7.59: Two mobile manipulators position tracking of the force applied by a human on the manipulated object.

Plots in rows 1 and 3 are the forces in newtons detected by each mobile manipulator along the z axis with respect to the event-based reference. The plots in rows 2 and 4 are the corresponding z positions of the robots in cm with respect to the event-based reference. As seen, for both robots the z positions shown in rows 2 and 4 track the forces sensed shown in rows 1 and 3. The longer the force is applied in a certain direction, the more significant the motion is for both robots. Therefore, the two mobile manipulators are capable of tracking external forces while maintaining the required coordination index.

The experimental results show that, under the proposed control strategy, multiple robots can successfully tele-coordinate while maintaining a certain coordination index irrespective of the delay faced. This is also independent of the configuration used, the external forces applied, or the object being manipulated.

7.9 Tele-autonomous Control of Robot Formations

Despite all the advances in artificial intelligence, it is still inferior to human intelligence. However, there are situations in which artificial intelligence excels and can be used to free humans from tedious tasks. Therefore, a combination of both human and machine control is ideal for many scenarios [115]. In addition, the efficiency of operation can be significantly increased by employing multiple robots in a formation and by enhancing the operator's coupling with the environment through supermedia feedback [116]. A general setup of such a system can be seen in Figure 7.60.

Several applications from different fields motivate this research [116]. Military applications are among the driving forces for this technology: one operator capable of remotely monitoring and, if needed, controlling several vehicles is a significant part of future combat systems. Another application is search-and-recovery tasks, where one operator can scan a wider area by deploying multiple robots in a formation and monitoring the sensory feedback from all of them. Once a "hot spot" is sensed, the formation can be converged toward the target using teleoperation. Industrial applications also exist; for instance, inspections of large facilities can be done using formations of robots assisted by human intervention if required. In addition, clean-up and surveying tasks can be made easier and more efficient using such technology.

Figure 7.60: The general architecture of a tele-autonomous formation control system.

Combining advantages from different technologies also combines the challenges of these technologies. The challenges faced are those of formation control and teleoperation, specifically over the Internet. The challenge of formation control is the establishment of a coordination scheme for heterogeneous robotic systems. The difficulty in accomplishing this relates to the uncertainties of the environment.
In addition, there is the challenge of control transition from autonomous operation to teleoperation and vice versa, which has to be done on the fly with no re-initialization or replanning required. In most existing planning and control schemes for tele-autonomous operation, autonomy and teleoperation are mutually exclusive modes of operation. That is, the system is either in autonomous mode or teleoperation mode, and once it is in teleoperation mode it requires complete replanning to switch back to autonomous operation. To overcome these difficulties, event-based planning and control is used for both the formation and the teleoperation [117]. Tele-autonomous operation, which is considered in this research, combines autonomous operation and teleoperation. This approach takes advantage of the speed, repetitiveness and accuracy of autonomous operation and the flexibility, creativity and ingenuity of human intelligence. This approach is also known as watch-and-intervene, since the operator's main job is to monitor the formation and take action only in cases where the system faces unexpected situations. This reduces the workload of the operator and thus increases their attentiveness and efficiency. Combined with formation control, under tele-autonomous operation one person is able to simultaneously monitor and control multiple robots. In a tele-autonomous system there are three main modes of operation: either the formation is operating completely autonomously, or the formation is sharing control in real-time with the operator, or the operator has complete control of the operation. Let the autonomous function of robot motion be described by Yd(s) and the human generated commands from the joystick be described by Yhd(s), where s is the motion reference parameter.
Based on the three modes of operation discussed previously, the resulting command, which is a combination of the autonomous and the supervisory commands, can be expressed as:

            { Yd(s)              (pure autonomy)
    Y(s) =  { Yd(s) + Yhd(s)     (shared control)        (7.1)
            { Yhd(s)             (pure teleoperation)

The first case is pure autonomy and the desired command is based on the autonomous plan. In this case no operator intervention is required and thus Yhd(s) = 0. However, once an unexpected event is encountered, the system might request the supervisor's help or the supervisors themselves might elect to intervene. This is decided based on the state of the formation and the feedback sensed by the operator. In the case of an intervention there are two possibilities: either the command is a combination of the autonomous plan and the operator's commands, or it is entirely based on the supervisory commands.

7.9.1 System Implementation

The formation control was implemented on a mobile robot and mobile manipulator setup, which included teleoperation capabilities. During tele-autonomous operation, the operator sends velocity commands to the formation and a force is fed back. This force is given by

    F = k/d        (7.2)

where d is the shortest distance to obstacles in the direction of motion and k is a scaling factor. It is clear that the force in this case is inversely proportional to the distance to obstacles. So when the robot formation gets closer to an object, the force sensed increases accordingly.

Hardware

The hardware architecture of the formation tele-autonomous operation system is shown in Figure 7.61. The system is composed of a mobile manipulator and a mobile robot. The joystick used to control the formation is an MS force feedback joystick. The specifications of the different hardware components are similar to what has been discussed in the previous sections.

Software

The software developed for this system consists of three main routines: formation client, mobile/manipulator server and mobile server.
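Before detailing the three routines, the three-mode command combination of Eq. (7.1) and the inverse-distance force of Eq. (7.2) can be sketched in code. This is an illustrative sketch only, not the dissertation's implementation: the function names, the mode labels and the default scaling factor are assumptions.

```python
def combined_command(y_d, y_h, mode):
    """Resulting command for the three operating modes of Eq. (7.1).

    y_d  -- autonomously planned command Yd(s) at the current
            motion reference s
    y_h  -- human (joystick) command Yhd(s); zero when the operator
            does not intervene
    mode -- "autonomous", "shared" or "teleoperation"
    """
    if mode == "autonomous":
        return y_d           # pure autonomy: Yhd(s) = 0
    if mode == "shared":
        return y_d + y_h     # operator assists the planned motion
    if mode == "teleoperation":
        return y_h           # operator has complete control
    raise ValueError("unknown mode: %s" % mode)


def feedback_force(d, k=1.0):
    """Force reflected to the operator, Eq. (7.2): F = k / d.

    d -- shortest distance to obstacles in the direction of motion
    k -- scaling factor (illustrative default; tuned per system)
    """
    return k / d
```

For example, `combined_command(1.0, 0.5, "shared")` yields 1.5, while a halving of the obstacle distance doubles `feedback_force`, matching the inverse-proportionality noted above.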
Formation Client: In case human intervention is required, this client sends velocity commands to the formation via the mobile/manipulator server. It is also responsible for receiving force information from the mobile/manipulator server and rendering it.

Figure 7.61: The architecture of the tele-autonomous robot formation system developed.

Mobile/Manipulator Server: This server runs on the mobile manipulator, which is the front robot in the formation. In case there is no human intervention, this server autonomously executes the original desired path. If the human operator intervenes, this server receives velocity commands from the client, forwards these commands to the mobile server, controls the mobile manipulator and feeds back the force information to the client. Control is done according to the general obstacle avoidance algorithm.

Mobile Server: This server runs on the mobile robot, which is the one at the back of the formation. It receives the velocity commands from the mobile/manipulator server and controls the mobile robot. The commands are either autonomously generated or based on the human remote control. Commands are modified in order to maintain a fixed distance between the two robots and to avoid obstacles. In case this robot is blocked, a procedure exists for it to inform the front robot to wait [117]. As with all the experiments, VIC is used to feed back several video streams from the environment.

7.9.2 Experimental Results

The result of tele-autonomous operation is shown in Figure 7.62; the robots move autonomously in a sine wave formation until an obstacle is detected. At this point the operator intervenes to move the formation away from the obstacle.
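The mobile server's modification of the forwarded commands to hold a fixed separation behind the front robot might look like the following sketch. The proportional correction, the names and the saturation limit are illustrative assumptions, not the actual control law used in the experiments.

```python
def follower_velocity(v_cmd, separation, desired_separation,
                      gain=0.5, v_max=1.0):
    """Velocity for the rear (follower) robot in the formation.

    The commanded velocity v_cmd, received from the front robot's
    server, is adjusted so the follower closes or opens the gap
    toward the desired fixed separation.

    v_cmd              -- velocity forwarded by the front robot
    separation         -- measured distance to the front robot
    desired_separation -- fixed distance to maintain
    gain, v_max        -- hypothetical tuning and platform limit
    """
    # speed up when falling behind, slow down when too close
    correction = gain * (separation - desired_separation)
    v = v_cmd + correction
    # saturate to the platform's velocity limit
    return max(-v_max, min(v_max, v))
```

With this kind of rule, a follower that drifts behind automatically catches up, so the fixed inter-robot distance is maintained whether the commands are autonomously generated or come from the human remote control.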
Plots (a) and (b) of the figure show the trajectories of both robots, plots (c) and (d) show the closest distance detected by the front robot with respect to time, and plots (e) and (f) show the forces fed back to the operator in the x and y directions respectively, both with respect to time. These plots show that once an obstacle is detected, that is, the distance is closer than a certain predefined value as shown in plots (c) and (d), the force increases in the direction opposite to that of motion as shown in plots (e) and (f), where both forces increase as an obstacle is detected. In this case, the operator takes control on the fly and teleoperates the formation away from the obstacle. Once teleoperation is stopped, the formation's autonomous motion is resumed as seen in plots (a) and (b). These results confirm the advantages of the perceptive planning and control scheme, where autonomous formation control can be established with minimal error, where the formation can handle unexpected events with no need for replanning or resynchronization, and where tele-autonomous operation can be established on the fly with no need for control hand-over or replanning.

Figure 7.62: Experimental results under tele-autonomous operation with obstacles. Plots (a) and (b): trajectories of robots R1 and R2; plots (c) and (d): distance to obstacles from the front robot; plots (e) and (f): force along the x and y directions.

7.10 Supermedia Enhanced Teleoperation

Some applications require other types of feedback than force and video.
These might require supermedia, such as temperature and pressure, to be fed back. Since the theory developed in this research does not depend on the specific system, event-based control and planning can be used to design systems that satisfy these requirements. To illustrate the generality of the theory developed, a supermedia enhanced teleoperation system was developed similar to the one shown in Figure 7.63.

Figure 7.63: The architecture of a supermedia enhanced teleoperation system. Velocity commands flow from the local site (Michigan State) over the Internet to the remote site (Hong Kong/Japan), while video, temperature and force feedback flow back.

The operator sends velocity commands, which the mobile manipulator executes. This velocity is divided by the local controller between the arm and the mobile base. This division is done to achieve an acceptable posture for the arm, as discussed before [114]. The haptic feedback in this case is the actual force detected by the force/torque sensor mounted on the gripper. Also, temperature and real-time video are fed back. All the supermedia are rendered to the operator in their original forms.

7.10.1 System Implementation

The hardware and software developed for this system are presented in this section.

Hardware

The experiments were done using a mobile manipulator according to the system architecture shown in Figure 7.64. The hardware required for the mobile manipulator teleoperation is similar to the one discussed previously. This implementation also included the remote temperature sensing and rendering system.

Figure 7.64: The architecture of the supermedia enhanced teleoperation system developed.
Software

The programs required for this system are the mobile/manipulator and temperature server, the mobile/manipulator and temperature client, the video server and the video client.

Mobile/Manipulator and Temperature Server: This server runs on the robot and is responsible for receiving the velocity commands and deciding the appropriate control for the manipulator and the mobile base. It is also responsible for sampling the temperature sensor and the force/torque sensor in order to feed back this information to the mobile/manipulator and temperature client. In addition, it forwards the event number to the video server to be used on the client side for synchronization.

Mobile/Manipulator and Temperature Client: This client runs on the remote machine. It forwards velocity commands to the server and receives force and temperature information back. The force is rendered with the joystick and the temperature is sent to the temperature rendering device connected to the serial port. Also, this client forwards the event reference to the video client so that it can be compared with the reference that the frame is tagged with for synchronization purposes.

Video Server: This server tags each video frame with the event reference and forwards it to the video client.

Video Client: Before this client displays a frame, it compares the frame's event reference with the event reference of the force and the temperature currently being rendered. If the frame's event reference is much less than the force's and the temperature's event reference, then the frame is discarded. The limit of how old a frame can be and still be displayed is a performance parameter which has to be tuned.

7.10.2 Experimental Results

The performance of the system was tested under no delay and random delay conditions. The mobile manipulator was teleoperated locally from Michigan State University and remotely from the Chinese University of Hong Kong. The operators were asked to move the robot in any direction, and once they detected a force they were asked to command the robot in that direction. The results are shown in Figure 7.65 for the no delay case and in Figure 7.66 for the random delay case.

Figure 7.65: Performance of the system while being controlled from Michigan with no considerable delay.

Figure 7.66: Performance of the system while being controlled from Hong Kong with random delay.

The first row in each figure shows the forces detected in the x and y directions with respect to the event reference; the second row gives the actual velocity of the robot in the x and y directions with respect to the event reference. The event-based action reference in this system is taken to be the distance the robot moved. As seen, the velocity of the robot in both cases is responding to the force similarly. Once a force is applied on the robot, as seen in the first rows, the velocity changes direction to match that force, as seen in the second rows. For example, when a force is applied in the y direction the velocity changes to the y direction. Although different operators did these experiments, the performance is similar in both cases. This illustrates the implications of event-transparency, which states that if a system is event-transparent then the control is consistent regardless of random delay.
In addition, it is clear in both figures that the control signal, which in this case is the velocity, is a reflection of the most current state of the system, that is, the force detected. This is seen since there is no difference between the reference at which the force is detected and the reference at which the velocity responds to it. This implies that the system satisfies part of the event-synchronization requirements. As for the event-synchronization requirements that relate to the supermedia being rendered, the results are shown in Figure 7.67. This figure gives the temperature sensed on the robot, received remotely and rendered to the operator with respect to the event reference in the first column. It also gives the forces detected and received with respect to the event reference in the second column.

Figure 7.67: Performance of the system while being controlled from Hong Kong with random delay. First column: temperature detected on the robot, received and rendered (w.r.t. event). Second column: forces detected and received in the x and y directions (w.r.t. event).

As seen, the received signals are identical to the detected ones with respect to the event reference. Also, all the supermedia streams are event-synchronized, which implies that the force and temperature detected at a certain event reference value are rendered at the same event reference value. For example, the force and temperature detected at event reference s_n are rendered to the operator at the same event reference s_n. Note that the rendered temperature plotted is slow with respect to the detected temperature. This is due to the fact that the external temperature sensor used to detect the temperature of the rendering device has a slow response. Similarly, the video is event-synchronized with the force and the temperature. However, since video feedback is slower than force and temperature feedback, a certain tolerance has to be allowed. So the frame generated at s_n can be rendered at any event reference within s_n + N, where N is the accepted tolerance. If the frame does not arrive within this margin then it is discarded. So what the operator will be seeing cannot be older than N events from what is being felt. This is seen in Figure 7.68, where the first row shows the force and the frame numbers with respect to the events at which they were sampled, and the second row shows the force and the frame numbers with respect to the events at which they were rendered. It is shown that the frames were either rendered within 10 events of the event at which they were sampled, and that is shown by a bar, or they were discarded, and that is depicted by a circle on the axis, such as the ones for frames 1, 7 and 8. This results in the video being event-synchronized with the other supermedia streams with a certain tolerance.

Figure 7.68: The sampling and rendering of the video frames and the force with respect to the event-based action reference.

The experimental results presented confirmed the analysis made regarding event-transparent event-based control teleoperation systems. This implies that the performance of the system is consistent in the face of random time delay. Also, the supermedia streams fed back and the control signal are event-synchronized.
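The video client's tolerance check (discard any frame more than N events older than the force and temperature currently being rendered) can be sketched as a single predicate. This is a minimal sketch, assuming integer event references; the function name is illustrative, and the default tolerance of 10 events echoes the value reported in the experiment.

```python
def should_render(frame_event, current_event, tolerance=10):
    """Decide whether a tagged video frame may still be rendered.

    frame_event   -- event reference s_n the frame was tagged with
                     by the video server
    current_event -- event reference of the force and temperature
                     currently being rendered
    tolerance     -- accepted age N in events (a tuned performance
                     parameter)

    A frame older than N events is discarded, so the operator never
    sees video more than N events behind what is being felt.
    """
    return current_event - frame_event <= tolerance
```

In Figure 7.68's terms, a frame passing this check is rendered (a bar), while one failing it, such as frames 1, 7 and 8, is discarded (a circle on the axis).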
7.11 Micro Manipulator

Micro-Electro-Mechanical Systems (MEMS) technology makes it possible to sense and act in micro-environments [118] [119]. Combining the Internet and MEMS, it is possible for humans to sense and act in a remote micro-environment. This new technology has potential impact on several fields, such as biomedical engineering and manufacturing. However, in order to avoid breaking or damaging micro objects during the manipulation process, force reflection is an essential component within the control structure of micro-manipulation and micro-assembly systems. The supermedia feedback (force and video) supplied in this experiment links and couples the operator to the environment. This coupling increases the efficiency and safety of manipulation at microscopic levels. It is similar to magnifying not only the visual but also the sensing capabilities of the operator. Therefore, the force, which conveys significant information to the operator with minimal overhead, is a desirable form of supermedia feedback in micro-teleoperation systems. A micro manipulator teleoperation system with force feedback is shown in Figure 7.69 [120]-[122].

Figure 7.69: The structure of a micro-manipulator teleoperation system. Commands travel from Michigan State over the Internet to the sensor in Hong Kong, while force and video feedback travel back.

This section gives the implementation details and experimental results of such a system.

7.11.1 System Implementation

The implementation discussed in this section concentrates on the teleoperation part of the system and does not include many details about the micro manipulator or micro sensor [120]-[122].

Hardware

The hardware architecture of the system is shown in Figure 7.70. The joystick is the MS force feedback joystick similar to the one used in the previous experiments. The hardware includes a camera capable of feeding video from a micro scale.
In addition, a PVDF (polyvinylidene fluoride) micro-tip was used as a piezoelectric sensor for the force [120]-[122]. This tip is about 2.5 millimeters long with about 0.8 millimeters at the triangular base. The output from this sensor is amplified using an inverting amplifier with a feedback gain of 50. Its signal is then fed to an 8255 analog-to-digital conversion (ADC) card connected to a PC for signal transmission to the Internet. This experimental setup is located at the Advanced Microsystems Laboratory (AML) of The Chinese University of Hong Kong. The sensor tip is attached to an x-y computer-controlled positioning table, which can be controlled via the Internet by a force reflection joystick in the Robotics and Automation Laboratory (RAL) at Michigan State University. A cantilever is attached to a vibration drum and has a tip vibration of 100 millimeter to 1 millimeter over the frequency range of 1 Hz to 120 Hz. The AML sensor tip position can be manipulated by the RAL joystick to contact the vibrating cantilever. The RAL operator observes the AML tip position using video conferencing software. The force of the vibrating cantilever sensed by the tip is sent to RAL via the Internet. Once the force is received, the force feedback joystick renders it. After that, the operator generates a new movement command to be sent to the sensor via the Internet.

Figure 7.70: Hardware architecture of the micro-manipulator teleoperation system.

Software

The software developed is a motion server and a client.

Motion Server: This server receives the position increment commands from the client in both the x and y directions. These are forwarded to the x-y table for execution, and at the same time the sensor is sampled for the force in the y direction. The force is then sent to the client.
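The motion server's request cycle (receive an x-y increment, execute it on the table, sample the force, reply) can be sketched as follows. This is an illustrative reconstruction, not the original code: the class names, the `handle_request` helper and the wire format of two network-order doubles in and one double out are assumptions, and the socket transport that delivers each request is elided.

```python
import struct


class FakeTable:
    """Stand-in for the x-y computer-controlled positioning table."""

    def __init__(self):
        self.x = 0.0
        self.y = 0.0

    def move_by(self, dx, dy):
        # execute the position increment
        self.x += dx
        self.y += dy


class FakeSensor:
    """Stand-in for the PVDF force sensor sampled in the y direction."""

    def sample_y(self):
        return 1.5  # a fixed dummy force reading


def handle_request(payload, table, sensor):
    """Core of the motion server: unpack a (dx, dy) increment,
    execute it on the table, sample the force and pack the reply."""
    dx, dy = struct.unpack("!dd", payload)
    table.move_by(dx, dy)
    return struct.pack("!d", sensor.sample_y())
```

In a real server, `handle_request` would be invoked once per packet received from the RAL client, with the packed force sent back over the same connection.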
Client: The client translates the joystick position into increment commands and sends these to the server. It also receives the force feedback and renders it to the operator. In addition, real-time video was fed back from the micro-environment using VIC.

7.11.2 Experimental Results

The experimental results presented here relate to the testing done between Hong Kong and Michigan State. Figure 7.71 shows a plot of the desired position increments in both directions, in the first two rows, and a plot of the rendered force with respect to the event, in the third row.

Figure 7.71: Plots of the desired position increments and the force felt by the operator.

The commands sent are random, which is typical of a teleoperation scenario. This makes approaches based on prediction of forces or virtual forces unrealistic. Therefore, the actual force had to be sensed and fed back. Figure 7.72 presents plots of the force felt by the operator in the first row, the force sampled from the sensor in the second row and the error between them in the third row, all with respect to the event-based reference. As seen, the force felt closely follows the one sampled from the sensor, which implies that the system is stable. Also, the force rendered responds at a close event to the changes in the force sensed, which reflects event-synchronization.

Figure 7.72: Comparison between the force felt and the one sensed.
7.12 Summary of Experimental Results

In summary, all these experiments show that the event-based planning and control developed as part of this research is a general framework for the design of real-time supermedia enhanced teleoperation systems. This framework is independent of the specific robot used, the human operator, the environment model or the communication medium. In addition, the experimental results reveal that event-based controlled real-time teleoperation systems maintain stability in the face of random delay. Also, these systems are event-transparent, which offers consistency in performance, and event-synchronized, which increases efficiency and safety. So this framework ensures synchronization between the operator and the robot and between the different supermedia streams being rendered. The next chapter summarizes this dissertation, gives concluding remarks and details the future work.

CHAPTER 8

SUMMARY, CONCLUSIONS AND FUTURE WORK

This chapter gives a summary of this document. It also gives concluding remarks and observations regarding the research conducted. Then additional improvements to the approach and future work are suggested.

8.1 Summary

This dissertation started by motivating Internet-based teleoperation, and specifically systems that include supermedia feedback. It also discussed the many challenges and difficulties faced in designing and implementing such systems; these challenges include random delay, human-machine interfacing, non-determinism, data loss and others. Also, a detailed literature review was given that covered most aspects of this research. Then the Internet as a real-time control medium was examined. The characteristics of Internet communication and their effects on the performance of teleoperation systems were covered, effects such as instability, loss of transparency and desynchronization. This was followed by the contributions of the research done, which can be divided into theoretical and experimental.
The theoretical contributions included the following:

o Event-based planning and control for real-time bilateral teleoperation systems was introduced. This control method was analyzed in terms of stability. Also, the event-transparency and event-synchronization concepts were developed and their implications analyzed. It was shown that event-based teleoperation systems, under certain conditions, satisfy stability, event-transparency and event-synchronization regardless of the randomness of delay.

o The modeling of event-based bilateral teleoperation systems using dynamic equations was detailed. Also, Petri Nets were proposed as a modeling and analysis tool for such systems. Then these modeling and design approaches were compared with another popular method in the literature.

o The design methodologies that guarantee stability, event-transparency and event-synchronization for bilateral teleoperation systems were included. A new design approach based on Petri Nets was developed for event-synchronization.

o The importance of the developed concepts was illustrated by studying bilateral tele-coordination of multi-robots. The concept of a coordination index for bilateral teleoperation systems was developed. The design methodology for establishing tele-coordination with a certain small index was detailed.

o All the theory developed is independent of the time delay faced and applies to a wide range of systems operating in unknown environments with different human operators, tasks and types of feedback. All the methods do not require significant overhead and are easy to implement.

As for the experimental contributions, they included the following:

o Several different event-based bilateral teleoperation systems were implemented. To the best of our knowledge, these were one-of-a-kind systems that had not been developed previously.
o Several types of robots, ranging from mobile manipulators to micro manipulators, were bilaterally controlled with different types of supermedia feedback, which included haptic, temperature and video. To the best of our knowledge, this was the first time haptic and temperature information were fed back via the Internet.

o Experiments were carried out via the Internet using test beds that included the United States, Hong Kong and Japan.

o The experimental results confirmed all the theory developed and illustrated its benefits in the face of random nondeterministic delay.

8.2 Conclusions

Teleoperation is a very attractive and quickly growing research field. But this technology faces many difficulties. The most important one, which will always exist, is time delay. At first thought this might seem a trivial issue not affecting system performance. But it was shown that this delay would render the system unstable and degrade the system performance, especially when supermedia (sensory information: video, audio, haptic, temperature and others) is fed back. The work found in the literature has several limitations when it comes to supermedia enhanced teleoperation under nondeterministic delay conditions. Those limitations mainly relate to assumptions made regarding the delay; some take the delay to be constant, others do not consider delay at all. This research extended event-based control in order to apply to supermedia enhanced real-time teleoperation systems. It was shown theoretically and experimentally that this approach results in a stable teleoperation system regardless of the time delay encountered. In addition, the notions of event-transparency and event-synchronization were developed. The theory and experimentation revealed the implications of these performance properties, implications which include consistency and efficiency in the control despite random delay.
Moreover, modeling for such systems was accomplished using Petri Nets, which were also used for analysis and design. Detailed design procedures for stability, event-transparency and event-synchronization were given. The importance of those characteristics is illustrated using the concept of tele-coordination. It was shown that tele-coordination under certain performance conditions can be achieved by event-transparent and event-synchronous systems. The major contribution of this research is the generality of the theory developed. The approach developed and the analysis done are independent of the specific system being controlled. They are also independent of the environment model, the delay model and the human operator. In addition, this approach does not require any overhead and is easy to implement. Therefore, as shown by the several different experimental systems developed, any type of application can adopt this approach. Micro or macro scale applications can use event-based control for supermedia enhanced real-time teleoperation via the Internet or any communication medium. To the best of our knowledge, these are the first such supermedia enhanced teleoperation systems to be controlled in real-time via the Internet. This work will have an impact on the fields of robotics, communication, control and multimedia. It gives new modeling, analysis and design approaches that can be adapted for any of these fields. In addition, new measures such as event-transparency and event-synchronization compose an innovative way for evaluating performance. As for teleoperation specifically, this research elevates the level of hope for the feasibility of Internet-based real-time bilateral control systems. Since the research carried out is technology independent, and since most of the issues investigated, such as delay, will not vanish in the foreseeable future, this research will not become obsolete in the face of technological advances.
175 8.3 Future Work The future work suggested relates to some topics researched as part of this project and also to some topics that were not investigated. The following are the main suggested topics: 0 Additional work should be done relating to transparency from the operator’s perspective. Examining how realistic the supermedia feels compared to direct interaction with the environment. This measure should be objective and easy to evaluate. o The quality of service should be studied. Specifically, developing measures for quality of service in real-time teleoperation systems and investigating the kind of quality of service that can be guaranteed. a This field has much potential for the use of Neural Networks. Many parameters can be adapted using Neural Networks to better match the specifics of the system. A changing event-based reference can be used to adapt to a changing system or environment. 0 The use of wireless application protocol WAP should also be investigated in the remote sensing and control context [123]-[125]. Since wireless communication is a field growing at a fast pace, there is great interest in robotics applications based on WAP. 176 BIBLIOGRAPHY 177 [1] I2] [3] [4] [5] I6] [7] I8] BIBLIOGRAPHY W. Kim, B. Hannaford, and A. Bejczy, “Force—Reflection and Shared Compliant Control in Operating Telemanipulators with Time Delay,” IEEE Transactions on Robotics and Automation, Vol. 8, April 1992. M. Otsuka, N. Matsumoto, T. Idogaki, K Kosuge, T. Itoh, “Bilateral Tele- manipulator System With Communication Time Delay Based on Force-Sum- Driven Virtual Internal Models,” IEEE International Conference on Robotics and Automation, pp. 344—350, 1995. G. Leung, B. Francis, J. Apkarian, “Bilateral Controller for Teleoperators With Time Delay via u-Synthesis,” IEEE Transactions on Robotics and Automation, Vol 11, No. 1, February 1995. R. Anderson, M. 
Spong, "Asymptotic Stability for Force Reflecting Teleoperators with Time Delay," The International Journal of Robotics Research, Vol. 11, April 1992.

[5] G. Niemeyer, J. Slotine, "Stable Adaptive Teleoperation," IEEE Journal of Oceanic Engineering, Vol. 16, No. 1, January 1991.

[6] I. Elhajj, N. Xi, Y. Liu, "Real-Time Control of Internet Based Teleoperation with Force Reflection," IEEE International Conference on Robotics and Automation, San Francisco, 2000.

[7] J. H. Ryu, D. S. Kwon, B. Hannaford, "Stable Teleoperation with Time Domain Passivity Control," IEEE International Conference on Robotics and Automation, 2002.

[8] K. Park, W. K. Chung, Y. Youm, "Obtaining Passivity of Micro-Teleoperation Handling Small Inertia Object," IEEE International Conference on Robotics and Automation, 2002.

[9] D. Lawrence, "Stability and Transparency in Bilateral Teleoperation," IEEE Transactions on Robotics and Automation, Vol. 9, No. 5, October 1993.

[10] J. Speich, K. Fite, M. Goldfarb, "A Method for Simultaneously Increasing Transparency and Stability Robustness in Bilateral Telemanipulation," IEEE International Conference on Robotics and Automation, 2000.

[11] J. Speich, M. Goldfarb, "Implementation of Loop-Shaping Compensators to Increase the Transparency Bandwidth of a Scaled Telemanipulation System," IEEE International Conference on Robotics and Automation, 2002.

[12] H. Flemmer, B. Eriksson, J. Wikander, "Control Design for Transparent Teleoperators with Model Parameter Variation," IEEE International Conference on Robotics and Automation, 2002.

[13] T. Brooks, "Telerobotic Response Requirements," IEEE Conference on Systems, Man and Cybernetics, pp. 113-120, 1990.

[14] S. Salcudean, M. Zhu, W. Zhu, K. Hashtrudi-Zaad, "Transparent Bilateral Teleoperation under Position and Rate Control," The International Journal of Robotics Research, Vol. 19, December 2000.

[15] C. Yang, J.
Huang, "A Real-Time Synchronization Model and Transport Protocol for Multimedia Applications," Proceedings of IEEE INFOCOM, Canada, June 1994.

[16] T. Znati, R. Simon, B. Field, "A Network-Based Scheme For Synchronization of Multimedia Streams," Proceedings of the Workshop on Perspectives on Multimedia Synchronization, International Conference on Multimedia Computing and Systems, Washington, May 15-19, 1995.

[17] T. Little, A. Ghafoor, C. Chen, C. Chang, P. Berra, "Multimedia Synchronization," IEEE Data Engineering Bulletin, Vol. 14, No. 3, pp. 26-35, September 1991.

[18] P. Zarros, M. Lee, T. Saadawi, "Statistical Synchronization Among Participants in Real-Time Multimedia Conference," INFOCOM, pp. 912-919, Canada, 1994.

[19] C. Liu, M. Lee, Y. Xie, T. Saadawi, "Multimedia Multipoint Teleconference System with Adaptive Synchronization," IEEE Journal on Selected Areas in Communications, Vol. 14, No. 7, pp. 1422-1435, September 1996.

[20] Kurose, Jim and Keith Ross, Computer Networking: A Top-Down Approach Featuring the Internet, MA, USA: Addison-Wesley, 2000.

[21] D. Kwon, K. Y. Woo, H. S. Cho, "Haptic Control of the Master Hand Controller for a Microsurgical Telerobot System," IEEE International Conference on Robotics and Automation, Vol. 3, pp. 1722-1727, May 1999.

[22] M. Tanimoto, F. Arai, T. Fukuda, and M. Negoro, "Force Display Method to Improve Safety in Teleoperation System for Intravascular Neurosurgery," IEEE International Conference on Robotics and Automation, Vol. 3, pp. 1728-1733, May 1999.

[23] M. Ghodoussi, S. Butner, Y. Wang, "Robotic Surgery - The Transatlantic Case," IEEE International Conference on Robotics and Automation, DC, 2002.

[24] R. Luo, W. Z. Lee, J. H. Chou, and H. T. Leong, "Tele-Control of Rapid Prototyping Machine Via Internet for Automated Tele-Manufacturing," IEEE International Conference on Robotics and Automation, Vol. 3, pp. 2203-2208, May 1999.

[25] P. G. Backes, K. S. Tso, and G. K.
Tharp, "Mars Pathfinder Mission Internet-Based Operations Using WITS," Workshop on Internet Robotics, IEEE International Conference on Robotics and Automation, pp. 284-291, May 1999.

[26] L. Hsu, R. Costa, F. Lizarralde, J. Soares, "Passive Arm Based Dynamic Positioning System for Remotely Operated Underwater Vehicles," IEEE International Conference on Robotics and Automation, Vol. 1, pp. 407-412, May 1999.

[27] G. Hirzinger, B. Brunner, J. Dietrich, and J. Heindl, "Sensor-Based Space Robotics - ROTEX and Its Telerobotic Features," IEEE Transactions on Robotics and Automation, Vol. 9, No. 5, October 1993.

[28] S. E. Everett, R. V. Dubey, "Model-Based Variable Position Mapping for Telerobotic Assistance in a Cylindrical Environment," IEEE International Conference on Robotics and Automation, Vol. 3, pp. 2197-2202, May 1999.

[29] Robin Murphy, "Human-Robot Interaction in Robot-Assisted Urban Search and Rescue," Workshop on Human-Robot Interaction, IEEE International Conference on Robotics and Automation, DC, 2002.

[30] Gerard McKee, "The Development of Internet-Based Laboratory Environments for Teaching Robotics and Artificial Intelligence," International Conference on Robotics and Automation, DC, 2002.

[31] K. Goldberg, D. Song, Y. Khor, D. Pescovitz, A. Levandowski, J. Himmelstein, J. Shih, A. Ho, E. Paulos, J. Donath, "Collaborative Online Teleoperation with Spatial Dynamic Voting and a Human 'Tele-Actor'," International Conference on Robotics and Automation, DC, 2002.

[32] R. Anderson, M. Spong, "Bilateral Control of Teleoperators with Time Delay," IEEE Transactions on Automatic Control, Vol. 34, No. 5, May 1989.

[33] L. Williams, R. Loftin, H. Aldridge, E. Leiss, W. Bluethmann, "Kinesthetic and Visual Force Display for Telerobotics," IEEE International Conference on Robotics and Automation, DC, 2002.

[34] S. Ino, S. Shimizu, T. Odagawa, M.
Sato, "A Tactile Display for Presenting Quality of Materials by Changing the Temperature of Skin Surface," IEEE International Workshop on Robot and Human Communication, 1993.

[35] D. Caldwell, C. Gosney, "Enhanced Tactile Feedback (Tele-Taction) Using a Multi-Functional Sensory System," IEEE International Conference on Robotics and Automation, Atlanta, May 1993.

[36] L. Conway, R. Volz, M. Walker, "Teleautonomous Systems: Projecting and Coordinating Intelligent Action at a Distance," IEEE Transactions on Robotics and Automation, Vol. 6, No. 2, April 1990.

[37] G. Hirzinger, K. Landzettel, Ch. Fagerer, "Telerobotics with Large Time Delays - The ROTEX Experience," IEEE/RSJ International Conference on Intelligent Robots and Systems, Munich, Germany, pp. 571-578, September 1994.

[38] A. Bejczy, W. Kim, S. Venema, "The Phantom Robot: Predictive Displays For Teleoperation with Time Delay," IEEE International Conference on Robotics and Automation, pp. 546-551, Cincinnati, May 1990.

[39] R. Riedi, M. Crouse, V. Ribeiro, R. Baraniuk, "A Multifractal Wavelet Model with Application to TCP Network Traffic," IEEE Transactions on Information Theory (Special Issue on Multiscale Signal Analysis and Modeling), pp. 992-1018, April 1999.

[40] W. Leland, M. Taqqu, W. Willinger, D. Wilson, "On the Self-Similar Nature of Ethernet Traffic (Extended Version)," IEEE/ACM Transactions on Networking, Vol. 2, No. 1, February 1994.

[41] M. Mitsuishi, T. Hori, T. Nagao, "Predictive, Augmented and Transformed Information Display for Time Delay Compensation in Tele-handling/Machining," IEEE International Conference on Robotics and Automation, pp. 45-52, 1995.

[42] T. Sheridan, "Space Teleoperation Through Time Delay: Review and Prognosis," IEEE Transactions on Robotics and Automation, Vol. 9, No. 5, October 1993.

[43] J. Nilsson, B. Bernhardsson, "Stochastic Analysis and Control of Real-Time Systems with Random Time Delays," Automatica, Vol. 34, pp. 57-64, 1998.

[44] C. Lawn, B.
Hannaford, "Performance Testing of Passive Communication and Control in Teleoperation with Time Delay," IEEE International Conference on Robotics and Automation, Vol. 3, pp. 776-781, Atlanta, May 1993.

[45] B. Hannaford, W. Kim, "Force Reflection, Shared Control, And Time Delay in Telemanipulation," IEEE International Conference on Systems, Man, and Cybernetics, Cambridge, November 1989.

[46] J. Klamka, "Observer for Linear Feedback Control of Systems With Distributed Delays in Controls and Output," Systems and Control Letters, Vol. 1, 1982.

[47] K. Watanabe, M. Ito, "An Observer for Linear Feedback Control Laws of Multivariable Systems with Multiple Delays in Control and Output," Systems and Control Letters, Vol. 1, No. 1, July 1981.

[48] E. Altman, T. Basar, R. Srikant, "Multi-User Rate-Based Flow Control with Action Delays: A Team-Theoretic Approach," Proceedings of the 36th Conference on Decision and Control, San Diego, December 1997.

[49] E. Altman, T. Basar, "Multi-User Rate-Based Flow Control: Distributed Game-Theoretic Algorithms," Proceedings of the 36th Conference on Decision and Control, San Diego, December 1997.

[50] U. Madhow, "Dynamic Congestion Control and Error Recovery Over a Heterogeneous Internet," Proceedings of the 36th Conference on Decision and Control, San Diego, December 1997.

[51] Peter Liu, Max Meng, Xiufen Ye, Jason Gu, "End-to-End Delay Boundary Prediction Using Maximum Entropy Principle (MEP) for Internet-Based Teleoperation," IEEE International Conference on Robotics and Automation, DC, 2002.

[52] Tissaphern Mirfakhrai, Shahram Payandeh, "A Delay Prediction Approach for Teleoperation over the Internet," IEEE International Conference on Robotics and Automation, DC, 2002.

[53] K. Brady, T. J. Tarn, "Internet-Based Remote Teleoperation," IEEE International Conference on Robotics and Automation, Leuven, Belgium, May 1998.

[54] D.
Pai, "ACME, A Telerobotic Measurement Facility for Reality-Based Modelling on the Internet," IROS Workshop on Robots on the Web, Canada, 1998.

[55] R. Simmons, "Xavier: An Autonomous Mobile Robot on the Web," IROS Workshop on Robots on the Web, Canada, 1998.

[56] N. Y. Chong, T. Kotoku, K. Ohba, K. Komoriya, "Remote Coordinated Controls in Multiple Telerobot Cooperation," IEEE International Conference on Robotics and Automation, San Francisco, April 2000.

[57] L. F. Penin, K. Matsumoto, S. Wakabayashi, "Force Reflection for Time-Delayed Teleoperation of Space Robots," IEEE International Conference on Robotics and Automation, San Francisco, April 2000.

[58] M. Stein, "Painting on the World Wide Web: The PumaPaint Project," IROS Workshop on Robots on the Web, Canada, 1998.

[59] P. Saucy, F. Mondada, "KhepOnTheWeb: One Year of Access to a Mobile Robot on the Internet," IROS Workshop on Robots on the Web, Canada, 1998.

[60] V. Paxson and S. Floyd, "Wide Area Traffic - The Failure of Poisson Modeling," IEEE/ACM Transactions on Networking, 1995.

[61] J. Beran, R. Sherman, M. S. Taqqu, W. Willinger, "Long-Range-Dependence in Variable-Bit-Rate Video Traffic," IEEE Transactions on Communication, 1995.

[62] M. E. Crovella, A. Bestavros, "Self-Similarity in World Wide Web Traffic: Evidence and Possible Causes," IEEE/ACM Transactions on Networking, 1997.

[63] J. Gao, I. Rubin, "Analysis of Random Access Protocol under Bursty Traffic," Proceedings of IFIP/IEEE International Conference on Management of Multimedia Networks and Services, Chicago, 2001.

[64] T. J. Tarn, N. Xi, A. Bejczy, "Path-Based Approach to Integrated Planning and Control for Robotic Systems," Automatica, Vol. 32, No. 12, pp. 1675-1687, 1996.

[65] J. Tan, N. Xi, T. J. Tarn, "Non-Time Based Tracking Controller for Mobile Robots," IEEE Canadian Conference on Electrical and Computer Engineering, Edmonton, Canada, 1999.

[66] N. Xi, T. J.
Tarn, "Action Synchronization and Control of Internet Based Telerobotic Systems," IEEE International Conference on Robotics and Automation, Vol. 1, pp. 219-224, Detroit, May 1999.

[67] N. Xi, "Event-Based Planning and Control for Robotic Systems," Doctoral Dissertation, Washington University, December 1993.

[68] I. Elhajj, N. Xi, W. K. Fung, Y. H. Liu, Y. Hasegawa, T. Fukuda, "Modeling and Control of Internet Based Cooperative Teleoperation," IEEE International Conference on Robotics and Automation, Korea, 2001.

[69] Imad Elhajj, Ning Xi, Wai Keung Fung, Yunhui Liu, Wen J. Li, Tomoyuki Kaga, Toshio Fukuda, "Haptic Information in Internet-Based Teleoperation," IEEE/ASME Transactions on Mechatronics, Vol. 6, No. 3, September 2001.

[70] Imad Elhajj, Jindong Tan, Ning Xi, Wai Keung Fung, Yun Hui Liu, Tomoyuki Kaga, Toshio Fukuda, "Multi-Site Internet-Based Cooperative Control of Robotic Operations," IEEE/RSJ International Conference on Intelligent Robots and Systems, Japan, 2000.

[71] A. Papoulis, "Error Analysis in Sampling Theory," IEEE Proceedings, Vol. 54, July 1966.

[72] M. Unser, "Sampling - 50 Years After Shannon," IEEE Proceedings, Vol. 88, April 2000.

[73] N. Hogan, "Multivariable Mechanics of The Neuromuscular System," IEEE Eighth Annual Conference of the Engineering in Medicine and Biology Society, 1986.

[74] Y. Zheng, "Human-Robot Coordination for Moving Large Objects," Workshop Note, International Conference on Robotics and Automation, 1997.

[75] T. Milner, "Human Operator Adaptation to Machine Instability," Advances in Robotics, Mechatronics, and Haptic Interfaces, DSC-Vol. 49, ASME 1993.

[76] E. Todosiev, R. Rose, L. Summers, "Human Performance in Single and Two-Axis Tracking Systems," IEEE Transactions on Human Factors in Electronics, Vol. HFE-8, No. 2, June 1967.

[77] G. Bekey, H. Meissinger, R. Rose, "Mathematical Models of Human Operators in Simple Two-Axis Manual Control Systems," IEEE Transactions on Human Factors in Electronics, September 1965.
[78] H. Tan, M. Srinivasan, B. Eberman, B. Cheng, "Human Factors For the Design of Force-Reflecting Haptic Interfaces," Dynamic Systems and Control, DSC-Vol. 55-1, 1994.

[79] J. Tan, N. Xi, "Hybrid System Design for Singularityless Task Level Robot Controllers," IEEE International Conference on Robotics and Automation, 2000.

[80] Canudas de Wit, Carlos, Bruno Siciliano and Georges Bastin, Theory of Robot Control, England: Springer-Verlag, 1996.

[81] Slotine, Jean-Jacques and Weiping Li, Applied Nonlinear Control, NJ, USA: Prentice-Hall, 1991.

[82] Khalil, Hassan, Nonlinear Systems, NJ, USA: Prentice-Hall, 1996.

[83] Tadao Murata, "Petri Nets: Properties, Analysis and Applications," Proceedings of the IEEE, Vol. 77, No. 4, April 1989.

[84] A. Moro, H. Yu, G. Kelleher, "Advanced Scheduling Methodologies for Flexible Manufacturing Systems Using Petri Nets and Heuristic Search," IEEE International Conference on Robotics and Automation, San Francisco, pp. 2398-2403, April 2000.

[85] Baccelli, F., G. Cohen, G. Olsder and J. Quadrat, Synchronization and Linearity, England: John Wiley and Sons, 1992.

[86] Peterson, James, Petri Net Theory and the Modeling of Systems, NJ, USA: Prentice-Hall, 1981.

[87] M. Song, "Integration of Task Scheduling, Sensing, Planning and Control in a Robotic Manufacturing Work-Cell," Doctoral Dissertation, Washington University, August 1997.

[88] M. Rezai, M. R. Ito and P. D. Lawrence, "Modelling and Simulation of Hybrid Control Systems by Global Petri Nets," IEEE International Symposium on Circuits and Systems, Seattle, 1995.

[89] M. Gotesman, N. Lopez-Benitez, "Petri Net-Based Modeling of Hybrid Dynamic Systems," IEEE Conference on Emerging Technologies and Factory Automation, Hawaii, November 1996.

[90] G. Nenninger, V. Krebs, "Modeling and Analysis of Hybrid Systems: A New Approach Integrating Petri Nets and Differential Equations," IEEE Joint Workshop on Parallel and Distributed Real-Time Systems, 1997.
[91] Tadao Murata, Zhehui Wu, "Fair Relation and Modified Synchronic Distances in a Petri Net," Journal of the Franklin Institute, Vol. 320, No. 2, pp. 63-82, August 1985.

[92] J. Moody, K. Yamalidou, M. Lemmon, P. Antsaklis, "Feedback Control of Petri Nets Based on Place Invariants," Proceedings of the 33rd Conference on Decision and Control, FL, December 1994.

[93] L. E. Holloway, B. H. Krogh, "Controlled Petri Nets: A Tutorial Survey," Lecture Notes in Computer Science, Vol. 199, Eleventh International Conference on Analysis and Control, Discrete Event Systems, June 1994.

[94] J. Moody, P. Antsaklis, M. Lemmon, "Automated Design of a Petri Net Feedback Controller for a Robotic Assembly Cell," IEEE Symposium on Emerging Technologies and Factory Automation, Vol. 2, pp. 117-128, October 1995.

[95] M. Uchiyama, "A Unified Approach to Load Sharing, Motion Decomposing and Force Sensing of Dual Arm Robots," Robotic Research: The Fifth International Symposium, MIT Press, 1990.

[96] Y. F. Zheng, J. Y. S. Luh, "Optimal Load Distribution for Two Industrial Robots Handling a Single Object," International Conference on Robotics and Automation, 1988.

[97] K. Kosuge, T. Oosumi, H. Seki, "Decentralized Control of Multiple Manipulators Handling an Object in Coordination Based on Impedance Control of Each Arm," IEEE/RSJ International Conference on Intelligent Robots and Systems, France, 1997.

[98] Z. Wang, E. Nakano, T. Matsukawa, "Realizing Cooperative Object Manipulation Using Multiple Behavior-Based Robots," IEEE/RSJ International Conference on Intelligent Robots and Systems, 1996.

[99] N. Y. Chong, T. Kotoku, K. Ohba, K. Komoriya, K. Tanie, J. Oaki, H. Hashimoto, F. Ozaki, K. Maeda, N. Matsuhira, "A Collaborative Multi-Site Teleoperation over an ISDN," Mechatronics Journal, in press.

[100] N. Y. Chong, T. Kotoku, K. Ohba, K. Tanie, "Virtual Repulsive Force Field Guided Coordination for Multi-Telerobot Collaboration," IEEE International Conference on Robotics and Automation, Korea, 2001.

[101] A. Kheddar, C. Tzafestas, P. Coiffet, T.
Kotoku, S. Kawabata, K. Iwamoto, K. Tanie, I. Mazon, C. Laugier, R. Chellali, "Parallel Multi-Robots Long Distance Teleoperation," IEEE International Conference on Advanced Robotics, California, 1997.

[102] T. Suzuki, T. Fujii, K. Yokota, H. Asama, H. Kaetsu, I. Endo, "Teleoperation of Multiple Robots through the Internet," IEEE International Workshop on Robot and Human Communication, 1996.

[103] David Williams, Oussama Khatib, "The Virtual Linkage: A Model for Internal Forces in Multi-Grasp Manipulation," IEEE International Conference on Robotics and Automation, Atlanta, Georgia, 1993.

[104] Amer Al-Yahmadi, T. C. Hsia, "Internal Force-Based Impedance Control of Dual-Arm Manipulation of Flexible Objects," IEEE International Conference on Robotics and Automation, San Francisco, 2000.

[105] A. Ramadorai, T. J. Tarn, A. Bejczy, N. Xi, "Task-Driven Control of Multi-Arm Systems," IEEE Transactions on Control Systems Technology, Vol. 2, No. 3, September 1994.

[106] Imad Elhajj, Ning Xi, BooHeon Song, Meng-Meng Yu, Wang Tai Lo, Yun Hui Liu, "Transparency and Synchronization in Supermedia Enhanced Internet-Based Teleoperation," International Conference on Robotics and Automation, DC, 2002.

[107] Nomad XRDEV Software Manual, Release 1.0, Nomadic Technologies, Inc., 1999.

[108] H. Kazerooni, "Human-Robot Interaction via the Transfer of Power and Information Signals," IEEE Transactions on Systems, Man and Cybernetics, 1990.

[109] Ning Xi, T. J. Tarn, "Heterogeneous Function-Based Human/Robot Cooperations," IEEE International Conference on Robotics and Automation, Belgium, May 1998.

[110] J. Luh, Y. Zheng, "Constrained Relations Between Two Coordinated Industrial Robots for Motion Control," International Journal of Robotics Research, pp. 60-70, 1987.

[111] M. Raibert, J. Craig, "Hybrid Position/Force Control of Manipulators," Transactions of ASME, 1981.

[112] Ning Xi, T. J. Tarn, A.
Bejczy, "Intelligent Planning and Control for Multirobot Coordination: An Event-Based Approach," IEEE Transactions on Robotics and Automation, Vol. 12, 1996.

[113] K. Ohba, S. Kawabata, N. Y. Chong, K. Komoriya, T. Matsumaru, N. Matsuhira, K. Takase, K. Tanie, "Remote Collaboration Through Time Delay in Multiple Teleoperation," IEEE/RSJ International Conference on Intelligent Robots and Systems, Korea, 1999.

[114] J. Tan, N. Xi, "Integrated Sensing and Control of Mobile Manipulators," IEEE/RSJ International Conference on Intelligent Robots and Systems, Hawaii, December 2001.

[115] T. J. Tarn, N. Xi, C. Guo, A. Bejczy, "Human/Machine Sharing Control for Telerobotic Systems," Intelligent Robots and Systems, 1995.

[116] T. Balch, R. C. Arkin, "Behavior-Based Formation Control for Multirobot Teams," IEEE Transactions on Robotics and Automation, Vol. 14, pp. 926-939, 1998.

[117] Imad Elhajj, Jindong Tan, Yu Sun and Ning Xi, "Supermedia Enhanced Human/Machine Cooperative Control of Robot Formations," IEEE/RSJ International Conference on Intelligent Robots and Systems, Switzerland, 2002.

[118] H. Kobayashi, H. Nakamura, "A Scaled Teleoperation," IEEE International Workshop on Robot and Human Communication, 1992.

[119] K. Kaneko, H. Tokashiki, K. Tanie, K. Komoriya, "Macro-Micro Bilateral Teleoperation Based on Operational Force Feedforward," IEEE/RSJ International Conference on Intelligent Robots and Systems, Canada, 1998.

[120] King W. C. Lai, Carmen K. M. Fung, Wen J. Li, Imad Elhajj, Ning Xi, "Transmission of Multimedia Information on Micro Environment via Internet," International Conference of the IEEE Industrial Electronics Society, Nagoya, Japan, 2000.

[121] Antony W. T. Ho, Wen J. Li, Imad Elhajj, Ning Xi, "Development of a Bone Reaming System Using Micro Sensors for Internet Force-Feedback Control," Workshop on Service Automation and Robotics, Hong Kong, June 2000.

[122] Carmen K. M. Fung, King W. C. Lai, Wen J.
Li, Yunhui Liu, Imad Elhajj, and Ning Xi, "Sensing and Action in a Micro Environment via Internet," International Conference on Information Society in the 21st Century, Aizu, Japan, November 2000.

[123] Terrence Fong, Charles Thorpe, Charles Baur, "Advanced Interfaces for Vehicle Teleoperation: Collaborative Control, Sensor Fusion Displays and Remote Driving Tools," Autonomous Robots, pp. 75-85, 2001.

[124] Nikitas Sgouros, Stelios Gerogiannakis, "Integrating WAP-Based Wireless Devices in Robot Teleoperation Environments," IEEE International Conference on Robotics and Automation, DC, 2002.

[125] Pablo d'Angelo, Peter Corke, "Using a WAP Phone as Robot Interface," IEEE International Conference on Robotics and Automation, DC, 2002.