OBJECT LOCATION AND SHAPE ESTIMATION USING A MIMO FMCW MILLIMETER WAVE SENSOR

By

Yun Lou

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Electrical Engineering - Master of Science

2018

ABSTRACT

OBJECT LOCATION AND SHAPE ESTIMATION USING A MIMO FMCW MILLIMETER WAVE SENSOR

By

Yun Lou

Autonomous systems such as self-driving cars and drones are going to transform our lives in many ways. One fundamental element of these systems is accurate and robust sensing of the surrounding environment. Current solutions take a multi-modal approach in which a variety of sensors are combined to achieve the best performance. Among these sensing modalities, millimeter wave radar plays a critical role due to its high resolution and its ability to sense in dark environments. As the cost, size, and weight of millimeter wave radars have decreased markedly in recent years, there is a significant opportunity to adopt them widely for a variety of sensing tasks.

This thesis presents a MIMO FMCW millimeter wave radar system operating in the 77-81 GHz band. It is a compact, high-accuracy solution for sensing objects in the environment. Our system is designed to estimate an object's surface shape, orientation, curvature, boundaries, and 2D location by detecting the object's surface from multiple positions on a planned trajectory. We evaluated our system on a metal rectangular box and a metal cylinder. The experimental results show that our system determines the surface type (planar or curved) correctly and achieves cm-level accuracy in boundary and location estimation. The orientation error is bounded by 2.14°. The curvature estimation is highly accurate for the middle part of the curved surface.

ACKNOWLEDGEMENTS

I would like to express my deepest gratitude to my advisor, Dr. Mi Zhang, for his patient guidance, care, and advice. I truly enjoyed my time doing research in Dr. Zhang's lab.
I would like to thank Dr. Jeffrey Nanzer and Dr. Ahmet Cagri Ulusoy for serving on my thesis committee. They provided valuable advice on this thesis work. In addition, I would like to thank my friends, Biyi Fang and Yu Zheng, for many helpful discussions. Finally, I would like to thank my parents for their great support.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
CHAPTER 1 INTRODUCTION
  1.1 Background and Motivations
  1.2 Contributions
  1.3 Thesis Organization
CHAPTER 2 BACKGROUND
  2.1 Millimeter Wave
  2.2 Fundamentals of FMCW
    2.2.1 Range Estimation
    2.2.2 Range Resolution
    2.2.3 Angle of Arrival Estimation
    2.2.4 Field of View
  2.3 Synthetic Aperture Radar
CHAPTER 3 RELATED WORK
  3.1 FMCW Sensing
  3.2 mmWave Sensing
CHAPTER 4 SYSTEM OVERVIEW
CHAPTER 5 SYSTEM DETAILS
  5.1 Hardware Platform
  5.2 Chirp Configuration
  5.3 Chirp Transmission
  5.4 Data Preprocessing
  5.5 Distance Estimation
  5.6 AoA Estimation
  5.7 Distance Calibration
  5.8 Object Location and Shape Estimation
CHAPTER 6 SYSTEM EVALUATION
  6.1 Experimental Setup
  6.2 Experimental Results
  6.3 Detecting Multiple Objects
CHAPTER 7 LIMITATIONS AND FUTURE WORK
CHAPTER 8 CONCLUSION
BIBLIOGRAPHY

LIST OF TABLES

Table 5.1: Distance estimation error measurement
Table 6.1: Absolute error

LIST OF FIGURES

Figure 2.1: Chirp signal, amplitude vs time
Figure 2.2: Chirp signal, frequency vs time
Figure 2.3: Intermediate frequency signal
Figure 2.4: IF signals for multiple objects
Figure 2.5: 1 TX and 4 RX antenna array
Figure 2.6: 2 TX and 4 RX antenna array
Figure 2.7: 1 TX and 8 RX antenna array
Figure 2.8: Sinusoidal wave
Figure 2.9: Geometry for SAR system
Figure 4.1: System overview
Figure 5.1: AWR1443BOOST front view
Figure 5.2: AWR1443 functional block diagram
Figure 5.3: Configurable FMCW chirp
Figure 5.4: Multiple chirps structure
Figure 5.5: FFT data
Figure 5.6: Original FFT plot
Figure 5.7: FFT plot after interpolation
Figure 5.8: Experiment of distance estimation error measurement
Figure 5.9: Detection result of a planar surface
Figure 6.1: Prototype
Figure 6.2: Experiment explanation and corresponding coordinate system
Figure 6.3: Test objects
Figure 6.4: Planar surface parallel to the trajectory at 1.465 m distance
Figure 6.5: Planar surface parallel to the trajectory at 2.2 m distance
Figure 6.6: Planar surface with an angle of 6.18° to the trajectory at 1.465 m distance
Figure 6.7: Planar surface with an angle of 12.42° to the trajectory at 1.465 m distance
Figure 6.8: Convex surface at 0.73 m distance
Figure 6.9: Convex surface at 1.465 m distance
Figure 6.10: Detection result of the wall

CHAPTER 1
INTRODUCTION

1.1 Background and Motivations

Autonomous systems such as self-driving cars, drones, and home robots are transforming our lives in many ways. For example, companies such as Tesla, Google, and Uber are developing self-driving vehicles to facilitate everyday transportation. Amazon has started using drones to revolutionize its product delivery routine [1]. Vacuuming robots are replacing traditional vacuum cleaners to clean houses automatically.

One fundamental element of these autonomous systems is accurate and robust sensing of the surrounding environment. Once an autonomous system has information about its surroundings, including the locations and shapes of objects, it can recognize those objects and then perform its designated tasks.

Many sensing modalities have been used for object location and shape estimation. For example, cameras are good at recognizing object shapes, but their performance depends heavily on lighting conditions. Acoustic sensors are another option; however, although many acoustic systems use ultrasonic waves to reduce interference from environmental noise, their sensing speed is limited because sound travels much more slowly than light. Compared to cameras and acoustic sensors, millimeter wave (mmWave) offers high resolution as well as the ability to sense in dark environments.
However, traditional millimeter wave sensing transmits a signal at a constant frequency and estimates distance from the signal's travel time. Because electromagnetic waves travel extremely fast, even a small error in the measured travel time leads to a large distance error.

1.2 Contributions

In this thesis, we present the design, development, and evaluation of a small-size, MIMO FMCW based mmWave sensing system for accurate and robust environmental sensing. In general, a small transmitted wavelength results in a small antenna size and a small antenna aperture; thus, millimeter wave is the best choice for our system. To achieve accurate distance estimation, we chose FMCW because of its precise ranging capability.

Specifically, our system uses an integrated 77-81 GHz FMCW monostatic radar with two transmitting antennas and four receiving antennas. As the system stops at different locations on a straight line, the receiving antennas pick up reflections of the signal sent out by the transmitting antennas. We have developed a MIMO FMCW mmWave signal processing pipeline that analyzes the reflected signal and estimates the detected object's surface shape, orientation, curvature, boundaries, and 2D location. We evaluated our system on a metal rectangular box and a metal cylinder. The experimental results show that our system determines the surface type (planar or curved) correctly and achieves cm-level accuracy in boundary and location estimation. The orientation error is bounded by 2.14°. The curvature estimation is highly accurate for the middle part of the curved surface.

1.3 Thesis Organization

The remainder of this thesis is organized as follows: Chapter 2 presents background information on mmWave radar, FMCW radar, and synthetic aperture radar. Chapter 3 describes related work on FMCW and mmWave sensing and briefly discusses the differences between the existing literature and our work.
Chapter 4 provides an overview of our system. Chapter 5 explains the technical details of our system. Chapter 6 describes the experiments and discusses the results. Chapter 7 discusses the limitations of our work and future work. Finally, Chapter 8 concludes the thesis.

CHAPTER 2
BACKGROUND

2.1 Millimeter Wave

Millimeter wave refers to electromagnetic waves with wavelengths in the 1 mm to 10 mm range, corresponding to the frequency band from 30 GHz to 300 GHz. One advantage of mmWave is small sensor size: in general, a small wavelength leads to a small antenna size and a small antenna aperture, so a high carrier frequency helps achieve a small sensor. mmWave propagates at the speed of light, which makes mmWave sensing extremely fast.

2.2 Fundamentals of FMCW

2.2.1 Range Estimation

Frequency-Modulated Continuous Wave (FMCW) radar is a special type of radar sensor that transmits a sequence of linear frequency-modulated continuous waves. The transmitted signal is called a "chirp". A chirp is a sinusoidal signal whose frequency increases linearly over time. Figure 2.1 shows a sample chirp signal, with amplitude as a function of time.

Figure 2.1: Chirp signal, amplitude vs time

Figure 2.2: Chirp signal, frequency vs time

A frequency vs time plot is an easier way to understand a chirp. Figure 2.2 shows the frequency vs time plot of the same chirp signal. The chirp is determined by three factors: starting frequency (fc), bandwidth (B), and duration (Tc). The slope (S) represents the rate of change of frequency:

S = B / Tc    (2.1)

Figure 2.3 shows the relationship between the transmitted chirp and the received chirp. Assume that a chirp is transmitted at t0 and a reflected chirp is received at ta. The received chirp is a shifted version of the transmitted chirp.
The shift equals the time the signal takes to travel the round-trip distance between the sensor and the object. Mathematically, the distance between the sensor and the object (d) can be expressed as

d = (ta − t0) c / 2    (2.2)

where c is the speed of light.

At ta, the frequency difference between the transmitted chirp and the received chirp is Δf. Since the received chirp is a shifted version of the transmitted chirp, the frequency difference remains constant along the time axis. Subtracting the two chirps yields a new signal called the "intermediate frequency signal" (IF signal), shown in the lower plot of Figure 2.3. The IF signal is valid only in the interval [ta, tb]; in other words, the two chirps must overlap. The frequency difference can be derived as

Δf = S (ta − t0)    (2.3)

Combining equations 2.2 and 2.3, we get

d = Δf c / (2S)    (2.4)

Figure 2.3: Intermediate frequency signal

The above explanation covers the case of a single detected object. Figure 2.4 shows the case of multiple detected objects. Since the objects are at different distances, the traveling times of their reflections differ. This results in IF signal components with different frequencies, so we can separate the objects easily and calculate their distances individually.

Figure 2.4: IF signals for multiple objects

2.2.2 Range Resolution

Range resolution is the ability of a radar system to resolve two closely spaced objects. When two objects are very close, the frequencies of their IF signals are very similar, and the radar system would treat them as one single IF signal instead of two separate ones. Fourier transform theory states that the minimum frequency separation resolvable within an observation window of duration T is 1/T Hz. Applying this to our FMCW radar, the minimum frequency difference between two IF signals must be 1/Tc.
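The beat-frequency relations above can be checked with a short numerical sketch (hypothetical distances and a simplified real-valued IF model, not the thesis's processing code): two simulated objects produce two spectral peaks whose locations recover the distances via equation 2.4.

```python
import numpy as np

c  = 3e8          # speed of light, m/s
S  = 40e6 / 1e-6  # chirp slope: 40 MHz/us (the value used in Chapter 5)
fs = 6e6          # ADC sampling rate, Hz
N  = 512          # samples per chirp

# Each object at distance d contributes an IF tone at 2*S*d/c (eq. 2.4).
t = np.arange(N) / fs
x = sum(np.sin(2 * np.pi * (2 * S * d / c) * t) for d in (1.0, 2.0))

spec  = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, 1 / fs)
peaks = np.argsort(spec[1:])[-2:] + 1       # two strongest bins, skipping DC
est   = sorted(freqs[peaks] * c / (2 * S))  # invert eq. 2.4
print(est)  # both estimates fall within one 4.4 cm range bin of 1 m and 2 m
```

Because the spectrum is discrete, each estimate is quantized to a range bin; the interpolation step of Section 5.5 addresses exactly this quantization.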
Substituting this minimum frequency difference into equation 2.4, the range resolution can be derived as

dres = Δf c / (2S) = (1/Tc) · c / (2S) = c / (2 S Tc) = c / (2B)    (2.5)

Therefore, the range resolution depends only on the bandwidth: larger bandwidth results in better range resolution.

2.2.3 Angle of Arrival Estimation

Assume that one transmitting antenna (TX) and four receiving antennas (RX) are used in an FMCW radar system, as shown in Figure 2.5. The distance between adjacent receiving antennas is d. A chirp is transmitted by the transmitting antenna and reflected from an object. All four receiving antennas receive the reflected chirp (we assume the reflected chirps arrive in parallel because the object is far enough from the receiving antennas). The angle of arrival (AoA) is marked as θ. The distance between the object and the second receiving antenna is slightly larger than the distance between the object and the first receiving antenna; the additional segment can be expressed as d sin θ. At points A and B', the phases of the signals are the same because the two chirps have traveled the same distance. When the second chirp travels from point B' to point B, the phase changes from φ1 to φ2. The phase difference (Δφ) between φ1 and φ2 is (2π/λ) d sin θ, where 2π/λ is the angular wavenumber, defined as the number of radians per unit distance. Therefore, the angle of arrival can be calculated as

θ = arcsin(Δφ λ / (2π d))    (2.6)

Similarly, the AoA can be calculated from the phase difference between RX1 and RX3, or between RX1 and RX4; the corresponding expressions are arcsin(2Δφ λ / (2π · 2d)) and arcsin(3Δφ λ / (2π · 3d)), respectively.

Figure 2.5: 1 TX and 4 RX antenna array

In our system hardware, the distance between neighboring receiving antennas is designed to be λ/2, so we can substitute this value for d. The AoA becomes

θ = arcsin(Δφ / π)    (2.7)

Now we add another transmitting antenna to the antenna array, as shown in Figure 2.6.
The distance between TX1 and TX2 is 4d. Thus, the signal from TX2 travels an additional distance of 4d sin θ compared to the signal from TX1, so the signals received from the two TXs at the same RX have a phase difference of 4Δφ. Therefore, the 2 TX and 4 RX MIMO radar is equivalent to a 1 TX and 8 RX radar, as shown in Figure 2.7. Given the measurement error in the signal processing module, a larger antenna array yields a more accurate estimate of Δφ, which improves the accuracy of the AoA estimation.

Figure 2.6: 2 TX and 4 RX antenna array

Figure 2.7: 1 TX and 8 RX antenna array

Recall that the phase difference (Δφ) equals (2π/λ) d sin θ. As shown in Figure 2.8, the sensitivity of sin θ decreases as θ increases from 0° to 90°. Thus, Δφ is most sensitive to changes of θ near θ = 0°. As θ increases toward 90°, Δφ becomes progressively less sensitive, so the AoA estimation becomes less and less accurate.

Figure 2.8: Sinusoidal wave

2.2.4 Field of View

In Figure 2.5, by the geometry of the triangle, d sin θ must be smaller than d:

d sin θ < d    (2.8)

Multiplying both sides by 2π/λ:

(2π/λ) d sin θ < (2π/λ) d    (2.9)

The left side is the phase difference (Δφ). Replacing d with λ/2, we get

|Δφ| < (2π/λ)(λ/2) = π    (2.10)

According to equation 2.7, the limiting values of θ are ±90°. Thus, the maximum angular field of view of this radar system is ±90°.

2.3 Synthetic Aperture Radar

Synthetic aperture radar (SAR) [2] simulates an extremely large antenna array by moving the radar, and uses the reflected signal strength and Doppler information to enable imaging. The biggest benefit of SAR is good angular resolution, also known as cross-range resolution; the cross-range direction is perpendicular to the radar range direction. Generally, the angular resolution of a radar can be expressed as

θres = k λ / D    (2.11)

where k is the beamwidth factor, λ is the wavelength, and D is the antenna aperture.
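Plugging representative numbers into equation 2.11 gives a feel for the scale (a sketch with an assumed beamwidth factor k = 1 and a 79 GHz carrier; the aperture sizes are hypothetical):

```python
import math

k   = 1.0         # beamwidth factor (assumed)
lam = 3e8 / 79e9  # wavelength at a 79 GHz carrier, about 3.8 mm

for D in (0.01, 0.1, 1.0):             # aperture sizes, m
    theta = math.degrees(k * lam / D)  # equation 2.11, converted to degrees
    print(f"D = {D:4.2f} m -> theta_res = {theta:.2f} deg")
```

A 1 cm physical aperture gives a resolution of roughly 22°, while a 1 m aperture, far larger than any practical antenna at this scale, reaches roughly 0.22°.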
A larger antenna aperture thus leads to better angular resolution. By moving the radar along a straight line, a large-aperture antenna array is synthesized: the sensor at each location on the line can be considered one antenna of the array. SAR can therefore achieve an extremely large antenna array when the moving path is long, resulting in very good angular resolution.

Figure 2.9: Geometry for SAR system

Since the sensor is moving, the object-to-sensor distance changes. The distance from the radar to the object (treated as a point P) is a variable R(t) that changes with time t. As shown in Figure 2.9, R(t) can be expressed as

R(t) = RP sqrt(1 + (vt − a)² / RP²)    (2.12)

where v is the moving speed of the sensor. For a constant sensor speed, R(t) is unique for a given RP. Therefore, a matched filter [3] can be applied to match the observed R(t) against a database; once the matching sample is found, the corresponding distance from the object to the moving path (RP) is obtained.

Since the object-to-sensor distance changes, the phase of the signal due to the round trip is φ = −4πR(t)/λ. According to Doppler beam sharpening [3], the phase variation along the moving path is

φ(t) ≈ −(4π/λ) (RP − v t x / RP + x² / (2RP))    (2.13)

where x is the desired cross-range position. The instantaneous cross-range spatial frequency (Ku) is then

Ku = dφ(t)/du ≈ (4π/λ) (x / RP)    (2.14)

Ku can be expressed in terms of the Doppler frequency (FD), so we can use FD to express x:

x = λ RP Ku / (4π) = λ RP FD / (2v)    (2.15)

The Doppler frequency can be obtained by analyzing the reflected signals. The location of point P is obtained by combining RP and x. Finally, SAR integrates all detected points with their reflected signal strengths to form the sensing image. Compared to SAR, our system focuses only on specular reflection.
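The range-history matching described above (equation 2.12 plus a matched filter) can be sketched in miniature; the sensor speed, geometry, and candidate grid here are hypothetical:

```python
import numpy as np

v = 0.5                     # sensor speed along the path, m/s (hypothetical)
a = 0.5                     # along-track position of point P, m (hypothetical)
t = np.linspace(0, 2, 200)  # observation times, s

def R(RP):
    """Range history of point P for a given closest range RP (equation 2.12)."""
    return RP * np.sqrt(1 + (v * t - a) ** 2 / RP ** 2)

# "Database" of candidate range histories for a grid of RP values
cands = np.arange(0.5, 3.01, 0.05)
observed = R(1.7)           # pretend the true RP is 1.7 m

# Matched filter in its simplest form: pick the candidate whose range
# history best matches the observation (least-squares comparison).
best = cands[np.argmin([np.sum((R(rp) - observed) ** 2) for rp in cands])]
print(best)  # 1.7
```

A real SAR matched filter operates on the received waveform rather than on range histories directly, but the selection principle is the same.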
Moreover, in SAR, the sensor has to move at a constant speed in order to collect Doppler information. In contrast, our system focuses on detecting static objects and remains static at each detection position on the trajectory. Therefore, Doppler information does not exist in our setting and cannot be used to improve accuracy.

CHAPTER 3
RELATED WORK

Our work is related to two research topics: (1) FMCW sensing systems and (2) mmWave sensing systems. In this chapter, we present a brief review of recent relevant works and compare them with our system.

3.1 FMCW Sensing

FMCW devices are widely used in areas such as motion detection, object localization, and breathing measurement. [4] and [5] use an FMCW device to track the human body's 3D location and coarse hand motion. [6] uses an FMCW device to capture human figures even when the person is completely occluded by a wall; their system is also able to distinguish different people. Our work differs from these works in two aspects. First, our radar device is much smaller: in their systems, the distance between neighboring antennas is about 3-5 cm, while ours is 1.9 mm. Second, our frequency band is much higher and our bandwidth larger: their FMCW signals sweep from 5.46-7.25 GHz, while ours sweep from 77-81 GHz. Our system has more than double the bandwidth, which gives us better range resolution.

[7] develops ApneaApp, which captures chest and abdomen movements and then identifies sleep apnea events. [8] uses the built-in microphones and speakers of a smartphone as FMCW devices to track fine-grained finger motion. The main difference between our work and these two works is that they transmit 18-20 kHz sound waves using smartphone microphones and speakers, whereas our system transmits high-frequency electromagnetic waves, so the response time is much shorter and the maximum detection range is larger.

3.2 mmWave Sensing

Much mmWave sensing work is in the 60 GHz band.
[9] tracks the movement of a pen on a tablet at sub-centimeter precision with mTrack. [10] presents mmVital, which monitors human breathing and heart rates. [11] develops Soli to sense gestures with millimeter wave radar. [12] and [13] use a 60 GHz sensing system to capture static objects' locations, surface orientations, curvatures, boundaries, and surface materials. The main difference between our work and theirs is that their transmit and receive devices are two independent parts, and the transmit device must be mounted at a fixed location without any ability to move.

The most similar work is [14], which presents a practical environment imaging system using 60 GHz radios on a single mobile device. Our work differs from theirs in three aspects. First, our sensor is much smaller: their sensing device is about 30 cm × 25 cm, while our device is 6.5 cm × 9 cm. Second, their antenna is a rotational beaming antenna: to cover the entire environment in front of the sensor, both the transmitting and receiving antennas must steer their beams toward each direction for a short period, and the AoA is determined by the receive beam direction with the strongest signal strength. Our system does not require the antennas to rotate, so the sensing time is shorter; the AoA in our system is estimated from phase information, which provides better accuracy because the AoA accuracy of their system is limited by the rotational granularity. Third, [14] does not use any phase information, while we use it to derive the AoA.

CHAPTER 4
SYSTEM OVERVIEW

Figure 4.1: System overview

Figure 4.1 shows an overview of our system architecture. First, the configured FMCW chirps are generated in the synthesizer, and the transmitting antenna transmits the chirps with a 180° horizontal sensing coverage. Second, the chirps are reflected from an object and the receiving antenna receives them.
The received chirps and the original transmitted chirps are mixed in the mixer, and the resulting IF signal is digitized in the analog-to-digital converter. Third, in the data preprocessing module, the digitized data is averaged over multiple IF signals as a filter to reduce error, and the Fast Fourier Transform is then applied to convert the data from the time domain to the frequency domain. Fourth, the distance and AoA are estimated. Fifth, a distance calibration step is performed to correct for a hardware issue. Finally, the sensor is moved to the next location on a planned trajectory and the previous five steps are repeated. Our system collects reflected signal data at multiple locations on the trajectory, and all estimated points are combined to recover the surface shape of the object and determine the surface location.

CHAPTER 5
SYSTEM DETAILS

5.1 Hardware Platform

We use the AWR1443BOOST [15] as our hardware device to transmit signals and collect data. AWR1443BOOST is an evaluation board for the AWR1443 mmWave sensing chip from Texas Instruments. It contains the AWR1443 chip, a 40-pin LaunchPad connector, XDS110-based JTAG emulation with a serial port, a backchannel UART through USB to the PC, on-board antennas, a 60-pin high-density connector, an on-board CAN transceiver, LEDs, and a 5-V power jack. The AWR1443 chip is the most important part of the AWR1443BOOST; the other parts are supporting devices used for the data interface and power supply. Figure 5.1 shows the front view of the AWR1443BOOST.

Figure 5.1: AWR1443BOOST front view

The AWR1443 is an integrated single-chip Frequency-Modulated Continuous Wave radar sensor. It mainly consists of three transmitters, four receivers, an integrated ARM R4F processor, a hardware accelerator, a synthesizer, four mixers, and four analog-to-digital converters (ADC). Figure 5.2 shows the functional block diagram of the AWR1443 sensor.
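Before detailing each stage, the per-position processing chain of Figure 4.1 can be outlined in code. This is a structural sketch with a simplified synthetic signal model and hypothetical values, not the actual implementation; the calibration step is omitted and only two RX channels are used for the AoA:

```python
import numpy as np

c, S, fs, N = 3e8, 4e13, 6e6, 512   # slope and ADC values from Chapter 5

def process_position(chirps):
    """chirps: (n_chirps, n_rx, N) complex IF samples at one sensor position."""
    avg  = chirps.mean(axis=0)                   # average chirps (filter)
    spec = np.fft.fft(avg, axis=-1)              # time -> frequency domain
    k    = np.abs(spec[0, 1:N//2]).argmax() + 1  # strongest peak, skipping DC
    dist = (k * fs / N) * c / (2 * S)            # equation 2.4
    dphi = np.angle(spec[1, k] / spec[0, k])     # phase diff between RX0, RX1
    aoa  = np.degrees(np.arcsin(dphi / np.pi))   # equation 2.7 (lambda/2 array)
    return dist, aoa

# Synthetic check: one object at 1.5 m, 20 degrees off boresight
d_true, th = 1.5, np.radians(20.0)
t = np.arange(N) / fs
f_beat = 2 * S * d_true / c
rx = np.stack([np.exp(1j * (2*np.pi*f_beat*t + n*np.pi*np.sin(th)))
               for n in range(4)])               # lambda/2 array phase model
dist, aoa = process_position(np.stack([rx] * 8)) # 8 identical chirps
print(dist, aoa)  # close to 1.5 m and 20 degrees
```

The distance estimate is quantized to a range bin here; Section 5.5 refines it by interpolation.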
Figure 5.2: AWR1443 functional block diagram

5.2 Chirp Configuration

The first step of our system is to configure the transmitted chirps. The main configurable parameters are the transmitting antenna mask, receiving antenna mask, starting frequency, idle time, transmitting antenna starting time, ADC valid starting time, ramp end time, and frequency slope. The transmitting antenna mask is a 3-bit binary sequence that determines which transmitting antennas are turned on: "1" means on and "0" means off. For example, "101" means that TX1 and TX3 are turned on and TX2 is turned off. Similarly, the receiving antenna mask is a 4-bit binary sequence that determines which receiving antennas are turned on.

Figure 5.3 illustrates the parameters of a single FMCW chirp. The starting frequency is the initial frequency of the chirp. AWR1443BOOST supports only two frequency bands: 76 GHz to 77 GHz, and 77 GHz to 81 GHz. As discussed in Section 2.2.2, larger bandwidth leads to better range resolution, so we choose 77 GHz as the starting frequency, which gives us a maximum bandwidth of 4 GHz.

The idle time is the time between the end of the previous chirp and the start of the next chirp; the frequency drops from 81 GHz to 77 GHz during this period. We set the idle time to 20 us, which gives the frequency enough time to drop. The transmitting antenna starting time is the time when the transmitting antennas are turned on; we set this parameter to 1 us. The ADC valid starting time is the time when the analog-to-digital converter starts to sample the data. As Figure 5.3 shows, the chirp is not perfectly linear at the beginning, which is why the data cannot be sampled there. We set the ADC valid starting time to 7 us so that the non-linear part is avoided.
Note that the ADC does not sample the transmitted chirp, even though the ADC sampling period is drawn in Figure 5.3; the IF signal is what the ADC samples. The frequency slope is the rate of change of the chirp frequency. We set this parameter to 40 MHz/us, an appropriate value for short range detection; more details are discussed with the other parameters below. The ramp end time is the time when the transmitting antennas are turned off; after this time, the frequency is reset to the initial frequency of the next chirp (during the next chirp's idle time). Since the frequency slope is 40 MHz/us, the ramp end time is set to 100 us in order to fully utilize the maximum bandwidth (4 GHz).

Two additional parameters affect the ADC sampling process: the number of ADC samples collected during the sampling time, and the ADC sampling rate. The number of ADC samples (K) is the number of digital data samples taken from the IF signal. The ADC sampling rate (fsam) describes how fast the ADC takes samples: starting from the ADC valid starting time, the IF signal is sampled every 1/fsam s. The duration of the ADC sampling process is therefore (K − 1)/fsam. Replacing Tc in equation 2.5 with this ADC sampling duration, the range resolution can be expressed as

dres = c / (2 S Tc) = c / (2 S (K − 1)/fsam) = c fsam / (2 S (K − 1))    (5.1)

From equation 5.1, we can see that a larger number of ADC samples, a larger frequency slope, and a smaller ADC sampling rate all lead to better range resolution. We collect 512 samples for the IF signal produced by each transmitted chirp, because this parameter has to be 2^n for a positive integer n, and 512 is the maximum we can choose due to the limitation of the ADC. The frequency slope is set to 40 MHz/us, as mentioned before. We do not choose a much larger value to achieve better range resolution because the frequency slope also strongly affects the maximum detection range.
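As a quick numerical check, plugging the configured values into equation 5.1 reproduces the system's range resolution:

```python
c    = 3e8         # speed of light, m/s
S    = 40e6 / 1e-6 # frequency slope: 40 MHz/us
fsam = 6e6         # ADC sampling rate, Hz
K    = 512         # ADC samples per chirp

d_res = c * fsam / (2 * S * (K - 1))  # equation 5.1
print(f"{d_res * 100:.1f} cm")        # 4.4 cm
```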
From equation 2.4, for a given maximum measurable Δf, a larger frequency slope results in a smaller maximum measurable distance. Therefore, there is a trade-off between range resolution and maximum detection range. The ADC sampling rate also affects the maximum detection range: the ADC sampling rate must exceed the frequency of the sampled signal, so the maximum value Δf can take in equation 2.4 is the ADC sampling rate. If Δf is larger than the ADC sampling rate, the signal will be received but not correctly digitized. A higher ADC sampling rate leads to a larger detection range, but better range resolution requires a lower sampling rate, so we choose 6 MHz as the ADC sampling rate as a compromise. These three parameter settings give us a 4.4 cm range resolution and a 22.5 m maximum range. From equation 2.5, we can calculate the best possible range resolution for AWR1443BOOST by plugging in the maximum bandwidth (4 GHz); the result is 3.75 cm, so 4.4 cm is very close to the best range resolution.

Figure 5.3: Configurable FMCW chirp

Figure 5.4: Multiple chirps structure

Multiple chirps are transmitted in series when our system is working. The number of frames, the frame periodicity, and the initial phase are the parameters that determine the multiple-chirp structure. A frame can consist of one or more chirps; the time between chirps in the same frame is the idle time mentioned before. The chirps in the same frame need not be identical, but in our case we use the same chirp settings for convenient data processing. The number of frames is the total number of frames transmitted during the whole detection process. This parameter is limited by the memory size of the AWR1443BOOST: the board communicates with our PC through a USB cable, whose data transport speed is not fast enough for real-time transfer, so the ADC data is stored in the memory module of the AWR1443BOOST and our PC then reads and stores it.
The memory size of the AWR1443BOOST is 261120 bytes, and one set of received data produced by one transmitted chirp takes 8192 bytes. Frame periodicity is the time between two frames; we set this parameter to 20 ms. Initial phase is the phase of the transmitted chirp; it can only be set to 0◦ or 180◦. We always set the initial phase to 0◦ to ensure that all transmitted chirps are the same, although the phase modulation has an error within ±5◦. To sum up, each frame contains only one chirp when we activate 1 TX and 4 RX, and the total number of frames is 16. Each frame contains two identical chirps when we activate 2 TX and 4 RX, and the total number of frames is reduced to 8 due to the memory size limitation; within the same frame, TX1 transmits a chirp first and then TX2 transmits a chirp. Receiving antenna gain is a parameter that acts as a threshold, like a high-pass filter, and can be set from 24 dB to 48 dB. A higher gain setting can improve the noise figure but may also discard weak reflection signals. We set the receiving antenna gain to 30 dB as an appropriate choice.

5.3 Chirp Transmission

The transmitted chirp is a sinusoidal signal that can be written mathematically as

Xtrans = A1 sin(2π f1 t + ϕ1)    (5.2)

where f1 increases linearly from 77 GHz to 81 GHz and ϕ1 is 0◦. The signal is then reflected from an object and the receiving antenna receives it:

Xrecv = A2 sin(2π f2 t + ϕ2)    (5.3)

The mixer mixes the transmitted signal and the received signal. The resulting IF signal can be expressed as

XIF = A sin(2π(f1 − f2)t + (ϕ1 − ϕ2)) = A sin(2πΔf t + Δϕ) = A sin(2πΔf t + φ)    (5.4)

If multiple reflected signals are received, the resulting IF signal is the sum of all the single IF signals:

XIF,multi = Aa sin(2πΔfa t + φa) + Ab sin(2πΔfb t + φb) + Ac sin(2πΔfc t + φc) + ...    (5.5)

5.4 Data Preprocessing

The IF signal is digitized, resulting in 512 samples in the form of complex numbers. In section 5.2, we stated that a set of chirps is transmitted in series.
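The dependence of the IF (beat) frequency in equation 5.4 on object distance can be made concrete with a small illustrative sketch (not part of the thesis's implementation); it simply evaluates Δf = 2Sd/c, which is equation 2.4 solved for Δf:

```python
# For an object at distance d, the received chirp lags the transmitted one by
# the round-trip delay tau = 2d/c, so mixing produces a beat (IF) frequency of
# delta_f = S * tau = 2 * S * d / c.

C = 3e8  # speed of light, m/s

def beat_frequency(S, d):
    """Equation 2.4 solved for the beat frequency: delta_f = 2 * S * d / c."""
    return 2 * S * d / C

S = 40e6 / 1e-6                 # 40 MHz/us slope, in Hz/s
print(beat_frequency(S, 1.5))   # 400 kHz for an object 1.5 m away
print(beat_frequency(S, 22.5))  # 6 MHz: exactly the ADC sampling rate,
                                # i.e. the maximum-range limit from section 5.2
```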
Since we are detecting static objects, the received chirps should be very similar. If the total number of transmitted chirps from the same TX is N, the total number of digitized samples is 512N. We average the IF signal samples across the N chirps as a filter to reduce error. The averaged 512 samples are then transformed from the time domain to the frequency domain with a Fast Fourier Transform. One FFT data point can be expressed as a + bi. The magnitude √(a² + b²) is the signal amplitude (A). The angle between the positive real axis and the vector is the signal phase (φ), which can be expressed as arctan(b/a) for a > 0 or arctan(b/a) + 180◦ for a < 0. An example FFT data plot is shown in Figure 5.5. Each peak that is substantially above the noise floor represents a detected object. The peak location on the frequency axis is the frequency of one IF signal component (e.g. Δfa).

Figure 5.5: FFT data

5.5 Distance Estimation

Substituting the peak frequency into equation 2.4 gives the distance between the transmitting antenna and the object. However, the detected peak location might not be the actual peak location, since the digitized data is a discrete signal. In Figure 5.6, we simply connect all the data points with straight lines, and the resulting curve is not smooth. We apply an interpolation algorithm that uses the existing data points to create new points between them, so that the plot is closer to a continuous curve. We interpolate 32 additional points between each pair of neighboring original points with a MATLAB spline interpolation function [16]. The result is shown in Figure 5.7. The peaks in Figure 5.7 are closer to the actual peaks, so we can use these peak frequencies to obtain more accurate distance estimates. All four receiving antennas receive reflected signals, and each received signal yields its own distance estimate. These distance estimates are very similar.
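The preprocessing and distance estimation steps above (average the repeated chirps, FFT, locate the peak, refine it, convert frequency to distance) can be sketched in Python. One hedge: the thesis refines peaks with a MATLAB spline interpolation [16], while this sketch substitutes zero-padding of the FFT as a comparable refinement, so it only approximates the actual pipeline:

```python
import numpy as np

C = 3e8            # speed of light, m/s
S = 40e6 / 1e-6    # chirp slope, Hz/s
FSAM = 6e6         # ADC sampling rate, Hz
K = 512            # ADC samples per chirp

def estimate_distance(chirps, refine=32):
    """chirps: (N, K) complex IF samples from N repeated chirps.
    Averages the chirps to suppress noise, takes a zero-padded FFT (the
    padding stands in for the thesis's 32-point spline interpolation),
    and converts the refined peak frequency into a distance (equation 2.4)."""
    avg = chirps.mean(axis=0)
    spectrum = np.fft.fft(avg, n=K * refine)
    peak_bin = np.argmax(np.abs(spectrum[: K * refine // 2]))
    delta_f = peak_bin * FSAM / (K * refine)
    return C * delta_f / (2 * S)

# Synthetic example: an object 1.5 m away produces a 400 kHz IF tone.
rng = np.random.default_rng(0)
t = np.arange(K) / FSAM
tone = np.exp(1j * 2 * np.pi * 400e3 * t)
chirps = np.stack([tone + 0.1 * rng.standard_normal(K) for _ in range(16)])
print(estimate_distance(chirps))   # close to 1.5 m
```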
Therefore, we take the average of all the distance estimates as the final distance estimate.

Figure 5.6: Original FFT plot

5.6 AoA Estimation

From section 2.2.3, we know that the angle of arrival can be estimated from the phase difference. The phase of the received signal can be calculated from the FFT data, as described in section 5.4. Assume that the FFT data of one peak in Figure 5.5 is a1 + b1i; in exponential form, a1 + b1i is equal to A1e^{iφ1}. Similarly, the peak at the same frequency in the FFT plots of RX2, RX3 and RX4 can be expressed as A2e^{iφ2}, A3e^{iφ3} and A4e^{iφ4}, respectively. From section 2.2.3, the phase difference between RX1 and RX2 is Δφ, the phase difference between RX1 and RX3 is 2Δφ, and the phase difference between RX1 and RX4 is 3Δφ. Using the FFT data to express this relationship, we get:

φ2 − φ1 = Δφ
φ3 − φ1 = 2Δφ
φ4 − φ1 = 3Δφ    (5.6)

In exponential form, we have

e^{iφ2} = e^{iφ1} e^{iΔφ}
e^{iφ3} = e^{iφ1} e^{i2Δφ}
e^{iφ4} = e^{iφ1} e^{i3Δφ}    (5.7)

Figure 5.7: FFT plot after interpolation

Theoretically, we only need one pair (φ1 and φ2) to extract the value of Δφ. However, errors occur in the processes of phase modulation, IF signal production, and signal digitization. Therefore, we want to use as many pairs as possible to extract the value of Δφ precisely. To extract the value of Δφ with the 1 TX and 4 RX antenna array, we define a function

y = A1e^{iφ1}e^{−i0} + A2e^{iφ2}e^{−ix} + A3e^{iφ3}e^{−i2x} + A4e^{iφ4}e^{−i3x}    (5.8)

where A1e^{iφ1}, A2e^{iφ2}, A3e^{iφ3} and A4e^{iφ4} are the exponential forms of the FFT data a1 + b1i, a2 + b2i, a3 + b3i and a4 + b4i, respectively, at the frequency of the same peak, and x is a variable. When x is equal to Δφ, combining equations 5.7 and 5.8 gives

y = A1e^{iφ1}e^{−i0} + A2e^{iφ1}e^{iΔφ}e^{−iΔφ} + A3e^{iφ1}e^{i2Δφ}e^{−i2Δφ} + A4e^{iφ1}e^{i3Δφ}e^{−i3Δφ}
  = A1e^{iφ1} + A2e^{iφ1} + A3e^{iφ1} + A4e^{iφ1}
  = (A1 + A2 + A3 + A4)e^{iφ1}    (5.9)

and the magnitude of y reaches its maximum value. The interval of Δφ is [−π, π], as proved in equation 2.10.
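The Δφ extraction in equations 5.8 and 5.9 can be sketched directly: sweep the variable x over [−π, π] and keep the x that maximizes |y|. This is an illustrative reimplementation, not the thesis's own code, and the synthetic input assumes ideal, noise-free phases:

```python
import numpy as np

def estimate_delta_phi(fft_points, grid=4096):
    """fft_points: complex FFT values [A_k * e^{i phi_k}] of the same peak at
    RX1..RX4. Evaluates y(x) = sum_k z_k * exp(-i * (k-1) * x) on a grid of x
    in [-pi, pi] (equation 5.8) and returns the x that maximizes |y|."""
    x = np.linspace(-np.pi, np.pi, grid)
    k = np.arange(len(fft_points))
    # y has shape (grid,): one value of the objective per candidate x.
    y = (np.asarray(fft_points)[None, :] * np.exp(-1j * k[None, :] * x[:, None])).sum(axis=1)
    return x[np.argmax(np.abs(y))]

# Synthetic peak data: phases advance by a true delta_phi = 0.8 rad across RX1..RX4.
true_dphi = 0.8
z = [np.exp(1j * (0.3 + i * true_dphi)) for i in range(4)]
dphi = estimate_delta_phi(z)
print(dphi)                        # close to 0.8
# Converting delta_phi to an angle as in equation 2.7, here assuming an ideal
# half-wavelength antenna spacing (an assumption of this sketch):
theta = np.arcsin(dphi / np.pi)
print(np.degrees(theta))
```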
Thus, we let the variable x vary from −π to π. When the magnitude of y reaches its maximum, the corresponding value of x is the desired Δφ. Feeding the calculated Δφ into equation 2.7 then yields the angle of arrival. When we use the 2 TX and 4 RX antenna array, more pairs become available.

5.7 Distance Calibration

During testing, we noticed an approximately 6 cm error in the distance estimation. This issue might be caused by the physical antennas: the signal travels much more slowly on the board, so the frequency of the chirp has already changed considerably by the time the signal leaves the board. Fortunately, this error remains roughly constant no matter how the object-to-sensor distance changes. To correct the distance estimation error, we placed an object in front of the radar at various known distances and measured the distance with our system, as shown in Figure 5.8. The difference between the known distance and the measured distance is taken as the distance calibration value, which is used in the distance calibration process. We varied the known distance from 0.5 m to 2.3 m and calculated the average error, as shown in Table 5.1. The final distance calibration value is 6.52 cm.

5.8 Object Location and Shape Estimation

Due to specular reflection, our system can only capture the reflected signal from one small segment of the detected object's surface.
In order to estimate the object's surface shape, orientation, curvature, boundaries, and 2D location, our sensor has to collect reflected signal data at multiple locations on a trajectory (a straight line) so that many small segments of the object's surface can be detected.

Table 5.1: Distance estimation error measurement

actual distance (m)   estimated distance (m)   error (m)
0.5                   0.5699                   0.0699
0.6                   0.6609                   0.0609
0.7                   0.7601                   0.0601
0.8                   0.8645                   0.0645
0.9                   0.9671                   0.0671
1                     1.0595                   0.0595
1.1                   1.1656                   0.0656
1.2                   1.2703                   0.0703
1.3                   1.3613                   0.0613
1.4                   1.4574                   0.0574
1.5                   1.5714                   0.0714
1.6                   1.6689                   0.0689
1.7                   1.7609                   0.0609
1.8                   1.8612                   0.0612
1.9                   1.9724                   0.0724
2                     2.0658                   0.0658
2.1                   2.1585                   0.0585
2.2                   2.2721                   0.0721
2.3                   2.3717                   0.0717

average error (m): 0.0652

Figure 5.8: Experiment of distance estimation error measurement

Each small segment is represented by a 2D location point. All the points are then integrated into a curve, the recovered surface, by a MATLAB polynomial curve fitting function [17]. For a planar surface, there exists a "parallel path" along which the angle of arrival is nearly the same as the sensor scans the object along the trajectory. For the points estimated from data collected outside this path, the locations should be very close together, because the sensor is actually receiving the reflected signal from the object's surface boundary, as shown in Figure 5.9. The points on the top are the estimated points on the object's surface; the points on the bottom are the sensor locations, defined as the middle position of the four-element receiving antenna array. Therefore, when there exists a set of very similar AoA values, we conclude that the target object's surface is flat. The estimated boundary is determined as the midpoint of all points clustered at the surface boundary, and all the points are fitted to a linear function. For a curved surface, the AoA is always changing. In principle, many detected points would also accumulate at the surface boundaries.
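The planar-versus-curved decision and the curve fitting described above can be sketched as follows. The thesis uses MATLAB's polynomial curve fitting [17]; this sketch substitutes NumPy's polyfit, and the AoA similarity threshold and the majority rule are illustrative assumptions rather than the thesis's exact criteria:

```python
import numpy as np

AOA_TOL = 2.0  # degrees; illustrative threshold for "nearly the same AoA"

def classify_and_fit(points, aoas):
    """points: (M, 2) estimated 2D surface points; aoas: per-point angle of
    arrival in degrees. If most AoAs agree within AOA_TOL, treat the surface
    as planar and fit a line; otherwise fit a quadratic (section 5.8)."""
    aoas = np.asarray(aoas)
    # Count how many AoAs fall within AOA_TOL of the median AoA.
    close = np.abs(aoas - np.median(aoas)) < AOA_TOL
    planar = close.mean() > 0.5
    degree = 1 if planar else 2
    coeffs = np.polyfit(points[:, 0], points[:, 1], degree)
    return ("planar" if planar else "curved"), coeffs

# Synthetic planar surface near y = 1.465 m: the AoAs stay almost constant.
xs = np.linspace(0.5, 1.8, 20)
pts = np.column_stack([xs, 1.465 + 0.002 * np.sin(xs)])
shape, coeffs = classify_and_fit(pts, aoas=np.full(20, 0.5))
print(shape)   # "planar", fitted by a nearly horizontal line
```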
However, our curved test object is a cylinder. It is impossible to receive a reflection from its exact boundaries, because the specular reflection path must lie along the line connecting the sensor and the cylinder's center, and this line cannot pass through a boundary. Therefore, in this case there is no large cluster of detected points at the surface boundaries. The estimated boundary is instead determined as the farthest point from the surface center. All the points are fitted to a quadratic function.

Figure 5.9: Detection result of a planar surface

CHAPTER 6 SYSTEM EVALUATION

6.1 Experimental Setup

Our system prototype consists of the AWR1443BOOST, a portable charger, and a tablet (see Figure 6.1). The components are connected via a micro USB cable and a USB-to-barrel-jack cable. The AWR1443BOOST is mounted on two brackets so that the sensing device can be placed vertically. The portable charger is an Anker PowerCore 26800; the tablet is a Microsoft Surface Pro. The software we use is Code Composer Studio [18] and Tera Term [19]. Code Composer Studio is used to connect the AWR1443BOOST to the tablet, activate it, and read the ADC data from its memory. Tera Term is used to enter the chirp configuration commands and to turn the AWR1443BOOST on and off. As discussed in section 5.2, we always send the same chirp with a starting frequency of 77 GHz, a slope of 40 MHz/us, and a duration of 100 us. We perform the experiments in a building corridor (of size 50 m × 3.65 m). To minimize the impact of obstacles, we place our radar with its direction perpendicular to the wall, as shown in Figure 6.2(a). A metal rectangular box (see Figure 6.3(a)) and a metal cylinder (see Figure 6.3(b)) are our test objects. The width of the longer sides of the metal rectangular box is 47 cm and the width of the shorter sides is 22.5 cm at the bottom. The diameter of the metal cylinder is 25.5 cm at the bottom.
As shown in Figure 6.2(a), we place a tape measure (0–3 m) parallel to the wall as the trajectory, and the object is placed at a random position in the interval between 1 m and 1.8 m. The sensor moves from position 0.1 m to 2.2 m with a granularity of 5 cm. At each location, the sensor transmits signals and collects the reflected signal data.

Figure 6.1: Prototype

Figure 6.2: Experiment explanation and corresponding coordinate system

6.2 Experimental Results

Our system estimates the object's surface shape, orientation, curvature, boundaries, and 2D location. We construct a coordinate system (see Figure 6.2(b)) to display the estimated points, the recovered surface curve, and the actual surface curve. The origin is the starting point (0 m) of the trajectory. The x-axis is the extension of the trajectory, with the left side negative. The y-axis is perpendicular to the x-axis, with the side towards the object positive. We quantify the accuracy by calculating the absolute error of the surface boundaries, surface location, and orientation.

Figure 6.3: Test objects

When the sensor directly faces the object (the angle of arrival is close to 0◦), the reflection strength is strongest. As the magnitude of the AoA increases, the reflection strength becomes weaker and weaker. During the process of collecting data along the trajectory, some received signals are very weak at positions close to both ends of the trajectory. We discard received signals whose strength is lower than 46 dB (the noise floor). We define the ground level of the objects' surfaces as the ground truth. Although the objects' surfaces are not perfectly perpendicular to the ground, we ignore the small difference because the antennas are not much higher than the ground level. As mentioned in section 5.2, the total number of transmitted chirps per TX is 16 when we activate 1 TX and 4 RX, due to the limitation of the AWR1443BOOST's memory size.
When 2 TX and 4 RX are used, the total number of transmitted chirps per TX decreases to 8. Thus, the data is more stable when 1 TX and 4 RX are used, but a doubled antenna array is achieved when 2 TX and 4 RX are used. We compare the performance of the 1 × 4 and 2 × 4 antenna arrays in the experiments. For the planar surface, we vary the object-to-sensor distance between 1 m and 2.2 m, and the object's surface-to-trajectory angle between 0◦ and 15◦ (in order to capture the specular reflection). For the curved surface, the object-to-sensor distance varies from 0.7 m to 1.5 m. Figures 6.4–6.7 show the recovered surface versus actual surface plots for planar surfaces at different distances to the trajectory and with different surface-to-trajectory angles. Figures 6.8 and 6.9 show the recovered surface versus actual surface plots for convex surfaces at different distances to the trajectory. We use the absolute error of the boundary estimation, the maximum distance between the recovered surface and the actual surface, and the orientation estimation error to express the difference between the recovered surface and the actual surface, as shown in Table 6.1. From Figures 6.4–6.9 and Table 6.1, we can see that the overall performance of the 2 × 4 antenna array is better than that of the 1 × 4 antenna array. The recovered surface from the 2 × 4 array is closer to the actual surface. The average boundary estimation error of the 2 × 4 array is 2.34 cm, while that of the 1 × 4 array is 4.86 cm. The average maximum distance error between the recovered surface and the actual surface is 2.44 cm for the 2 × 4 array and 4.15 cm for the 1 × 4 array. The average orientation error for planar surfaces is 1.23◦ for the 2 × 4 array and 2.09◦ for the 1 × 4 array. These results show that more transmitting and receiving antennas improve the estimation accuracy.
Comparing Figure 6.4 with Figure 6.5, and Figure 6.8 with Figure 6.9, we can see that the estimation errors increase as the object-to-sensor distance increases. The reason is that the signal-to-noise ratio decreases as the object-to-sensor distance increases. The estimated points at the planar surface boundaries are slightly scattered because the metal rectangular box is not a perfect rectangle: its corners are slightly convex. For the cylinder, the recovered curve is very close to the actual surface when the AoA is small. As the sensor moves from the center towards the boundaries of the convex surface, the recovered curve becomes less and less accurate. This is reasonable, because the AoA estimation accuracy decreases as the AoA increases, as shown in section 2.2.3. This is also why the estimation accuracy decreases when the planar surface slope increases.

Figure 6.4: Planar surface parallel to the trajectory at 1.465 m distance

Figure 6.5: Planar surface parallel to the trajectory at 2.2 m distance

Figure 6.6: Planar surface with an angle of 6.18◦ to the trajectory at 1.465 m distance

Figure 6.7: Planar surface with an angle of 12.42◦ to the trajectory at 1.465 m distance

6.3 Detecting Multiple Objects

We did not detect the two test objects at the same time as an explicit multiple-object experiment. However, in the experiments on detecting a single object, the reflection signal from the wall is also captured. To recover the wall's surface, we use the data collected with the 2 × 4 antenna array in the experiment on a planar surface parallel to the trajectory at 1.465 m distance, while the sensor moved from position 0.1 m to 1 m. The result is shown in Figure 6.10 and demonstrates that our system is able to detect multiple objects.
Figure 6.8: Convex surface at 0.73 m distance

Figure 6.9: Convex surface at 1.465 m distance

Table 6.1: Absolute error

               left boundary   right boundary   max distance   orientation
               error (cm)      error (cm)       error (cm)     error (deg)
Figure 6.4(a)  1.11            8.56             3.06           2.47
Figure 6.4(b)  2.35            3.63             2.05           0.56
Figure 6.5(a)  1.37            10.85            2.35           3.35
Figure 6.5(b)  1.76            4.94             2.08           2.14
Figure 6.6(a)  1.85            4.72             0.92           0.53
Figure 6.6(b)  2.55            2.83             1.03           0.87
Figure 6.7(a)  2.6             8.91             2.28           2.002
Figure 6.7(b)  2.16            3.95             1.82           1.34
Figure 6.8(a)  0.28            4.34             4.44           /
Figure 6.8(b)  0.33            0.83             1.57           /
Figure 6.9(a)  1.86            11.86            11.86          /
Figure 6.9(b)  1.06            1.69             4.38           /

Figure 6.10: Detection result of the wall

CHAPTER 7 LIMITATIONS AND FUTURE WORK

Our work has a number of limitations. In particular, the memory size limits the number of transmitted chirps per detection process. The data transmission speed through the micro USB cable is not fast enough for real-time ADC data transmission, so we have to store the ADC data in the radar's memory and then extract it to the PC, but the memory size is very limited (it only supports 16 chirps). The TSW1400 [20] and the DevPack [21] are devices that can help the AWR1443BOOST capture and transmit real-time ADC data to the PC. We could then transmit more chirps per detection process, so that the averaged data would be less affected by noise or random interference. In addition, the granularity on the trajectory is not small enough. We set the granularity to 5 cm because collecting the data would take much more time if we decreased it. The movement of the sensor along the trajectory is done by hand, and it takes about one hour to collect data at 25 different positions. To solve this issue, we can put the radar on a robot car: the robot moves along the trajectory at a constant speed while our radar keeps transmitting and receiving chirps. With that, a smaller granularity can be achieved, so more points would be estimated on the object's surface.
The recovered surface would be more accurate, and the boundary estimation in particular would be improved. The monostatic synthetic aperture radar algorithm [2] could also be applied to improve the accuracy. With the help of the TSW1400, the DevPack, and a robot car, the trajectory could become a closed rectangular loop instead of a straight line. This means the radar would be able to scan all sides of the object, so the entire shape of the object could be estimated, not just one side. Furthermore, a navigation system could be implemented, since the radar would be able to detect the real-time position of the object: once the object's location is determined, the PC can send the corresponding action commands to the robot car. 3D localization may also be achievable in the future. There is one transmitting antenna on the AWR1443BOOST that we have not used in our system. It sits in the middle position between the other two transmitting antennas and is also mounted higher than they are. Therefore, there exists a horizontal phase difference (φx) and a vertical phase difference (φy) when we compare the signals received at the same receiving antenna from different transmitting antennas, and the vertical angle can be derived from the vertical phase difference. We did not test 3D localization in our experiments because the memory size is limited: if three transmitting antennas are enabled, the number of transmitted chirps per transmitting antenna decreases to 5, which is too small for accurate sensing.

CHAPTER 8 CONCLUSION

In this thesis, we describe the design, development, and evaluation of a MIMO FMCW millimeter wave sensing system that is able to estimate an object's surface shape, orientation, curvature, boundaries, and 2D location. Our sensing system is able to distinguish the surface type correctly.
For planar surfaces, the boundary estimation error is bounded by 4.94 cm, the surface-to-radar distance estimation error is bounded by 2.08 cm, and the orientation estimation error is bounded by 2.14◦ with the 2 × 4 antenna array. For curved surfaces, the maximum boundary estimation error is 1.69 cm and the maximum surface-to-radar distance estimation error is 4.38 cm. Based on these promising results, we believe this thesis makes a unique contribution to mmWave-based object location and shape estimation. We hope our work can improve existing autonomous systems and bring value to society in the future.

BIBLIOGRAPHY

[1] "Amazon Prime Air," https://www.youtube.com/watch?v=vNySOrI2Ny8.

[2] M. Soumekh, Synthetic Aperture Radar Signal Processing. New York: Wiley, 1999, vol. 7.

[3] M. A. Richards, Fundamentals of Radar Signal Processing. Tata McGraw-Hill Education, 2005.

[4] F. Adib, Z. Kabelac, D. Katabi, and R. C. Miller, "3D tracking via body radio reflections," in NSDI, vol. 14, 2014, pp. 317–329.

[5] F. Adib, Z. Kabelac, and D. Katabi, "Multi-person localization via RF body reflections," in NSDI, 2015, pp. 279–292.

[6] F. Adib, C.-Y. Hsu, H. Mao, D. Katabi, and F. Durand, "Capturing the human figure through a wall," ACM Transactions on Graphics (TOG), vol. 34, no. 6, p. 219, 2015.

[7] R. Nandakumar, S. Gollakota, and N. Watson, "Contactless sleep apnea detection on smartphones," in Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services. ACM, 2015, pp. 45–57.

[8] R. Nandakumar, V. Iyer, D. Tan, and S. Gollakota, "FingerIO: Using active sonar for fine-grained finger tracking," in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016, pp. 1515–1525.

[9] T. Wei and X. Zhang, "mTrack: High-precision passive tracking using millimeter wave radios," in Proceedings of the 21st Annual International Conference on Mobile Computing and Networking.
ACM, 2015, pp. 117–129.

[10] Z. Yang, P. H. Pathak, Y. Zeng, X. Liran, and P. Mohapatra, "Monitoring vital signs using millimeter wave," in Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing. ACM, 2016, pp. 211–220.

[11] J. Lien, N. Gillian, M. E. Karagozler, P. Amihood, C. Schwesig, E. Olson, H. Raja, and I. Poupyrev, "Soli: Ubiquitous gesture sensing with millimeter wave radar," ACM Transactions on Graphics (TOG), vol. 35, no. 4, p. 142, 2016.

[12] Y. Zhu, Y. Zhu, Z. Zhang, B. Y. Zhao, and H. Zheng, "60GHz mobile imaging radar," in Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications. ACM, 2015, pp. 75–80.

[13] Y. Zhu, Y. Zhu, B. Y. Zhao, and H. Zheng, "Reusing 60GHz radios for mobile radar imaging," in Proceedings of the 21st Annual International Conference on Mobile Computing and Networking. ACM, 2015, pp. 103–116.

[14] Y. Zhu, Y. Yao, B. Y. Zhao, and H. Zheng, "Object recognition and navigation using a single networking device," in Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services. ACM, 2017, pp. 265–277.

[15] "AWR1443BOOST," http://www.ti.com/tool/AWR1443BOOST.

[16] "interp1," https://www.mathworks.com/help/matlab/ref/interp1.html.

[17] "polyfit," https://www.mathworks.com/help/matlab/ref/polyfit.html.

[18] "Code Composer Studio," http://www.ti.com/tool/CCSTUDIO.

[19] "Tera Term," https://ttssh2.osdn.jp/index.html.en.

[20] "TSW1400," http://www.ti.com/tool/TSW1400EVM.

[21] "mmWave Development Pack," http://www.ti.com/tool/MMWAVE-DEVPACK.