WIRELESS PHASE AND FREQUENCY SYNCHRONIZATION FOR DISTRIBUTED PHASED ARRAYS

By

Serge R. Mghabghab

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Electrical Engineering – Doctor of Philosophy

2022

ABSTRACT

WIRELESS PHASE AND FREQUENCY SYNCHRONIZATION FOR DISTRIBUTED PHASED ARRAYS

By Serge R. Mghabghab

Distributed microwave wireless systems have the potential to dramatically reshape wireless technologies due to their ability to provide improvements in robustness, transmit power, antenna gain, spatial and temporal resolution, size, scalability, secrecy, flexibility, and cost compared to single-platform wireless systems. Traditional wireless systems use a platform-centric model, where improving capabilities generally necessitates hardware retrofitting, which in many cases results in a bulky, expensive, and inefficient system. Distributed microwave wireless systems, however, require precise coordination to enable cooperative operation. The most highly synchronized systems coordinate at the wavelength level, supporting coherent distributed operations such as beamforming. The electrical states that need to be synchronized in coherent distributed arrays are mainly the phase, frequency, and time; the synchronization can be accomplished using multiple architectures. All coordination architectures fall under two categories: open loop and closed loop. While closed-loop systems use feedback from the destination, open-loop coherent distributed arrays must synchronize their electrical states by relying only on synchronization signals originating within the array rather than on feedback signals from the target. Although harder to implement, open-loop coherent arrays enable sensing and other sensitive communications applications where feedback from the target is not possible.
In this thesis, I focus on phase alignment and frequency synchronization for open-loop coherent distributed antenna arrays. Once the phase and frequency of all the nodes in the array are synchronized, it is possible to coherently beamform continuous-wave signals. When information is modulated on the transmitted continuous waves, time alignment between the nodes is also needed. However, time alignment is generally less stringent, since its requirements depend on the information rate rather than on the beamforming frequency, as is the case for phase and frequency synchronization. Beamforming at 1.5 GHz is demonstrated in this thesis using a two-node open-loop distributed array. In the presented architecture, the phases of the transmitting nodes are aligned using synchronization signals from within the array, without any feedback from the destination. A centralized phase alignment approach is demonstrated, where the secondary node(s) minimize their phase offsets relative to the primary node by locating the primary node and estimating the phase shift imparted by the relative motion of the nodes. A high-accuracy two-tone waveform is used to track the primary node using a cooperative approach. This waveform is tested with an adaptive architecture to overcome performance degradation due to weather conditions and to allow high ranging accuracy with a minimal spectral footprint. Wireless frequency synchronization is implemented using a centralized approach that allows phase tracking, such that the frequencies of the secondary nodes are locked to that of the primary node. Once the phase and frequency of all the nodes are synchronized, it is possible to coherently beamform in the far field as long as the synchronization is achieved with the desired accuracy. I evaluate the required localization accuracies and frequency synchronization intervals.
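The kind of evaluation described above can be sketched with a simple Monte Carlo model. This is an illustrative sketch only: the two-node array, the 0.9 coherent-gain threshold, and the mapping of ranging error to phase error as σ_φ = 2πσ_d/λ are assumptions for the example, not the dissertation's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_coherent_gain(sigma_d_lambda, n_nodes=2, threshold=0.9, trials=50_000):
    """Monte Carlo probability that the coherent gain of an n_nodes-element
    distributed array exceeds `threshold`, assuming each node's phase error
    comes from a ranging error with standard deviation `sigma_d_lambda`
    (expressed in wavelengths)."""
    sigma_phi = 2 * np.pi * sigma_d_lambda              # assumed phase-error std, rad
    phases = rng.normal(0.0, sigma_phi, (trials, n_nodes))
    # Coherent gain: magnitude of the phasor sum, normalized by the ideal sum.
    gain = np.abs(np.exp(1j * phases).sum(axis=1)) / n_nodes
    return float(np.mean(gain >= threshold))

for sigma in (0.01, 0.05, 0.1):
    print(f"sigma_d = {sigma} lambda -> P(gain >= 0.9) = {prob_coherent_gain(sigma):.3f}")
```

Sweeping the ranging error in this way shows why localization accuracy at a small fraction of a wavelength is required before open-loop beamforming becomes reliable.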
More importantly, I experimentally demonstrate the first two-node open-loop distributed beamforming at 1.5 GHz in multiple scenarios where the nodes are in relative motion, showing the ability to coherently beamform in a dynamic array where no feedback from the destination is needed.

ACKNOWLEDGEMENTS

I would like to express my deepest appreciation to my advisor Dr. Jeffrey Nanzer for all the support and help that he has given me throughout my Ph.D. Your valuable expertise, encouragement, and guidance helped me overcome all the difficulties and achieve my goals. I am glad that being part of your team made me a part of the Electromagnetics Research Group at Michigan State University as well. I would also like to express my deepest appreciation to my committee: Dr. Ming Yan, Dr. Prem Chahal, and Dr. Shanker Balasubramaniam. You always had a positive attitude, helped me with my questions and concerns, and made my journey easy. In addition to my committee, I would like to thank Dr. Edward Rothwell for the helpful advice and mentoring. Thank you to the Defense Advanced Research Projects Agency and the Office of Naval Research for funding my education and research, and for providing me with the tools that I needed to make a difference. In addition, I am deeply grateful to all the friends and colleagues that I have met at MSU, in both the Electromagnetics Research Group and the DELTA group. Namely, I would like to thank Stavros Vakalis, Xenofon Konstantinou, Anton Schlegel, Daniel Chen, Jason Merlo, Abdel Alsnayyan, Omkar Ramachandran, Sean Ellison, Neda Nourshamsi, Ahona Bhattacharyya, Amer Abu Arisheh, Will Torres, Jorge Colon-Berrios, Cory Hilton, Michael Craton, and Vinny Gjokaj. Without you, my graduate life would have been lonelier and harder. Finally, I would like to thank my family, friends, and wife Sarah for believing in me and supporting me.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
LIST OF ALGORITHMS

CHAPTER 1 INTRODUCTION
1.1 Coordination of Principal Parameters
1.1.1 Time Alignment
1.1.2 Phase Alignment
1.1.3 Frequency Locking
1.2 Architectures of Coherent Distributed Arrays
1.3 Main Contribution of this Dissertation

CHAPTER 2 LOCALIZATION REQUIREMENTS FOR OPEN-LOOP COHERENT DISTRIBUTED ARRAYS
2.1 Coherent Gain
2.2 Ranging Requirements for Centralized Open-Loop Coherent Distributed Arrays
2.2.1 Ranging Requirements with Wired Frequency Synchronization
2.2.2 Ranging Requirements with Wireless Frequency Synchronization
2.3 Localization Requirements for Centralized Open-Loop Coherent Distributed Arrays
2.3.1 Localization using Range and Angle of Arrival
2.3.2 Localization using Multilateration

CHAPTER 3 FREQUENCY SYNCHRONIZATION REQUIREMENTS FOR OSCILLATORS AND PHASE-LOCKED LOOPS
3.1 Phase Noise and Operation of Oscillators and Phase-Locked Loops
3.1.1 Oscillator Phase Noise
3.1.2 General Architecture of PLLs
3.1.3 PLL Phase Noise
3.2 Phase Noise Interpretation of Locked and Unlocked Oscillators in Frequency and Time
3.2.1 Time Jitter
3.2.2 Allan Deviation and Frequency Drift
3.2.3 Time Drift
3.3 Distributed Beamforming Analysis
3.4 Practical Examples
3.4.1 Beamforming Performance with Continuous Synchronization
3.4.2 Beamforming Performance with Periodic Synchronization with Phase and Frequency Errors
3.4.3 Beamforming Performance with Periodic Synchronization with Phase, Frequency, and Timing Errors

CHAPTER 4 WIRELESS FREQUENCY SYNCHRONIZATION FOR CENTRALIZED SYSTEMS
4.1 Frequency Synchronization Circuit
4.2 Output of the Frequency Synchronization Circuit
4.3 Impact of Doppler Frequency
4.4 Phase Change Due to Frequency Synchronization Signal Propagation

CHAPTER 5 ADAPTIVE RANGING THAT SUPPORTS COHERENT BEAMFORMING OPERATIONS
5.1 CRLB of Range Estimates
5.1.1 CRLB of Typical Two-Tone Waveforms
5.1.2 CRLB of Multi-Tone Waveforms
5.2 High Accuracy Ranging
5.2.1 Minimization of Ranging Biases
5.2.1.1 Peak Estimators
5.2.1.2 Comparison Between the Peak Estimators
5.2.2 Scalability of Two-Tone Waveforms
5.3 Adaptive Ranging
5.3.1 Using the Standard Deviation of Multiple Range Estimates as Feedback
5.3.1.1 Adaptive Ranging Framework
5.3.1.2 Adaptive Indoor Ranging Experiment
5.3.1.3 Adaptive Outdoor Ranging Experiment
5.3.2 Using Only One Ranging Pulse as Feedback
5.3.2.1 SNR Estimation from One Pulse
5.3.2.2 Ranging Standard Deviation Estimation from SNR
5.3.2.3 SNR-Based Perception Design
5.3.2.4 Adaptive Ranging Results

CHAPTER 6 BEAMFORMING EXPERIMENTS
6.1 Effects of the Displacement of Nodes on the Beamforming Performance
6.1.1 Frequency Locking Circuit Receiver Displacement
6.1.2 Primary Transmitter Displacement
6.1.3 Full Node Displacement
6.2 Coherent Beamforming using Frequency Locking Circuit and Ranging
6.2.1 Range Estimation
6.2.2 Distributed Beamforming Experiment

CHAPTER 7 TWO-DIMENSIONAL LOCALIZATION TECHNIQUE THAT ENABLES BEAMFORMING
7.1 TOA Using Only One Secondary Node
7.2 Experimental Results
7.3 Discussion

CHAPTER 8 CONCLUSION

APPENDICES
APPENDIX A MATLAB RANGING REQUIREMENTS FOR COHERENT DISTRIBUTED ARRAYS
APPENDIX B MATLAB FREQUENCY SYNCHRONIZATION REQUIREMENTS FOR NODES WITH PLL
APPENDIX C MATLAB PEAK ESTIMATORS FOR RANGING
APPENDIX D MATLAB SNR ESTIMATION USING EITHER EIGENVALUE DECOMPOSITION OR MATCHED FILTERING
APPENDIX E MATLAB TIME OF ARRIVAL (TOA) ALGORITHM
APPENDIX F MATLAB CRAMER-RAO LOWER BOUND FOR TIME OF ARRIVAL (TOA)

BIBLIOGRAPHY

LIST OF TABLES

Table 3.1: SSB phase noise of the selected 100 MHz oscillators.

Table 5.1: Ranging RMSE for the three ranging algorithms.

Table 7.1: Standard deviation (STD) of the estimated locations from experimental measurements versus CRLB in meters. From [1] © 2021 IEEE.

LIST OF FIGURES

Figure 1.1: General architecture of coherent distributed arrays, where the nodes achieve synchronization using inter-node signals and external signals from the targeted location. Once all the nodes are synchronized, the transmitted signals add coherently at the target and provide an increased signal power.
Figure 1.2: Two ideal received signals along with their summation from two perfectly synchronized nodes.

Figure 1.3: Two ideal received signals along with their summation from two nodes performing phase and frequency synchronization without time alignment.

Figure 1.4: Two ideal received signals along with their summation from two nodes performing time and frequency synchronization without phase alignment.

Figure 1.5: Two ideal received signals along with their summation from two nodes performing time and phase synchronization without frequency synchronization.

Figure 1.6: Receiver-coordinated beamforming using explicit feedback (adapted from [2]). The receiver allocates the channel-sounding time slots along with the beamforming time slot. The nodes transmit an uncompensated carrier in their allocated sounding slots, and the target then transmits back the feedback signals so the nodes can synchronize their phase and frequency. Once synchronized, the nodes beamform in the dedicated beamforming slot. From [3] © 2021 IEEE.

Figure 1.7: Retrodirective beamforming, where time slots are assigned so that the nodes can synchronize their frequencies to that of the target. The channels between the nodes and the target are estimated, and a phase offset is applied so that the secondary nodes are phase aligned to the phase of the primary node. From [3] © 2021 IEEE.

Figure 1.8: Open-loop distributed phased arrays beamforming at an angle θ0 in the far field. Time, phase, and frequency are synchronized using internode signaling in a centralized architecture. Localizing the primary and/or the secondary nodes is important for aligning the phases.
Figure 2.1: Two nodes with a hierarchical architecture beamforming at a target in the far field. Frequency synchronization is achieved using a cabled connection, where the primary node transmits a reference frequency signal to lock the PLL of the secondary node. Beamforming is done by accounting for the phase shift produced by the relative motion between the two transmitters.

Figure 2.2: Probability of the coherent gain exceeding 0.9 versus ranging RMSE σd in terms of the signal wavelength λ, considering the effects of phase changes where no wireless frequency synchronization is considered.

Figure 2.3: Probability of the coherent gain exceeding 0.9 versus the transmission orientation in the case where frequency synchronization does not contribute to the phase errors. From [4] © 2020 IEEE.

Figure 2.4: Two nodes with a hierarchical architecture beamforming at a target in the far field. Frequency synchronization is achieved wirelessly, where the secondary node locks its oscillator phase to that of the incoming synchronization signals from the primary node. Beamforming is done by accounting for the phase shift produced by the relative motion between the two transmitters and between the synchronization antennas. From [4] © 2020 IEEE.

Figure 2.5: Probability of the coherent gain exceeding 0.9 versus ranging RMSE σd in terms of the signal wavelength λ, considering the effects of phase changes where wireless frequency synchronization is present (wireless frequency synchronization using phase-locking approaches). From [4] © 2020 IEEE.

Figure 2.6: Probability of the coherent gain exceeding 0.9 versus the transmission orientation in the case where wireless frequency synchronization contributes to the phase errors. From [4] © 2020 IEEE.
Figure 2.7: Fifty thousand Monte Carlo simulations showing various threshold values for achieving coherent gain. The probabilities were estimated as a function of angle and localization standard deviation. The strictest requirements for localization uncertainty appear around 90°.

Figure 2.8: Probability of the coherent gain exceeding 0.9 versus the transmission orientation in the case where wireless frequency synchronization contributes to the phase errors. These results consider the case where the primary node locks its frequency to that of the secondary node in a two-node array. From [4] © 2020 IEEE.

Figure 2.9: Hierarchical open-loop coherent distributed array beamforming at an angle θ0. The angles θ1 and θ2 represent the angles of the primary node relative to the planes of the secondary nodes. From [5] © 2021 IEEE.

Figure 2.10: Radial error √(σ²_xn + σ²_yn) in terms of λ for maximum and minimum coordinates of ±10 m for x and y in the case of σ_dn = 0.01λ and σ_θn = 1°/m. From [5] © 2021 IEEE.

Figure 2.11: Probability of the coherent gain exceeding 0.9 versus the maximum allowed x and y coordinates, where σ_dn = 0.01λ and σ_θn = 0.1°/m. From [5] © 2021 IEEE.

Figure 2.12: The required σ_θ to maintain a 0.9 coherent gain with 90% probability for 10 nodes with multiple values of σ_d versus the maximum allowed coordinates for x and y. From [5] © 2021 IEEE.
Figure 2.13: Hierarchical open-loop coherent distributed arrays that rely on multilateration using (a) TOA and (b) TDOA measurements, with circles (in (a)) and hyperbolas (representing the time difference of arrival between every pair of nodes, in (b)), respectively, to estimate the location of the primary node and synchronize the phase of the secondary nodes.

Figure 2.14: CRLB of the transmitter localization in terms of radial error √(σ²_xn + σ²_yn) as a function of λ for TOA systems with (a) 3 nodes and σd = 0.01λ, (b) 3 nodes and σd = 0.1λ, (c) 3 nodes and σd = 0.1λ, (d) 4 nodes and σd = 0.1λ, (e) 4 nodes and σd = 0.05λ, (f) 5 nodes and σd = 0.1λ. The locations of the secondary nodes (anchors) are marked in red. The RMSE of the range (time delay) estimates is considered equal for all the anchors.

Figure 3.1: Phase noise profile of the wirelessly synchronized secondary nodes, where the reference oscillator has lower phase noise in comparison to the phase noise of the locked oscillator. From [6] © 2021 IEEE.

Figure 3.2: General shape of the phase noise of an oscillator. The noise power associated with each phase noise type is represented by bi. From [6] © 2021 IEEE.

Figure 3.3: Typical PLL architecture with a frequency synthesizer. Fref represents the input frequency to the PLL; this frequency can be a divided-down version of the input reference frequency, generated using frequency dividers. Fout is the output frequency of the VCO, while Ffeedback is a divided version of Fout. From [6] © 2021 IEEE.

Figure 3.4: PLL represented using a negative feedback model.

Figure 3.5: A three-element LPF with resistive and capacitive components.

Figure 3.6: PLL noise model.
Figure 3.7: PLL phase noise shape based on the selection of the loop bandwidth, where fLF represents the frequency of the loop filter of the PLL. The bandwidth can be: (a) optimum, (b) large, (c) narrow.

Figure 3.8: Typical PLL phase noise profile. In the first region, the phase noise of the reference signal dominates the PLL phase noise; the close-in noise dominates region 2, leading to an increase in the PLL noise floor; finally, region 3 is dominated mainly by the phase noise of the locked VCO. fLF, which represents the frequency of the loop filter of the PLL, needs to be selected appropriately, as shown in Fig. 3.7, to minimize the phase noise of the PLL. From [6] © 2021 IEEE.

Figure 3.9: Example of clock jitter in the real case compared to the ideal time-domain performance of a clock, where no jitter is observed.

Figure 3.10: AVAR profile for an oscillator. τf,min represents the instance where the AVAR is mainly dominated by the random walk FM noise. Later on, when analyzing the RMS frequency drift, it is important to select a value τf ≥ τf,min. From [6] © 2021 IEEE.

Figure 3.11: Phase noise of the locked PLLs for architectures: (a) 1, (b) 2, and (c) 3. In architecture 1, VCO A was locked by the PLLs of the secondary nodes and was also used to generate the reference signals. In architecture 2, VCO A was replaced by VCO B. In architecture 3, VCO B was used to generate the reference signals, while VCO A was locked by the PLLs of the secondary nodes. In all the architectures, the loop filter was assumed to be designed appropriately to minimize the phase noise of the PLLs.
Figure 3.12: Coherent gain for various carrier frequencies for arrays of sizes N = 2, 10, 20, and 100 with continuous locking for architecture 1 (using the high phase noise oscillator as the reference and PLL VCO), with the phase noise profile from Fig. 3.11a.

Figure 3.13: Coherent gain for various carrier frequencies for arrays of sizes N = 2, 10, 20, and 100 with continuous locking for architecture 2 (using the low phase noise oscillator as the reference and PLL VCO), with the phase noise profile from Fig. 3.11b.

Figure 3.14: Coherent gain for various carrier frequencies for arrays of sizes N = 2, 10, 20, and 100 with continuous locking for architecture 3 (using the low phase noise oscillator as the reference and the high phase noise oscillator as the PLL VCO), with the phase noise profile from Fig. 3.11c.

Figure 3.15: Coherent gain for CW and BPSK signals with periodic synchronization, where the jitter and frequency drifts (CW and BPSK) and timing errors (BPSK) were considered for nodes equipped with VCO A. An array size of N = 100 was selected.

Figure 3.16: Coherent gain for CW and BPSK signals with periodic synchronization, where the jitter and frequency drifts (CW and BPSK) and timing errors (BPSK) were considered for nodes equipped with VCO B. An array size of N = 100 was selected.

Figure 3.17: Carrier frequencies supporting a coherent gain of 0.9 for periodic synchronization with array sizes N = 2 and 100, where the jitter, frequency drifts, and timing errors were considered for nodes equipped with either VCO A or B.
Figure 4.1: The adjunct self-mixing circuit that was used by the secondary nodes to lock their frequencies to that of the primary node. This circuit receives two-tone signals with 10 MHz tone separation and outputs a 10 MHz signal that can be fed to the input of the PLL (REF IN). A splitter was used at the input to split the received signals into ranging and frequency synchronization signals. From [7] © 2021 IEEE.

Figure 4.2: Block diagram of the adjunct self-mixing circuit for wireless frequency synchronization (adapted from [8]).

Figure 4.3: Block diagram of the experimental setup used for testing the spectrum at the output of the frequency locking circuit.

Figure 4.4: Power spectral density of both the reference clock and the output of the proposed frequency locking circuit. From [9] © 2021 IEEE.

Figure 4.5: Block diagram of the experimental setup used for testing the CW 10 MHz output signals from both frequency-locked and frequency-unlocked SDRs.

Figure 4.6: Power spectral density of the reference clock, the locked clock, and the unlocked one. From [9] © 2021 IEEE.

Figure 4.7: The effect of mismatch between dLO and dRF on the 10 MHz output from the mixer.

Figure 4.8: The observed Doppler shift for 10 MHz, 1 GHz, and 10 GHz signals in the case where the relative velocity between a primary and a secondary node is varied from 0 to 10 m/s.

Figure 4.9: Block diagram of the setup used to measure the phase shift produced by the frequency synchronization circuit when the antenna receiving the two-tone signal is displaced. From [8] © 2021 IEEE.
Figure 4.10: Experimental results showing the phase shifts between the primary and secondary node signals for multiple displacement distances. From [8] © 2021 IEEE.

Figure 5.1: The ideal spectrum of the two-tone waveforms expressed in (5.10).

Figure 5.2: The minimum achievable standard deviation/RMSE for two-tone CW ranging signals for various values of E/N0. From [10] © 2021 IEEE.

Figure 5.3: The ideal spectrum of the two-tone waveforms expressed in (5.17).

Figure 5.4: The ideal spectrum of the four-tone waveforms expressed in (5.18).

Figure 5.5: The ideal spectrum of the three-tone waveforms expressed in (5.18).

Figure 5.6: (a) The in-phase part of a two-tone signal with f1 = 53 kHz, f2 = 1.03 MHz, and a sampling frequency of fs = 2.5 MHz, showing discretizations equivalent to two different delays. (b) Output of the matched filter for both scenarios, showing the error due to discretization.

Figure 5.7: (a) Two-tone ranging pulse with f1 = 20 kHz and f2 = 7.52 MHz, along with a disambiguation pulse with fd = 1.875 MHz. These signals were sampled at 25 MSps. (b) Output of the matched filter for both the ranging pulse and the disambiguation pulse. From [10].

Figure 5.8: RSS of the normalized matched filter outputs for three time delay cases for a ranging waveform with f1 = 20 kHz, f2 = 4.5 MHz, fs = 10 MHz, and T = 99.8 µs. The three selected time delay cases reside within the same two discretization points. Every expected matched filter output h corresponds to a predicted time shift dth.

Figure 5.9: Experimental SDR setup. Extra cables were added to modify the received signal time delay.
Figure 5.10: Simulated and experimental results showing a comparison between the actual distances traveled by the signals and their estimates. The surface shows the simulated results, and the red dots show the experimental results of interpolation. From [11] © 2021 IEEE.

Figure 5.11: Simulated and experimental results showing a comparison between the actual distances traveled by the signals and their estimates. The surface shows the simulated results, and the red dots show the experimental results of NL-LS. From [11] © 2021 IEEE.

Figure 5.12: Simulated and experimental results showing a comparison between the actual distances traveled by the signals and their estimates. The surface shows the simulated results, and the red dots show the experimental results of MF-LS. From [11] © 2021 IEEE.

Figure 5.13: (a) The errors from Figs. 5.10, 5.11, and 5.12 for the various peak estimation algorithms for different cable lengths and bandwidths (average of 100 estimates). (b) Time delay based on the cable length (dotted orange line) and signal bandwidth (solid blue line) per sample.

Figure 5.14: The standard deviation for each ranging method along with the CRLB. The SNR for the first 100 samples was 32 dB and then decreased by around 2.9 dB for every added cable (every 100 samples), reaching an SNR of 14.6 dB. For some samples, the standard deviation of both the interpolation and NL-LS is smaller than the CRLB; this is due to the bias in the estimates. From [11] © 2021 IEEE.

Figure 5.15: Time domain of a TTSFW with Np = 5 and 50% duty cycle. From [7] © 2021 IEEE.
Figure 5.16: Time-frequency spectrum of a TTSFW with Np = 5 and 50% duty cycle. From [7] © 2021 IEEE.

Figure 5.17: Matched filter output for a TTSFW with Np = 4 compared to the matched filter output of pulsed two-tone waveforms (PTTW). From [7] © 2021 IEEE.

Figure 5.18: Adaptive inter-node ranging architecture that relies on the standard deviation of the range estimates. From [10] © 2021 IEEE.

Figure 5.19: A simplified representation of the PID controller that was used. In this figure, y[n] represents the system output, which is the standard deviation of the time-delay estimates of multiple consecutive ranging pulses.

Figure 5.20: Block diagram representing the hardware used for the indoor adaptive ranging experiment. From [12] © 2021 IEEE.

Figure 5.21: Experimental setup for adaptive ranging system measurements.

Figure 5.22: Ranging system response to the proportional gain Ku.

Figure 5.23: The behavior of the adaptive ranging system for a varying SNR.

Figure 5.24: The behavior of the adaptive ranging system for a varying reference point.

Figure 5.25: (a) Without phase and frequency synchronization, the nodes in the array cannot align their phases, since it is impossible to prevent the frequencies of the nodes from drifting relative to one another, and even when the frequency seems stable, it is not possible to transmit coherent phases. (b) Once both phase and frequency are synchronized, it is possible to transmit a common coherent frequency to the target; ranging is the part of localization that enables phase alignment. From [10].
112 Figure 5.26: Block diagram for the outdoor adaptive ranging experiment, where wireless frequency synchronization was implemented. Node A is the primary node, while node B represents the secondary node. Wireless frequency synchro- nization two-tone CW signals was transmitted at 910 MHz and 920 MHz, and the ranging two-tone adaptive waveform was transmitted initially at 2.45 GHz carrier and it was received at 5.8 GHz carrier. From [13] © 2021 IEEE. . . . . . 115 Figure 5.27: The outdoor adaptive inter-node cooperative ranging architecture. The same approach as in Fig. 5.18 was used, with the addition of a wireless frequency locked repeater. From [10]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 Figure 5.28: Each nodes was equipped with two antennas, one for transmit (TX) and one for receive (RX). The antennas on every node were separated by around 2 m, and the separation between the antennas of each node was 90 m. From [10]. . . 116 Figure 5.29: The hardware that was used to operate node A, including two SRDs. . . . . . . 117 Figure 5.30: The hardware that was used to operate node B, including one SDR and the adjunct self-mixing circuit. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 Figure 5.31: an image showing the weather sensor Davis 6250 Vantage Vue that was at- tached to the receive antenna of node A. . . . . . . . . . . . . . . . . . . . . . 119 Figure 5.32: (a) Ranging standard deviation for a 24 hours interval, where f1 = 20 kHz and f2 = 3.5 MHz. The effects of SNR changes and wind speed were shown in (b). Also the contribution of humidity and rain rate was presented in (c). . . . 120 Figure 5.33: Achievable coherent frequency using the obtained ranging standard devia- tion. Three probabilities for achieving coherent gain of at least 0.9 were investigated. The lines with solid colors represent the moving average with a window of 10, while the faded lines represent the individual estimates. . . . . . 
121 Figure 5.34: (a) The adaptively modified second tone f2 of the transmitted ranging pulses. (b) Ranging standard deviation for a 24 hours interval, where f1 = 20 kHz and f2 was adaptively modified. The SNR changes and wind speed are shown in (c), and the humidity and rain rate are presented in (d). . . . . . . . . . . . . 122 xvi Figure 5.35: Achievable coherent frequency using the obtained ranging standard deviation in the adaptive framework. Three probabilities for achieving coherent gain of at least 0.9 were investigated. The lines with solid colors represent the moving average with a window of 10, while the faded lines represent the individual estimates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 Figure 5.36: (a) The adaptively modified second tone f2 of the transmitted ranging pulses. (b) Ranging standard deviation for 1 week interval, where f1 = 20 kHz and f2 was adaptively modified. The SNR changes and wind speed are shown in (c), and the humidity and rain rate are presented in (d). . . . . . . . . . . . . . . 124 Figure 5.37: Achievable coherent frequency using the obtained ranging standard deviation in the adaptive framework. Three probabilities for achieving coherent gain of at least 0.9 were investigated. The lines with solid colors represent the moving average with a window of 10, while the faded lines represent the individual estimates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 Figure 5.38: An example of transmitted (orange) and received (blue) two-tone signal with 11 dB SNR. From [12] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . 128 Figure 5.39: A simulated example of the autocorrelation and cross-correlation of the pulsed two-tone signals that are shown in Fig. 5.38, where the ratio of amplitudes of the received to transmitted pulses is K = 0.25. From [12] © 2021 IEEE. . . . 129 Figure 5.40: Estimation accuracy of the eigenvalue decomposition-based SNR estimator. 
Three sampling rates were selected to evaluate the change in accuracy. From [12] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Figure 5.41: Estimation accuracy of the matched filter-based SNR estimator. One of the two tones had a ±90◦ (same results for both cases) phase shift in comparison to the filter; which leads to the lowest accuracy since only the in-phase sig- nal is being considered. Three sampling rates were selected to evaluate the change in accuracy. From [12] © 2021 IEEE. . . . . . . . . . . . . . . . . . . 131 Figure 5.42: Estimation accuracy of the matched filter-based SNR estimator. The initial phases of both the signal and filter were equal, leading to the highest accu- racy. Three sampling rates were selected to evaluate the change in accuracy. From [12] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 Figure 5.43: Experimental setup for testing the proposed SNR estimation method that re- lies on the matched filter output of the received pulses. The transmitter was connected to the receiver via 2 m coaxial cable with an additional 30 dB attenuator to ensure that the received power is within the acceptable range. . . . 133 xvii Figure 5.44: Measured SNR estimates using the matched filtering approach for a single pulse and 20 sequential pulses in a SDR-based system. The SNR was con- trolled by modifying the transmitted power. From [12] © 2021 IEEE. . . . . . . 133 Figure 5.45: Adaptive ranging is necessary when high ranging accuracy and low spectral footprint are needed. This can be done by implementing three processes that ensures that only the desired bandwidth is used. A perception process is used to estimate the SNR of the received ranging pulses and convert this SNR to an estimated standard deviation. The controller compares the estimated standard deviation to a desired value and sends the necessary feedback to the action block that generates the desired waveform. 
This generated waveform is used for ranging, until a new waveform is generated for the next loop. From [12] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 Figure 5.46: Measured standard deviation of the time delay estimator versus the scaled CRLB curve, demonstrating that the CRB from (5.16) can be scaled using a constant parameter to match the observed standard deviation. This exper- iment was done where no vibration or interference were present. From [12] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 Figure 5.47: Measured RMSE of ranging accuracy versus SNR (averaged over 100 mea- surements) showing only a 3.7 ps improvement when more than one pulse is used. From [12] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . . . 136 Figure 5.48: Experimental setup for the adaptive ranging system with the perception pro- cess that relies only on the estimated SNR. The two horn antennas were used for transmission and reception, and the corner reflector represented the target. From [12] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 Figure 5.49: Performance of the adaptive ranging approach with modified perception: (a) ranging performance; (b) tone separation; (c) SNR. The SNR was modified by changing the transmitted power. The adaptive system was able to keep up with the change in SNR until a higher bandwidth than what is available was needed. From [12] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . . 138 Figure 6.1: Block diagram of the setup used to measure the performance of the dis- tributed two-node beamforming experiment. From [14] © 2021 IEEE. . . . . . 141 Figure 6.2: Image of the experimental setup in the semi-enclosed arch range showing the distributed two-node beamforming experiment. From [14] © 2021 IEEE. . . . . 142 xviii Figure 6.3: Beamformed signals from two transmitting nodes at a 1.5 GHz carrier with a 100 kHz CW baseband signal. 
The secondary node was wirelessy frequency synchronized to the primary node using the adjunct self-mixing circuit, and the received phases were manually adjusted to achieve the highest possible coherent gain at the receiver. . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 Figure 6.4: Experimental results showing the coherent summation of the primary and secondary node signals for the case where the two-tone transmitter M2 was displaced. The transmitted signals were connected to the oscilloscope via cable and the data was recorded for different displacement distances. From [14] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 Figure 6.5: Coherent beamforming results showing the combined signal power of the two transmitted signals for multiple displacement distances of the primary node transmitter M1. From [14] © 2021 IEEE. . . . . . . . . . . . . . . . . . . 145 Figure 6.6: Coherent beamforming results showing the combined signal power of the two transmitted signals for multiple displacement distances of the primary node. From [14] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . . . 146 Figure 6.7: (a) Time domain representation of the localization waveform. (b) Frequency content of the localization waveform. . . . . . . . . . . . . . . . . . . . . . . . 148 Figure 6.8: Schematic of the distributed array used in this study, consisting of a primary and secondary node, beamforming to an arbitrary angle. . . . . . . . . . . . . . 149 Figure 6.9: 50,000 Monte Carlo simulation showing various threshold values for achiev- ing coherent gain. The probabilities were estimated as a function of angle and localization standard deviation. The strictest requirements for localiza- tion uncertainty appear at 90◦ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 Figure 6.10: Schematics showing the connections and setup of the distributed nodes and target. This setup was used for the wireless beamforming experiment. 
From [15] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 Figure 6.11: Open-loop coherent distributed array setup showing the target, primary, and secondary nodes. The antennas in this setup were connected to multiple SDRs, an Oscillator, and a frequency locking circuit. From [15] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 Figure 6.12: Experimental results showing the coherent beamforming results for the case where the secondary node was displaced. In this scenario, the signals were transmitted using log-periodic antennas, and their were received using a horn antenna. From [15] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . 153 xix Figure 7.1: Diagram showing the proposed TOA experiment, where the secondary node localizes the primary node using TTSFW signals. From [1] © 2021 IEEE. . . . 155 Figure 7.2: Experimental setup. From [1] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . 158 Figure 7.3: Square root of the summation of lower bounds for estimating x and y coor- dinates of the target at multiple locations. From [1] © 2021 IEEE. . . . . . . . 158 Figure 7.4: Ten thousand Monte Carlo simulations showing the average achievable co- herent gain for 1 GHz coherent signals using the CRLB results for the local- ization estimates. Arrays of 2 and 6 nodes were analyzed. From [1] © 2021 IEEE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 xx LIST OF ALGORITHMS Algorithm 5.1: NL-LS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 Algorithm 5.2: MF-LS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 xxi CHAPTER 1 INTRODUCTION Wireless technologies have advanced dramatically in the past few years allowing distributed co- operation between separate subsystems, leading towards enabling coherent distributed operations. 
Cooperation at the wavelength level of the wireless operation enables a multitude of subsystems to transmit and/or receive coherent signals, forming a coherent distributed array. Coherent distributed arrays, such as the one shown in Fig. 1.1, operate in essence as distributed phased arrays, where the signal power gain scales as the number of transmitters squared multiplied by the number of receivers. In contrast to single-platform phased arrays [16, 17], receiving coherent distributed arrays operate comparably to element-level digital phased arrays, where signals are collected by the analog-to-digital converter (ADC) at each element and then combined in processing [18, 19]. Coherent distributed arrays introduce a paradigm shift from the current focus on wireless systems that rely on platform-centric antennas [20, 21]. Improvements to platform-centric systems focus on enhancing the characteristics of the system rather than scaling it by combining multiple independent elements; this leads to bulky, costly, inefficient systems that are prone to single-element failure and are hard to conceal or relocate. Coherent distributed arrays are scalable in that the power gain can be adjusted by adding or removing elements from the array [22, 23]. They also prevent single-point-of-failure problems, since it is possible to remove or shut down one element without perturbing the operation of the array. In addition, selected nodes can be relocated to adapt to changes and to cover various areas and applications, and the nodes can be disaggregated into multiple sub-categories that perform different tasks when more than one operation is required. Applications of distributed beamforming include satellite-satellite and satellite-terrestrial communications [24], surveillance and information transfer using a drone swarm [25–27], and indoor and outdoor fifth generation (5G) communications [28, 29], among others.
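The ideal power scaling above can be verified with a short numerical sketch. This is my own illustration, not code from the dissertation; the tone frequency and sample count are arbitrary:

```python
import numpy as np

def coherent_power_gain(n_tx: int, n_rx: int, samples: int = 1000) -> float:
    """Power gain of an ideally synchronized distributed array relative to
    a single transmit-receive pair."""
    t = np.arange(samples) / samples
    tone = np.cos(2 * np.pi * 5 * t)   # unit-amplitude carrier, 5 cycles
    per_rx = n_tx * tone               # in-phase sum of all transmitters
    p_rx = np.mean(per_rx ** 2)        # power seen at one receiver
    p_ref = np.mean(tone ** 2)         # power for a single Tx-Rx pair
    return n_rx * p_rx / p_ref         # receiver outputs combined in processing

# Ideal scaling: gain = n_tx^2 * n_rx.
assert np.isclose(coherent_power_gain(2, 1), 4.0)
assert np.isclose(coherent_power_gain(3, 2), 18.0)
```

For two transmitters and one receiver the received power is four times that of a single pair, matching the amplitude doubling illustrated in Fig. 1.2.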
Figure 1.1: General architecture of coherent distributed arrays, where the nodes achieve synchronization using inter-node signals and external signals from the targeted location. Once all the nodes are synchronized, the transmitted signals add coherently at the target and provide an increased signal power.

Figure 1.2: Two ideal received signals along with their summation from two perfectly synchronized nodes.

1.1 Coordination of Principal Parameters

Accurate coordination of the disaggregated nodes is crucial to the operation of the distributed array. The principal electrical states that require synchronization are time [30–33], phase [2, 34–39], and frequency [9, 14, 34, 35, 40]. Fig. 1.2 shows the individual representation along with the summation of two ideal received signals from two perfectly synchronized nodes. The amplitude of the received signal doubles, which leads to a multiplicative gain in the received power.

Figure 1.3: Two ideal received signals along with their summation from two nodes performing phase and frequency synchronization without time alignment.

1.1.1 Time Alignment

Coherent distributed antenna arrays require appropriate time alignment so that the pulses or symbols overlap sufficiently in order to maximize the transmitted or received power. Generally, time alignment requires the least precision compared to phase alignment and frequency synchronization, since its requirements are imposed by the information or symbol rate rather than the transmitted frequency. Nevertheless, timing errors of no more than nanoseconds, or sometimes picoseconds, are required for wideband signals with hundreds of MHz of bandwidth [41]. Fig. 1.3 shows the individual representation along with the summation of two ideal received signals from two nodes with only phase and frequency synchronization.
Timing is significantly misaligned in this case, resulting in a distorted signal summation that can prevent the receiver from correctly estimating the received symbols. The timing on every node is generated using internal clocks that are controlled by one or multiple oscillators. Oscillators, if not well synchronized, tend to drift over time. Even for synchronized oscillators, time alignment is necessary to remove any constant phase shift that could arise from the startup of the system, the addition of a node, or motion-induced time shifts. A simplistic model describing the clock shift in a two-node system is given by [42]

C1(t) = a · C2(t) + b,    (1.1)

where C1 and C2 represent the clock times of nodes 1 and 2, a represents the relative time drift, and b is the relative offset between the two clocks. In the ideal scenario, a = 1 and b = 0. Time alignment is generally feasible using the Global Navigation Satellite System (GNSS) and the Network Time Protocol (NTP). Nevertheless, such systems cannot always offer the desired accuracy, especially for the applications of coherent distributed arrays. In addition, time alignment using GNSS signals is not always a viable option, since satellite signals are not detectable in many locations, such as in indoor applications. Various time alignment techniques have been developed to improve the synchronization accuracy and reduce the cost, hardware, and complexity of the previously mentioned techniques. Initially, time alignment algorithms focused on transmitting timestamps from one node to the other, such as the Timing-Sync Protocol for Sensor Networks (TPSN) [43], Tiny-Sync (TS), and Mini-Sync (MS) [44], among others. Timestamp-free approaches have recently been introduced to improve accuracy, simplicity, and flexibility.
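The clock model (1.1) is a straight-line relation between the two clock readings, so the drift a and offset b can be recovered with a least-squares fit. The sketch below is my own illustration with hypothetical, noiseless readings; a real system would fit noisy timestamp exchanges:

```python
import numpy as np

# Hypothetical relative drift and offset (illustrative values only).
a_true, b_true = 1.0000002, 3.5e-6     # drift (unitless), offset (s)

c2 = np.linspace(0.0, 10.0, 50)        # node 2 clock readings (s)
c1 = a_true * c2 + b_true              # node 1 clock per model (1.1)

# Least-squares fit of the linear clock model C1(t) = a * C2(t) + b.
a_est, b_est = np.polyfit(c2, c1, 1)

assert abs(a_est - a_true) < 1e-9
assert abs(b_est - b_true) < 1e-9
```

With noisy measurements the same fit still applies; the residual drift and offset after correction set the remaining timing error.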
Moreover, approaches that rely on timestamp exchange use the medium access control (MAC) layer to process the synchronization signals, which yields timing errors on the order of microseconds [42]. Moving the functionality to the physical layer (PHY) increases the synchronization speed and accuracy dramatically, which can be done for timestamp-free approaches such as in [45–47]. Standard deviations of less than 100 ps were observed in [41] while implementing a timestamp-free synchronization approach that uses linear frequency modulated (LFM) pulses with 1 µs pulse width and 40 MHz bandwidth on a 1.44 GHz carrier.

Figure 1.4: Two ideal received signals along with their summation from two nodes performing time and frequency synchronization without phase alignment.

1.1.2 Phase Alignment

Phase alignment is the most challenging task in a coherent distributed array compared to the synchronization of the other electrical states, since it is in general harder to achieve in open-loop systems and it requires errors more than one order of magnitude smaller than the transmitted wavelength. It was shown in [48] that no more than 18° root-mean-square error (RMSE) in the phase offset of the received signals is permitted in order to have a high probability of achieving 90% coherent gain, where the coherent gain represents the received power at the target over the maximum achievable power. The effect of signal summation with a phase offset is shown in Fig. 1.4. As can be seen, the phase offset between the two transmitted signals results in a signal summation with low amplitude/power. Phase alignment has been achieved using multiple architectures, the majority relying on feedback signals from the target [2, 36–39], while others rely on inter-node synchronization signals [14].
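The 18° figure cited from [48] can be sanity-checked with a quick Monte Carlo sketch of my own, assuming independent zero-mean Gaussian phase errors at each node:

```python
import numpy as np

def mean_coherent_gain(n_nodes: int, phase_rmse_deg: float,
                       trials: int = 50_000, seed: int = 0) -> float:
    """Average coherent gain (received power over the ideal power) for
    zero-mean Gaussian phase errors with the given RMSE."""
    rng = np.random.default_rng(seed)
    sigma = np.deg2rad(phase_rmse_deg)
    phases = rng.normal(0.0, sigma, size=(trials, n_nodes))
    # Sum the unit phasors of all nodes, normalize by the ideal |sum| = N.
    gains = np.abs(np.exp(1j * phases).sum(axis=1)) ** 2 / n_nodes ** 2
    return float(gains.mean())

# An 18 deg RMSE keeps the average gain near or above 0.9, consistent
# with the requirement cited from [48].
assert mean_coherent_gain(2, 18.0) > 0.9
assert mean_coherent_gain(10, 18.0) > 0.9
```

For large arrays the average gain approaches exp(-σ²) with σ the phase RMSE in radians, which is about 0.906 at 18°.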
In general, before implementing phase synchronization, the nodes need to be frequency locked/synchronized so that relative phase shifts stay constant throughout the calibration process. Furthermore, less complicated synchronization approaches suffice for static nodes; in a dynamic array, which is the general case, fast algorithms are needed so that the phases remain continuously aligned.

Figure 1.5: Two ideal received signals along with their summation from two nodes performing time and phase synchronization without frequency synchronization.

1.1.3 Frequency Locking

Frequency synchronization is essential for distributed systems since the nodes transmit signals stemming from local oscillators. Oscillators in general tend to operate at a slight offset from the nominal frequency, drift over time, and may have high phase noise that degrades the performance of a coherent distributed array. Fig. 1.5 depicts the effect of two local oscillators operating with a frequency mismatch. When the two transmitted signals with a frequency offset are combined, the amplitude of the combined signal changes sinusoidally: at some instants the amplitude equals the sum of both transmitted signal amplitudes, while at others it is close to zero. Thus, frequency synchronization is essential to ensure a coherent summation of the signals at the targeted location. Since the nodes are scattered and have no physical connection, frequency synchronization has to be implemented wirelessly. Wireless frequency synchronization is common in wireless systems; it is widely used in orthogonal frequency division multiple access (OFDMA) to ensure orthogonality and suppress multiple-access and inter-channel interference [49]. In coherent distributed arrays, frequency synchronization is necessary, as no time or phase alignment is possible unless all the nodes operate on the same frequency.
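The beating behavior of Fig. 1.5 is easy to reproduce numerically; the carrier, offset, and sample rate below are illustrative values of my own, not those used in the experiments:

```python
import numpy as np

fs = 100_000                            # sample rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)         # 1 s observation window
f0, df = 1_000.0, 5.0                   # carrier and a 5 Hz relative offset

s_sync = 2 * np.cos(2 * np.pi * f0 * t)                 # both nodes locked
s_drift = (np.cos(2 * np.pi * f0 * t)
           + np.cos(2 * np.pi * (f0 + df) * t))         # one node drifting

# Locked oscillators: constant envelope at twice the single-node amplitude.
assert np.isclose(s_sync.max(), 2.0)

# A frequency offset makes the sum beat at df: the envelope sweeps between
# ~2 (fully coherent) and ~0 (complete cancellation).
assert s_drift.max() > 1.99
assert np.abs(s_drift).min() < 0.01
```

The beat period 1/df (0.2 s here) is the timescale on which an unsynchronized pair drifts from full coherence to complete cancellation.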
A common frequency synchronization approach is to assign each node a time slot, which is used to compare the relative phase [50] or frequency [51] to a reference. In [50], the measured phases were fed to a Kalman filter to predict the phase and frequency shifts and correct them. In [52], frequency and phase were monitored based on the received power at the target, and the necessary frequency and phase adjustments were made to ensure that the power stayed above a certain threshold. Other forms of frequency synchronization where no time slots are required include the use of coupled oscillators [53]. Nevertheless, coupled oscillators underperform when the nodes are widely separated, limiting them to close-range operation. Another option is optically locked voltage-controlled oscillators [54], which also suffer at long synchronization ranges, where pointing, tracking, and acquisition require special hardware on each node. On the other hand, long-range frequency synchronization is achievable using GNSS signals such as the Global Positioning System (GPS) [55]. Yet, as mentioned earlier, GNSS signals are not always available, and even when available they tend to introduce a high phase noise that prevents the distributed array from operating coherently, especially at high frequencies. A centralized approach in which frequency synchronization is achieved using inter-node signals was realized in [56]. Secondary nodes, represented by software-defined radios (SDRs) or similar hardware, were equipped with phase-locked loops (PLLs) that can lock the frequency of the local oscillator to an incoming frequency reference signal. In this approach, the primary node had to transmit to the secondary nodes a continuous wave (CW) with a frequency equal to that of the desired reference frequency signal in order to lock their local oscillators.
This approach was impractical since the transmitted frequency was limited to only one option, which is almost always a very low frequency. The approach was improved in [57] by cascading two PLLs, where the first scales the received frequency to the desired reference frequency using multipliers and dividers. This improved the flexibility, but transmitting a single tone was still prone to high interference levels. In [9, 14, 40], the secondary nodes were synchronized using two-tone signals and an adjunct self-mixing circuit that extracts the reference frequency before feeding it to the PLL. In general, the discussed centralized approaches make it easier to align the phases of the distributed nodes for multiple beamforming architectures; however, they are prone to single-element failure and can be inefficient for widely spread nodes.

Figure 1.6: Receiver-coordinated beamforming using explicit feedback (adapted from [2]). The receiver allocates the channel sounding time slots along with the beamforming time slot. The nodes transmit an uncompensated carrier in their allocated sounding slots, and the target transmits feedback signals afterwards so the nodes can synchronize their phase and frequency. Once synchronized, the nodes beamform in the dedicated beamforming slot. From [3] © 2021 IEEE.

A decentralized wireless frequency synchronization approach was introduced in [58], where the nodes in an array synchronize by converging to a final average frequency among all the nodes. In this approach, not all nodes have to be connected; even with one or multiple element failures, it is always possible to synchronize the frequencies of the elements in the array as long as every node has at least one connection to the remaining nodes.
Yet, decentralized approaches may present some impediments for phase and time synchronization. For instance, the initial phase and frequency synchronization duration tends to be longer than that required by centralized systems, as many synchronization cycles are needed (compared to one cycle for many centralized architectures) to reach a consensus on the desired phase and frequency.

1.2 Architectures of Coherent Distributed Arrays

Coherent distributed arrays are divided into either closed-loop or open-loop architectures. This classification depends on the approach used to synchronize the nodes in the array: when feedback from an external system, typically the target, is used to synchronize the array, the architecture is referred to as closed loop. This category dominates the coherent distributed array literature, as it is relatively easy to implement.

Figure 1.7: Retrodirective beamforming, where time slots are assigned so that the nodes can synchronize their frequencies to that of the target. The channels between the nodes and the target are estimated, and a phase offset is applied so that the secondary nodes are phase aligned with the primary node. From [3] © 2021 IEEE.

Multiple closed-loop approaches using coherent feedback from the target have been developed, such as one-bit feedback [36], roundtrip synchronization [59], primary-secondary synchronization [56], two-way synchronization [60], and receiver-coordinated explicit feedback [2], to name a few. For instance, in [2] a time-slotted approach was implemented to synchronize the carrier phases of the transmitting nodes using feedback from the receiver, as illustrated in Fig. 1.6.
At first, the target establishes the channel sounding and beamforming time slots. Every node transmits its uncompensated carrier in its dedicated sounding time slot, and the receiver determines the phase offset at the carrier frequency. The receiver transmits back the estimated phases, and every node estimates its phase and frequency offsets using a Kalman filter. Afterwards, in the beamforming time slot, phase and frequency offsets are applied to the transmitters of every node before beamforming at the target/receiver. This approach was tested using a CW signal at 910 MHz over a 1 km separation between the coherent distributed array and the target/receiver. Furthermore, a closed-loop system can be sub-categorized as a retrodirective distributed beamformer. Retrodirective beamforming covers the techniques that rely on feedback from the target to synchronize the array, but where, unlike in [2, 36, 56, 59, 60], the received signal is not cooperative. By analyzing incoming signals from the target or external devices, the channel gains or frequency are adjusted.

Figure 1.8: Open-loop distributed phased array beamforming at an angle θ0 in the far field. Time, phase, and frequency are synchronized using inter-node signaling in a centralized architecture. Localizing the primary and/or the secondary nodes is important for aligning the phases.

As an example, a retrodirective array would synchronize its frequency by locking the local oscillators to a strong signal incoming from an external device. The Pon array [61] is a form of retrodirective coherent distributed array where reciprocity is used to mitigate the propagation phase delays and frequency offsets.
In [37], stationary nodes were synchronized using a hierarchical model: at first, the primary node had a dedicated time slot; afterwards, the secondary nodes were each assigned a time slot, with the target holding the final time slot before beamforming could take place, as shown in Fig. 1.7. Initially, the secondary nodes synchronize their frequency to that of the primary node and then estimate their channel parameters. Afterwards, all the nodes calibrate their phases using the non-cooperative broadcast signal from the target. Once all the nodes are calibrated, beamforming is possible. It was possible to achieve a beamforming gain higher than 90% of the ideal beamforming gain after a 10 s synchronization period. Other examples of retrodirective coherent distributed arrays are available in [62, 63]. Closed-loop architectures, whether feedback-enabled or retrodirective, are limited to communications applications where an external system, usually the target/receiver, has to assist the nodes in synchronizing their local electrical states. Furthermore, closed-loop architectures cannot coherently beamform in arbitrary directions, since they require feedback from the desired transmission direction. Contrary to closed-loop approaches, open-loop coherent distributed arrays [14, 48] have no dependency on the target or any external systems. This expands the application space of open-loop topologies to include radar and remote sensing applications, along with communications applications where the receiver does not have the means to transmit feedback signals. Also, open-loop coherent distributed arrays can beamform in arbitrary directions since they have no dependency on the location of the target. As open-loop coherent distributed arrays synchronize their electrical states within the array, accurate node localization is crucial, since phase alignment relies on the relative positions of the nodes in the array.
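How accurate that localization must be can be estimated with a rough back-of-the-envelope relation of my own, which ignores the geometric and synchronization factors the dissertation analyzes in detail: a one-way ranging error σd maps to a beamforming phase error through σφ = 2πσd/λ.

```python
import math

def max_ranging_std(f_carrier_hz: float, phase_rmse_deg: float) -> float:
    """Largest one-way ranging error std (m) that keeps the phase error
    within phase_rmse_deg, using the simplified relation
    sigma_phi = 2*pi*sigma_d / lambda."""
    c = 299_792_458.0                   # speed of light (m/s)
    wavelength = c / f_carrier_hz
    return math.radians(phase_rmse_deg) / (2 * math.pi) * wavelength

# An 18 deg phase budget at a 1.5 GHz beamforming carrier corresponds to
# roughly 1 cm of ranging accuracy under this simplification.
assert 0.009 < max_ranging_std(1.5e9, 18.0) < 0.011
```

Under this simplification, tighter phase budgets or higher carrier frequencies shrink the tolerable ranging error proportionally.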
Coherent distributed arrays in general, and open-loop coherent distributed arrays specifically, can be grouped as either centralized or decentralized systems. Centralized open-loop systems, such as the one shown in Fig. 1.8, are easier to synchronize, as all the electrical states can be synchronized to a reference node, usually referred to as the primary or master node [14, 64]. Nevertheless, such architectures can suffer from single-point failures unless there is a mechanism that allows a secondary node to be reassigned as the primary node. Decentralized architectures are an alternative to centralized systems, where the operations in the array are divided equally among all the nodes. On the other hand, phase synchronization in decentralized open-loop coherent distributed arrays is challenging and has not yet been achieved. Nevertheless, synchronization of other electrical states, such as frequency locking, has been demonstrated in a decentralized approach [58, 65], which opens the door for future decentralized open-loop coherent distributed arrays.

1.3 Main Contribution of this Dissertation

This dissertation focuses on analyzing and implementing centralized open-loop coherent distributed antenna arrays. First, I analyze the localization requirements for phase synchronization in Chapter 2. As mentioned earlier, phase synchronization in open-loop approaches is mainly achieved by localizing the nodes in the array. In a centralized topology, the nodes synchronize their phases relative to that of the primary node. Phase alignment is possible by tracking the relative movement of the nodes; the secondary nodes should at least localize the primary node to track the phase shift induced by relative motion or other factors. Based on the desired frequency of operation, the ranging accuracy must meet certain thresholds to achieve a high coherent gain.
Chapter 3 focuses on the frequency synchronization intervals that allow coherent operation of the array. The analysis is focused on frequency synchronization for nodes equipped with PLLs. Chapter 4 describes the centralized frequency synchronization approach that was implemented using adjunct self-mixing circuits and PLLs internal to the SDRs representing the nodes in the array. The phase shift produced by the self-mixing circuit in a dynamic array is analyzed. Accurate range estimation using two-tone waveforms is analyzed and demonstrated in Chapter 5. An adaptive framework that minimizes the spectral footprint (bandwidth) while maintaining the desired ranging accuracy to support beamforming operations is shown. Proof-of-concept open-loop beamforming experiments are shown in Chapter 6 along with the achieved coherent gain. Chapter 7 discusses a localization technique that enables two-dimensional tracking of the primary node and relates the obtained localization accuracies to the achievable coherent gain. The principal contributions of this work include the following:
• Synchronization of open-loop coherent distributed arrays is challenging; moreover, the synchronization performance has a direct effect on the coherent gain of the system. In this dissertation, I thoroughly analyze the ranging RMSE required to achieve certain thresholds of coherent gain for the case where wireless frequency synchronization induces phase shifts in a dynamic array. Furthermore, I extend the analysis to localization, where the angle of arrival needs to be estimated. In addition to the analysis on ranging and localization, I analyze the required frequency and phase stability for multiple generic voltage controlled oscillators (VCOs), and I present a generalized analysis that helps identify the required frequency synchronization rate for a multitude of scenarios.
• I present an adaptive ranging approach that supports distributed beamforming over a long range.
The adaptive ranging waveform is based on a spectrally sparse two-tone waveform with a variable bandwidth. A proportional–integral (PI) controller, a special case of the proportional–integral–derivative (PID) controller [66, 67], is employed to adaptively modify the ranging waveform. Cooperative ranging was performed between two nodes over a 90 m separation in an outdoor setup. The two cooperative nodes were wirelessly frequency locked, and a ranging standard deviation of 1 cm was achieved and maintained over a one-week period. This adaptive ranging approach minimized the spectral footprint: the bandwidth of the ranging waveform was minimized while achieving the desired ranging accuracy. The maximum tone separation of the two-tone ranging signals was set to only 7.5 MHz.
• The first fully wireless open-loop coherent distributed array proof-of-concept is demonstrated experimentally in this dissertation. Coherent beamforming of CW signals was possible by synchronizing the phase and frequency of the distributed nodes in a centralized architecture. The secondary node(s) can synchronize their frequency using the presented adjunct self-mixing circuit. Using the presented ranging approach, the phase shifts produced from both moving the node transmitters and moving the wireless frequency synchronization antennas were tracked and corrected for. Beamforming at 1.5 GHz was possible with at least 0.9 coherent gain for most of the captured data.
• An accurate two-dimensional localization approach based on time of arrival (TOA) that could enable beamforming in the future was implemented. The achieved localization accuracies were analyzed in terms of the maximum achievable coherent frequency with high coherent gain.
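To illustrate the adaptive-bandwidth idea in the contributions above, the following sketch shows a minimal PI controller that widens or narrows the two-tone separation based on the measured ranging error. The gains, setpoint, and bandwidth limits are hypothetical illustrative values, not the parameters used in the experiments.

```python
# Minimal PI-controller sketch for adaptive two-tone bandwidth selection.
# All gains and limits are illustrative assumptions, not the dissertation's values.

def make_pi_controller(kp, ki, setpoint):
    """Return a PI update function that drives the measured ranging error
    toward `setpoint` by adjusting the tone separation (bandwidth)."""
    integral = 0.0

    def update(measured_error, bandwidth_hz, dt=1.0):
        nonlocal integral
        err = measured_error - setpoint        # positive -> need more bandwidth
        integral += err * dt
        delta = kp * err + ki * integral       # PI control law
        # Wider tone separation improves ranging accuracy, so the bandwidth is
        # increased when the error exceeds the setpoint, capped at 7.5 MHz.
        return min(max(bandwidth_hz + delta, 0.1e6), 7.5e6)

    return update

ctrl = make_pi_controller(kp=2e6, ki=0.5e6, setpoint=0.01)  # 1 cm target
bw = 1.0e6
for measured in [0.05, 0.03, 0.015, 0.01]:  # simulated ranging std (m)
    bw = ctrl(measured, bw)
```

In a real system the measured ranging standard deviation would come from the live range estimates, and the controller output would set the tone separation of the next ranging waveform.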
CHAPTER 2
LOCALIZATION REQUIREMENTS FOR OPEN-LOOP COHERENT DISTRIBUTED ARRAYS
In this chapter I discuss the coherent gain, a common metric for evaluating the performance of coherent distributed arrays, and I analyze the localization accuracies needed to achieve high levels of power gain. First, the requirements for ranging accuracy are analyzed separately from angle estimation in centralized open-loop topologies, which would be the case if the angles of the secondary nodes relative to the primary node are given. Afterwards, an analysis of the requirements for localization is presented.
2.1 Coherent Gain
The performance of a distributed beamforming operation can be characterized by evaluating the signal-to-noise ratio (SNR) or bit error rate (BER) of the received signal; other application-specific metrics can be used for evaluation as well. Nevertheless, evaluating the power gain in the main beam of the beamformed signals where phase, frequency, and timing errors are present, in comparison to the ideal case where no degradation in the signal summation is observed, is a straightforward and general approach that can be used to evaluate the overall performance of the distributed antenna array. In particular, I evaluate the coherent gain Gc with the definition from [48] using

G_c(t) = \frac{|s_r(t) s_r(t)^*|}{|s_i(t) s_i(t)^*|},   (2.1)

where s_r(t) = \sum_{n=0}^{N-1} s_{r,n}(t) is the summation of the N received signals with instantaneous phase and timing errors, and s_i(t) = \sum_{n=0}^{N-1} s_{i,n}(t) represents the ideal summation of the signals where no phase or timing errors are present. Although Gc(t) is time dependent, in the remainder of this dissertation it is represented by Gc. The coherent gain is bounded by 0 ≤ Gc ≤ 1, where the lower limit represents the case where a totally destructive addition of the signals is observed, and the upper limit 1 represents a transmission with perfect phase and time alignment.
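To make the definition in (2.1) concrete, the sketch below evaluates Gc for N unit-amplitude phasors that differ only by phase errors; the phase-error level and node count are arbitrary illustrative choices, not values from the experiments.

```python
# Sketch: coherent gain Gc = |s_r|^2 / |s_i|^2 for N equal-amplitude CW
# signals differing only by phase errors, per the definition in (2.1).
# The phase-error level below is an arbitrary illustrative value.
import cmath
import random

def coherent_gain(phase_errors_rad):
    """Gc for unit-amplitude signals differing only by phase errors."""
    s_r = sum(cmath.exp(1j * p) for p in phase_errors_rad)  # erroneous sum
    s_i = len(phase_errors_rad)                             # ideal aligned sum
    return abs(s_r) ** 2 / abs(s_i) ** 2

rng = random.Random(0)
gc_perfect = coherent_gain([0.0] * 10)                      # no errors
gc_noisy = coherent_gain([rng.gauss(0, 0.3) for _ in range(10)])
```

With zero phase errors the phasors add perfectly and Gc = 1; any nonzero errors make the sum strictly smaller than the ideal one, so Gc < 1.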
A general representation of s_{i,n}(t) is as follows:

s_{i,n}(t) = h_n(t) A_n(t) e^{j 2\pi f_c t},   (2.2)

where h_n(t) is the complex-valued coefficient for the nth propagation channel, A_n(t) is the amplitude and modulation of the nth signal, and f_c is the carrier frequency used by the transmitting nodes. s_{r,n}(t) is expressed similarly to s_{i,n}(t) with additional phase, frequency, and timing errors. The general expression of s_{r,n}(t) is

s_{r,n}(t) = h_n(t) A_n(t - \tau_n(t)) e^{j 2\pi [f_c + \delta f_n(t)] t + j[\delta\varphi_n(t) + \varphi_{s,n}(t) - \varphi_{c,n}(t)]},   (2.3)

where τ_n(t) represents the time delay error, which can have both a constant shift produced by initial time offsets in the system or the calibration process, and a variable shift produced by the time drift of internal clocks. In the majority of cases, the time delay errors are not considered (set to 0), since this dissertation focuses on phase and frequency synchronization rather than time alignment. In practice, the timing errors are suppressed by transmitting CW signals with no underlying modulation to the target. δf_n(t) represents the frequency offset; δϕ_n(t) combines the instantaneous frequency and phase errors resulting from multiple software- and hardware-related factors; ϕ_{s,n}(t) is the phase shift produced by the motion of the nodes, constant synchronization errors, and multipath, among other factors; and ϕ_{c,n}(t) represents the correction applied to the phase shift, which for open-loop topologies can be obtained through initial calibration and node localization so that a continuous correction for the observed phase shifts can be applied. For closed-loop topologies, ϕ_{c,n}(t) is generated using the feedback from the target. Other sources of error can be added to (2.3) depending on the desired application. The majority of the parameters introduced here and in what follows are time dependent; however, the time variable t is omitted from these parameters from now on for the sake of simplicity.
2.2 Ranging Requirements for Centralized Open-Loop Coherent Distributed Arrays
As seen from (2.3), there are multiple compounded errors that contribute to the degradation of the coherent gain. In the case where time and frequency synchronization are perfectly achieved, hardware-induced errors are not considered, and weather and external errors such as multipath are suppressed, one can evaluate the impact of phase alignment accuracy on the coherent gain by considering the phase changes ϕs,n(t) in a dynamic array and the applied phase correction ϕc,n(t), which includes errors. This dissertation focuses on centralized open-loop coherent distributed arrays; in such topologies, phase alignment can be achieved by localizing the nodes in the array and then applying the phase correction ϕc,n(t) that cancels the observed phase shift ϕs,n(t).
Localization is achieved in a centralized manner, where the secondary nodes need to localize themselves relative to the location of the primary node. This way, it is possible to correct for the anticipated relative phase shift between the primary node and a secondary node n. Localization of the primary node can be achieved using ranging [68–72] combined with angle of arrival (AOA) [73–76], or using multilateration approaches such as TOA [77–79], time difference of arrival (TDOA) [80–84], and frequency difference of arrival (FDOA) [85–87]. As a first step, one can evaluate the effect of ranging accuracy on the coherent gain by assuming that the AOA is given. In practice, this corresponds to an array with nodes distributed in one dimension.
2.2.1 Ranging Requirements with Wired Frequency Synchronization
To evaluate the impact of errors in a direct manner, frequency synchronization can be achieved using wired connections, as in Fig. 2.1. This way, the requirements for ranging accuracy are relaxed in comparison to wireless frequency synchronization, as will be shown in Section 2.2.2.
Wired frequency synchronization can be achieved by connecting a signal with the desired reference frequency to the PLL of each node in the array. The reference signal can be generated from the oscillators of the primary node, and it can be either a sinusoidal wave or a square wave with typical frequencies of 10 MHz or 100 MHz.
In the case where no wireless phase locking/frequency synchronization is implemented, the main source of relative phase offset between the transmitted coherent signals from every primary/secondary pair is the relative displacement of the transmitters on these two nodes.
Figure 2.1: Two nodes with a hierarchical architecture beamforming at a target in the far field. Frequency synchronization is achieved using a cabled connection, where the primary node transmits a reference frequency signal to lock the PLL of the secondary node. Beamforming is done by accounting for the phase shift produced by the relative motion between the two transmitters.
In a hierarchical topology, the phase synchronization can be achieved in a pairwise manner, where every secondary node estimates its relative phase shift due to any relative motion between the primary/secondary pair. The motion-induced phase shift observed at the target from the transmitted signals of the secondary node n is obtained from

\Delta\varphi_{s,1,n} = -\frac{2\pi \Delta d_{1,n} \sin(\theta_0)}{\lambda_c},   (2.4)

where d_{1,n} is the distance separating the coherent transmitter of secondary node n from the coherent transmitter of the primary node, Δd_{1,n} is the relative displacement of the coherent transmitters, θ_0 is the beamsteering angle at which beamforming is achieved in the far field, and λ_c is the wavelength of the coherent signal. The phase shift Δϕ_{c,1,n}, which represents the estimated version of the actual phase shift Δϕ_{s,1,n}, is used to adjust the transmitted phase.
In general, the total estimated phase shift is represented by ϕc,n; ϕc,n is a combination of all estimated phase shifts due to the motion of the secondary node n along with the initial phase shift at the calibration stage. The desired output phase from node n is obtained by subtracting the phase ϕc,n from the transmitted signals. For a given relative angle between every primary/secondary pair, and a desired θ0, it is possible to evaluate the ranging accuracy required to ensure a certain level of coherent gain. The probability of achieving a certain level of coherent gain, P(Gc ≥ X), is evaluated in terms of ranging accuracy, where 0 ≤ X ≤ 1 is the fraction of the ideal coherent signal power. The probability P(Gc ≥ X) can be estimated using Monte Carlo simulations, where Δd1,n and the angle of the primary node relative to the secondary node are uniformly distributed over a wide distance and all possible angles. Ten thousand Monte Carlo trials were generated where hn, An(t − τn), δfn, and δϕn were set to 1, 1, 0, and 0, respectively. By doing so, it is possible to evaluate the effect of ranging accuracy while suppressing all the other error factors. The ranging accuracy is evaluated in terms of root-mean-square error (RMSE) and is expressed in terms of λc. Expressing the ranging RMSE in terms of λc makes it possible to evaluate P(Gc ≥ X) for any frequency by simply multiplying the RMSE value by the wavelength of interest. In this analysis, X was set to 0.9, which represents only 0.5 dB degradation in the maximum achievable power at the targeted location, making it a reasonable threshold to target. Fig. 2.2 shows the Monte Carlo simulation results for 2, 3, 10, and 1,000 nodes in a coherent distributed array.
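The Monte Carlo procedure described above can be sketched as follows. This is a simplified illustration of the method, not the dissertation's simulation code: the trial count, the Gaussian ranging-error model, and the uniform angle distribution are assumptions for the sketch.

```python
# Sketch of the Monte Carlo estimate of P(Gc >= 0.9) versus ranging RMSE
# sigma_d (in wavelengths) for N nodes, using the motion-induced phase error
# of (2.4). Trial counts and distributions are illustrative assumptions.
import cmath
import math
import random

def prob_gc_exceeds(n_nodes, sigma_d_wl, threshold=0.9, trials=2000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = 1.0 + 0j  # primary node transmits with zero phase error
        for _ in range(n_nodes - 1):
            theta0 = rng.uniform(0, 2 * math.pi)  # beamsteering angle
            range_err = rng.gauss(0, sigma_d_wl)  # ranging error, in lambda
            # Only the ranging error (not the true displacement) survives the
            # phase correction, leaving a residual phase per (2.4).
            s += cmath.exp(-1j * 2 * math.pi * range_err * math.sin(theta0))
        gc = abs(s) ** 2 / n_nodes ** 2
        hits += gc >= threshold
    return hits / trials

p_tight = prob_gc_exceeds(10, 0.01)  # sigma_d = lambda/100
p_loose = prob_gc_exceeds(10, 0.2)   # sigma_d = lambda/5
```

As in Fig. 2.2, tightening the ranging RMSE drives the probability of coherent operation toward one, while loose ranging makes the summation mostly incoherent.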
As can be seen, by increasing the number of nodes in the array, the separation between coherent and incoherent transmission becomes clearer, and if a probability of 90% of achieving at least 0.9 coherent gain is considered (P(Gc ≥ 0.9) = 90%), the ranging RMSE σd has to be at most 0.0667λ (λc/15) for large arrays.
Figure 2.2: Probability of the coherent gain exceeding 0.9 versus ranging RMSE σd in terms of the signal wavelength λ, considering the effects of phase changes where no wireless frequency synchronization is considered.
As seen in Fig. 2.2, errors in range estimates are the most forgiving for two-node arrays. Analysis of the coherent operation of two nodes can be of interest for multistatic or bistatic systems. A further analysis of the required ranging accuracy for two-node arrays is shown in Fig. 2.3, where the ranging RMSE is evaluated for all transmission angles θ0 and multiple values of P(Gc ≥ 0.9). The same settings as earlier were used for the Monte Carlo simulations. As can be seen, and as can be deduced from (2.4), the angles 0° and 180° are the least affected by degradation in ranging accuracy, while the areas around the angles 90° and 270° have more stringent constraints on σd.
2.2.2 Ranging Requirements with Wireless Frequency Synchronization
So far, the analysis of the required ranging accuracy has focused on systems that do not require wireless frequency synchronization. In practice, coherent distributed arrays have no physical connections, and frequency synchronization is only achievable wirelessly. As will be seen in the following chapters, phase tracking is challenging for open-loop architectures, and it is necessary to implement a wireless frequency synchronization approach that enables the secondary nodes to track their relative phase shift with respect to that of the primary node, such as in [9, 14, 40].
In general, instantaneous frequency shifts produce constant phase shifts; this issue can be prevented by relying on phase locking approaches such as PLLs, where the phase shifts of the oscillators of a node are a result of the phase shifts observed at the input reference frequency signals.
Figure 2.3: Probability of the coherent gain exceeding 0.9 versus the transmission orientation in the case where frequency synchronization does not contribute to the phase errors. From [4] © 2020 IEEE.
As will be demonstrated later, the phase shift at every secondary node is related to the displacement of the secondary node relative to the primary node. For a secondary/primary pair such as in Fig. 2.4, the phase shift is expressed as

\Delta\varphi_{s,2,n} = -\frac{2\pi \Delta d_{2,n}}{\lambda_c},   (2.5)

where d_{2,n} is the distance separating the synchronization antennas of the secondary and primary nodes, and Δd_{2,n} is the relative displacement of the synchronization antennas. Δϕ_{s,1,n} and Δϕ_{s,2,n} are both additive phase shifts, and the total phase shift that needs to be accounted for in case of any relative motion is expressed as

\Delta\varphi_{s,n} = \Delta\varphi_{s,1,n} + \Delta\varphi_{s,2,n}.   (2.6)

The effect of (2.6) is analyzed in Fig. 2.5, where 10,000 Monte Carlo trials were evaluated for 2, 3, 10, and 1,000 nodes in a coherent distributed array. The same analysis and settings as in Section 2.2.1 were used. The same displacement distance was selected for Δd1,n and Δd2,n.
Figure 2.4: Two nodes with a hierarchical architecture beamforming at a target in the far field. Frequency synchronization is achieved wirelessly, where the secondary node locks its oscillator phase to that of the incoming synchronization signals from the primary node. Beamforming is done by accounting for the phase shift produced by the relative motion between the two transmitters and between the synchronization antennas. From [4] © 2020 IEEE.
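The combined phase shift of (2.6) can be evaluated numerically. The sketch below sums the transmitter term (2.4) and the synchronization-antenna term (2.5) for one secondary node and computes the two-node coherent gain that would result if the shift went uncorrected; the displacement, angle, and wavelength values are illustrative only.

```python
# Sketch: total motion-induced phase shift of (2.6), combining the
# transmitter term (2.4) and the synchronization-antenna term (2.5), and the
# resulting two-node coherent gain if the shift goes uncorrected.
# The displacement, angle, and wavelength values are illustrative only.
import cmath
import math

def total_phase_shift(delta_d1_m, delta_d2_m, theta0_rad, wavelength_m):
    dphi_1 = -2 * math.pi * delta_d1_m * math.sin(theta0_rad) / wavelength_m  # (2.4)
    dphi_2 = -2 * math.pi * delta_d2_m / wavelength_m                         # (2.5)
    return dphi_1 + dphi_2                                                    # (2.6)

wavelength = 0.2  # 1.5 GHz carrier -> lambda = c/f ~ 0.2 m
dphi = total_phase_shift(0.01, 0.01, math.radians(90), wavelength)
# Two-node coherent gain if the shift is left uncorrected:
gc = abs(1 + cmath.exp(1j * dphi)) ** 2 / 4
```

At θ0 = 90° the two terms add constructively, so a 1 cm displacement of each antenna pair already costs a noticeable fraction of the coherent gain.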
The effect of adding wireless frequency synchronization is clear in that the curves representing the probability of achieving at least 0.9 coherent gain are pushed to the left; this shows that more stringent ranging accuracies are needed in order to transmit coherently. As can be seen, for the case of 1,000 nodes (the most stringent constraints), a 90% probability of achieving at least 0.9 coherent gain is reached when σd is at most 0.0385λ (λc/27).
An analysis of the required ranging accuracy for beamforming at a specific angle θ0 is studied next. This analysis is similar to that of Section 2.2.1 and Fig. 2.3; however, the extra phase shift produced by moving the synchronization antennas is evaluated in this section. The results are shown in Fig. 2.6.
Figure 2.5: Probability of the coherent gain exceeding 0.9 versus ranging RMSE σd in terms of the signal wavelength λ, considering the effects of phase changes where wireless frequency synchronization is present (wireless frequency synchronization using phase locking approaches). From [4] © 2020 IEEE.
It can be seen that, in contrast to the case where no wireless frequency synchronization is considered, the area around 90° requires more stringent ranging accuracies. This is caused by the added phase shift Δϕs,2,n. For angles around 90°, the value of the sine function in (2.4) is close to +1, so the two terms in (2.6) add constructively. Thus, any change in relative distance has a larger effect on the phase shift of the transmitted signals once (2.5) is considered. On the other hand, in the region around 270°, the value of the sine function in (2.4) is close to −1. This negates (fully at θ0 = 270°) the phase shift observed in (2.5), making any phase shift due to small displacement distances negligible. A similar analysis showing the required ranging accuracy versus both the transmission angle and multiple P(Gc ≥ X) options is shown in Fig. 2.7.
The curves representing a 90% probability of achieving at least 0.6, 0.7, 0.8, and 0.9 coherent gain are shown. The results are consistent with what is observed in Fig. 2.6. By knowing the transmission angle and the achievable ranging accuracy, one can estimate the achievable coherent gain. Nevertheless, these analyses consider perfect frequency and time alignment, and exact knowledge of the orientation of every node, which is not the case in practice. They nonetheless establish the maximum achievable coherent gain, and the other error sources can be added as compounded errors in the future.
Figure 2.6: Probability of the coherent gain exceeding 0.9 versus the transmission orientation in the case where wireless frequency synchronization contributes to the phase errors. From [4] © 2020 IEEE.
Now that we have seen that the phase shift produced by the motion of the wireless frequency synchronization antennas has both an additive and a subtractive effect on the phase shift produced by the motion of the coherent transmitters, one can expect an inverted effect if the primary node locks its frequency to that of the secondary node (usually the secondary nodes lock their frequency to that of the primary node). This effect is depicted in Fig. 2.8, where it can be seen that a mirrored version of the result shown in Fig. 2.6 is obtained. Thus, the wireless frequency synchronization in a two-node system can be performed adaptively, where the wireless frequency circuit can be activated either on the first or on the second node depending on the desired transmission orientation, to maximize the beamforming performance and minimize the ranging requirements.
Figure 2.7: Fifty thousand Monte Carlo simulations showing various threshold values for achieving coherent gain. The probabilities were estimated as a function of angle and localization standard deviation. The strictest requirements for localization uncertainty appear around 90°.
2.3 Localization Requirements for Centralized Open-Loop Coherent Distributed Arrays
So far, it has been assumed that the orientation of the primary node relative to each secondary node n is perfectly known. In practice, estimating the AOA accurately can be much more challenging than obtaining accurate range estimates. Furthermore, the errors in angle estimation have a multiplicative effect for distant nodes, as will be shown later in this section.
2.3.1 Localization using Range and Angle of Arrival
To evaluate the effect of angle estimation, the setup in Fig. 2.4 is updated to the one in Fig. 2.9, where the angles between the primary node and the secondary nodes are clearly shown.
Figure 2.8: Probability of the coherent gain exceeding 0.9 versus the transmission orientation in the case where wireless frequency synchronization contributes to the phase errors. These results consider the case where the primary node locks its frequency to that of the secondary node in a two-node array. From [4] © 2020 IEEE.
Also, (2.4) is updated to reflect this change as follows:

\Delta\varphi_{s,3,n} = -\frac{2\pi \Delta d_{1,n} \sin(\theta_0 + \theta_n)}{\lambda_c}.   (2.7)

It is important to mention that the angles θn were already present in the earlier sections; however, their value was embedded in the value of θ0. For this new problem formulation, the RMSE of the angle estimates is represented by σθ.
To easily visualize the effect of angle errors on localization in general, it is possible to perform a variable transformation from the RMSE of range and angle to the RMSE of the x and y coordinates. The x and y coordinates of the primary node relative to the coordinates of a secondary node n are obtained from

x_n = d_n \cos(\theta_n),   (2.8)
y_n = d_n \sin(\theta_n),   (2.9)

where d_n is the distance between the center of the primary node and the center of a secondary node n.
Transformation of the RMSE is done by assuming that the estimated values of dn and θn are close to their means, such that a function f(dn, θn) can be approximated using a first-order Taylor series expansion in the random variables dn and θn [88].
Figure 2.9: Hierarchical open-loop coherent distributed array beamforming at an angle θ0. The angles θ1 and θ2 represent the angles of the primary node relative to the planes of the secondary nodes. From [5] © 2021 IEEE.
It is then possible to express the deviation of each sample fi(dn, θn) of a random variable f(dn, θn) in terms of dn,i and θn,i:

f_i(d_n, \theta_n) - \bar{f}(d_n, \theta_n) \approx \frac{\partial f(d_n, \theta_n)}{\partial d_n}\left(d_{n,i} - \bar{d}_n\right) + \frac{\partial f(d_n, \theta_n)}{\partial \theta_n}\left(\theta_{n,i} - \bar{\theta}_n\right),   (2.10)

where the bar denotes the mean of a variable. Variance transformation (and consequently RMSE transformation) is done using

\sigma^2_{f(d_n,\theta_n)} \approx \left(\frac{\partial f(d_n, \theta_n)}{\partial d_n}\right)^2 \sigma^2_{d_n} + \left(\frac{\partial f(d_n, \theta_n)}{\partial \theta_n}\right)^2 \sigma^2_{\theta_n} + 2\sigma^2_{d_n\theta_n}\left(\frac{\partial f(d_n, \theta_n)}{\partial d_n}\right)\left(\frac{\partial f(d_n, \theta_n)}{\partial \theta_n}\right),   (2.11)

where σ²_{dnθn} is the covariance between the variables dn and θn. In our case, we consider dn and θn to be uncorrelated, making σ²_{dnθn} = 0. Similarly, the covariance between two functions f1(dn, θn) and f2(dn, θn) can be calculated using [89]

\sigma^2_{f_1(d_n,\theta_n)f_2(d_n,\theta_n)} \approx \sigma^2_{d_n}\left(\frac{\partial f_1(d_n, \theta_n)}{\partial d_n}\right)\left(\frac{\partial f_2(d_n, \theta_n)}{\partial d_n}\right) + \sigma^2_{\theta_n}\left(\frac{\partial f_1(d_n, \theta_n)}{\partial \theta_n}\right)\left(\frac{\partial f_2(d_n, \theta_n)}{\partial \theta_n}\right).   (2.12)

Figure 2.10: Radial error \sqrt{\sigma^2_{x_n} + \sigma^2_{y_n}} in terms of λ for maximum and minimum coordinates of ±10 m for x and y in the case of σdn = 0.01λ and σθn = 1°/m. From [5] © 2021 IEEE.
Using (2.11), the variances σ²_{xn} and σ²_{yn} of xn and yn are obtained from

\sigma^2_{x_n} = \sigma^2_{d_n}\cos^2(\theta_n) + \sigma^2_{\theta_n} d_n^2 \sin^2(\theta_n),   (2.13)
\sigma^2_{y_n} = \sigma^2_{d_n}\sin^2(\theta_n) + \sigma^2_{\theta_n} d_n^2 \cos^2(\theta_n).   (2.14)

Also, the covariance between xn and yn can be calculated from

\sigma^2_{x_n y_n} = \left(\sigma^2_{d_n} - \sigma^2_{\theta_n} d_n^2\right)\sin(\theta_n)\cos(\theta_n).   (2.15)
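The propagation in (2.13)–(2.14) can be cross-checked against a direct Monte Carlo of (2.8)–(2.9), as sketched below; the baseline distance, bearing, and error levels are illustrative values only.

```python
# Sketch: propagate range/angle RMSE into x/y variances via (2.13)-(2.14)
# and cross-check with a direct Monte Carlo of (2.8)-(2.9). Distances,
# angles, and error levels are illustrative values only.
import math
import random

def xy_variances(d_n, theta_n, sigma_d, sigma_theta):
    """(var_x, var_y) from the first-order propagation of (2.13)-(2.14)."""
    var_x = (sigma_d * math.cos(theta_n)) ** 2 + (sigma_theta * d_n * math.sin(theta_n)) ** 2
    var_y = (sigma_d * math.sin(theta_n)) ** 2 + (sigma_theta * d_n * math.cos(theta_n)) ** 2
    return var_x, var_y

d, th = 50.0, math.radians(30.0)   # 50 m baseline, 30 deg bearing
sd, st = 0.002, math.radians(0.1)  # 2 mm range RMSE, 0.1 deg angle RMSE

var_x, var_y = xy_variances(d, th, sd, st)
radial_rmse = math.sqrt(var_x + var_y)  # grows with d through the angle terms

# Monte Carlo cross-check of the analytic propagation:
rng = random.Random(2)
xs = []
for _ in range(20000):
    di = d + rng.gauss(0, sd)
    ti = th + rng.gauss(0, st)
    xs.append(di * math.cos(ti))   # (2.8) with noisy estimates
mean_x = sum(xs) / len(xs)
mc_var_x = sum((x - mean_x) ** 2 for x in xs) / len(xs)
```

Even with a millimeter-level range RMSE, the d_n-scaled angle terms dominate the radial error at a 50 m baseline, which is the multiplicative effect discussed above.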
Figure 2.11: Probability of the coherent gain exceeding 0.9 versus the maximum allowed x and y coordinates, where σdn = 0.01λ and σθn = 0.1°/m. From [5] © 2021 IEEE.
The effect of angle RMSE is shown in Fig. 2.10, where it can be seen that at larger distances the accuracy of the x and y estimates worsens dramatically. This effect is due to the RMSE of the angle estimates, as the error becomes multiplicative when the nodes are far apart. In Fig. 2.10, the RMSE of the range and angle estimates is expressed in terms of the wavelength of the transmitted coherent signals; this is done to help the reader link the earlier findings to the figures in this section.
As can be deduced from Fig. 2.10, the required RMSE for range and angle estimation depends on the spread area of the nodes; angle errors translate to larger positional errors at longer distances, while the effect of the node spread on the required ranging accuracy is minimal in comparison to angle estimation, as will be shown later in this section. The maximum achievable coherent gain for N nodes, where N − 1 secondary nodes localize the primary node to adjust their relative phases, is illustrated in Fig. 2.11, where σdn = σd1,n = σd2,n = 0.01λ and σθn = 0.1°/m. A minimum separation of 0.5 m between the primary node and any secondary node in x and y was imposed, and a maximum area of 100 × 100 m² for the spread of the nodes was investigated. 10,000 Monte Carlo trials were generated where Δd1,n = Δd2,n, θn, and θ0 were selected as random variables with uniform distributions over the entire area.
Figure 2.12: The required σθ to maintain a 0.9 coherent gain with 90% probability for 10 nodes with multiple values of σd versus the maximum allowed coordinates for x and y. From [5] © 2021 IEEE.
Fig. 2.12 expands on Fig. 2.11 by showing the required angle estimation RMSE values for multiple RMSE values of the range estimates in a 10-node array.
The metric P(Gc ≥ 0.9) = 90% is chosen. An area up to a maximum of 100 × 100 m² was considered. It can be seen that with σdn ≤ 0.01λ, the requirements for σθn are similar, since the bottleneck in this case is σθn. On the other hand, for σdn ≥ 0.045λ it was not possible to maintain a coherent gain of 0.9 with a 90% probability for any value of σθn. This shows that angle estimation accuracy is generally the limiting factor for the beamforming performance, especially when the nodes are scattered over a large area. For very large areas, the required RMSE values of σθn are so small that they cannot be achieved in practice. Thus, if the nodes are sparse, a localization approach that does not rely on angle estimation is needed.
2.3.2 Localization using Multilateration
Multilateration is a localization technique based on comparing the estimated distances or times of arrival of localization pulses at multiple receivers (or receivers and transmitters) to localize the target. TOA and TDOA are examples of multilateration, where the time of arrival or time difference of arrival of certain signals is used to localize the target. The advantage of such systems over the systems described in Section 2.3.1 is that extremely accurate angle estimates, which are hard to achieve in practice, are not required. Nevertheless, the localization accuracy is linked to the time delay estimation accuracy and to the spread area of the transmitters/receivers, i.e., the geometry of the array. Distributed arrays in which the secondary nodes localize the primary node using TOA and TDOA approaches are shown in Fig. 2.13. Generally, for TOA/TDOA systems, the receivers (secondary nodes in our case), which are usually referred to as anchors, should have known locations. The transmitter (primary node in our case) is localized by processing its transmitted localization signals and comparing the times of arrival at the secondary nodes.
The same transmitted signal can be used for frequency synchronization, time alignment, and localization (and subsequently phase alignment), which offers a great advantage. Nevertheless, in practice, the locations of the secondary nodes are unknown as well. In Chapter 7, a modified version of TDOA/TOA is shown where the locations of the secondary nodes do not need to be known.
The general time delay equation for target localization using TOA is [80]

t_i = \frac{\left[(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2\right]^{1/2}}{c},   (2.16)

Figure 2.13: Hierarchical open-loop coherent distributed arrays that rely on multilateration using (a) TOA and (b) TDOA measurements, with circles (in (a)) and hyperbolas (representing the time difference of arrival between every pair of nodes in (b)), respectively, to estimate the location of the primary node and synchronize the phase of the secondary nodes.
and the equation for the time difference in TDOA is

\Delta t_{ij} = \frac{\left[(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2\right]^{1/2}}{c} - \frac{\left[(x - x_j)^2 + (y - y_j)^2 + (z - z_j)^2\right]^{1/2}}{c},   (2.17)

i = \{1, ..., N\}, \quad j = \{2, ..., N\} \wedge j > i.

The 2D Cramér-Rao lower bound (CRLB) for TOA (which is similar to that of TDOA if all the combinations of time difference of arrival are evaluated), which represents the minimum achievable localization variance (and subsequently RMSE) for unbiased estimators, is given in [82, 83]. The CRLB is dependent on the time delay estimation accuracy and the locations of the receivers.
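As a rough illustration of how such a bound depends on anchor geometry, the sketch below evaluates the standard 2D TOA CRLB under the simplifying assumption of equal, independent range (time-delay) errors at every anchor: the Fisher information is the scaled sum of outer products of the anchor-to-target unit vectors, and the radial-error bound is the square root of the trace of its inverse. The anchor layout and error level are illustrative, not the configurations of Fig. 2.14.

```python
# Sketch: 2D CRLB for TOA localization with equal range (time-delay) error
# sigma_d at every anchor. Fisher information = (1/sigma_d^2) * sum of outer
# products of the anchor-to-target unit vectors; the radial-error bound is
# sqrt(trace(FIM^-1)). Anchor geometry below is an illustrative assumption.
import math

def toa_crlb_radial(anchors, target, sigma_d):
    """Lower bound on the radial RMSE sqrt(var_x + var_y) for an unbiased
    TOA estimator, from the 2x2 Fisher information matrix."""
    a = b = c2 = 0.0  # unscaled FIM entries: [[a, c2], [c2, b]]
    tx, ty = target
    for (ax, ay) in anchors:
        d = math.hypot(tx - ax, ty - ay)
        ux, uy = (tx - ax) / d, (ty - ay) / d  # unit vector anchor -> target
        a += ux * ux
        b += uy * uy
        c2 += ux * uy
    det = a * b - c2 * c2
    trace_inv = (a + b) / det  # trace of the inverse of the 2x2 matrix
    return sigma_d * math.sqrt(trace_inv)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # three anchors (m)
bound = toa_crlb_radial(anchors, target=(3.0, 4.0), sigma_d=0.01)
```

Adding anchors or spreading them around the target shrinks the bound, which mirrors the trends across the panels of Fig. 2.14.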
Figure 2.14 shows examples of the CRLB for multiple scenarios; this gives the reader an idea of the effects of different configurations and makes it easier to compare this localization method to the one shown in Section 2.3.1. As can be seen, a much lower CRLB is obtained in Fig. 2.14 in comparison to the CRLB in Fig. 2.10. Nevertheless, the presented conventional TOA approach cannot be implemented for coherent distributed antenna arrays, as the secondary nodes are dynamic and it will never be possible to have exact knowledge of their locations. A modified TOA approach that can align the phase of coherent distributed arrays is presented in Chapter 7.
Figure 2.14: CRLB of the transmitter localization in terms of the radial error \sqrt{\sigma^2_{x_n} + \sigma^2_{y_n}} as a function of λ for TOA systems with (a) 3 nodes and σd = 0.01λ, (b) 3 nodes and σd = 0.1λ, (c) 3 nodes and σd = 0.1λ, (d) 4 nodes and σd = 0.1λ, (e) 4 nodes and σd = 0.05λ, (f) 5 nodes and σd = 0.1λ. The secondary node (anchor) locations are marked in red. The RMSE of the range (time delay) estimates is considered equal for all the anchors.
CHAPTER 3
FREQUENCY SYNCHRONIZATION REQUIREMENTS FOR OSCILLATORS AND PHASE-LOCKED LOOPS
Distributed beamforming between wireless systems requires significant coordination; a principal factor affecting the coherence in a distributed array is the relative stability of the local oscillators on every node. A coherent state is reached once all the nodes share the same reference frequency, which typically needs to be shared wirelessly, with the secondary nodes locking their frequencies to a primary source using signal processing or adjunct and internal circuits such as PLLs, among others. Large arrays or a crowded spectrum may prevent continuous frequency synchronization. Thus, frequent frequency updates might be required, during which the nodes are frequency locked and between which the frequency of the oscillators drifts.
The stability of the oscillator, the PLL parameters (in case PLLs are used for frequency synchronization), and the carrier frequency used, among others, all affect the synchronization requirements. The effect of PLL phase noise on the locked local oscillator is illustrated in Fig. 3.1. In this chapter I analyze the effect of varying the update time for various oscillator characteristics and synchronization models. A framework for analyzing the distributed beamforming performance from oscillator and PLL parameters is presented.

Figure 3.1: Phase noise profile of the wirelessly synchronized secondary nodes, where the reference oscillator has lower phase noise in comparison to the phase noise of the locked oscillator. From [6] © 2021 IEEE.

3.1 Phase Noise and Operation of Oscillators and Phase-Locked Loops

3.1.1 Oscillator Phase Noise

One of the important metrics for an oscillator is its spectral purity. Ideally, an oscillator can be described using a sine wave with nominal amplitude v_0 and nominal angular frequency w_0:

v(t) = v_0 \sin(w_0 t). \quad (3.1)

In practice, the amplitude and phase fluctuations of the output signal spread the spectrum of an oscillator to frequencies around w_0. These imperfections also lead to extra spurs and harmonics at the frequencies 2w_0, 3w_0, 4w_0, etc. The instantaneous output of the oscillator can be represented by

v(t) = v_0 \left[ 1 + A(t) \right] \sin\left[ w_0 t + \alpha(t) \right] + v_{harmonics}(t) + v_{spurs}(t), \quad (3.2)

where A(t) represents the amplitude fluctuations and α(t) refers to the phase fluctuations. Phase noise includes phase fluctuations, spurious signals (v_spurs(t)), and harmonics (v_harmonics(t)). Generally, spurious and harmonic signals are caused by the use of non-linear active devices such as MOSFETs and BJTs. Spurious signals are also caused by external noise sources such as noise on the power supply and bias currents.
The phase noise of an oscillator is quantified and represented by the single-sideband (SSB) phase noise, which is defined as the ratio of the power at an offset f_m in a 1 Hz bandwidth to the total power, for a single sideband. The phase noise is obtained from [90]

L(f_m) = 10 \log \left( \frac{P_{noise}(f_m)}{P_{sig}} \right) = 10 \log \left( \frac{v_{n,RMS}^2(f_m)}{v_{c,RMS}^2} \right), \quad (3.3)

where P_sig is the signal power, v_c,RMS is the root mean square (RMS) amplitude of the carrier, and v_n,RMS(f_m) is the RMS amplitude of the signal representing the phase noise at an offset frequency f_m. The phase noise in (3.3) is a result of both amplitude modulated and phase modulated noise; thus the oscillator phase noise can be expressed as

L(f_m) = 10 \log \left( \frac{S_\phi(f_m)}{2} + \frac{S_a(f_m)}{2} \right), \quad (3.4)

Figure 3.2: General shape of the phase noise of an oscillator on a log-log scale, modeled as the power-law sum L(f_m) = \sum_{i=-4}^{0} b_i f_m^i, with segments corresponding to frequency random walk, frequency flicker, white frequency, phase flicker, and white phase noise. The noise power associated with each phase noise type is represented by b_i. From [6] © 2021 IEEE.

where S_ϕ(f_m) represents the double-sideband (DSB) phase noise spectral density and S_a(f_m) represents the DSB amplitude noise spectral density. The contribution of phase modulated noise around the carrier dominates the contribution of amplitude modulated noise, making it the main contributor to the phase noise power. However, at far offsets, phase modulated and amplitude modulated noise contribute equally. The spectral profile of the phase noise of an oscillator can be modeled using linear segments in dBc/Hz, each with a different slope, where each segment represents a different type of noise, as shown in Fig. 3.2. The physical sources of noise in each region derive from the following [91]:

• Random walk frequency modulated (FM) noise (1/f^4) is usually within 10 Hz of the carrier.
This type of noise is related to the physical environment, such as temperature and vibration, and it causes the frequency of the oscillator to drift slowly over time.

• Flicker FM noise (1/f^3) is mainly related to the resonance of the oscillator. This noise is usually observed for high quality oscillators where the 1/f^2 and 1/f noise components are low and thus do not mask it.

• White FM noise (1/f^2) is common for oscillators with passive resonators.

• Flicker phase modulated (PM) noise (1/f) is generally observed in active components such as amplifiers. Flicker PM noise is always observed in oscillators, and the 1/f corner frequency is typically between 500 kHz and 1 MHz for CMOS technology, and between 1 kHz and 10 kHz for bipolar transistors. Beyond these frequencies, the noise profile is dominated by the white PM noise [92].

• White PM noise (f^0) is present for all active components and can be kept to low values with appropriate circuit design.

The phase noise of oscillators has been characterized in the literature using multiple models [93-95], with the Leeson model [93] being one of the earliest and best-known models. The Leeson model is based on a linear time-invariant (LTI) approach, and is given by

L(f_m) = 10 \log \left\{ \frac{F k T}{2 P_{sig}} \left[ 1 + \left( \frac{f_0}{2 Q_L f_m} \right)^2 \right] \left( 1 + \frac{f_k}{f_m} \right) \right\}, \quad (3.5)

where F is the effective noise figure of the oscillator, k is the Boltzmann constant 1.38×10^-23 J/K, T is the temperature in K, f_0 is the nominal frequency of the oscillator, Q_L is the loaded quality factor of the resonator, and f_k is the flicker noise corner frequency in the phase noise, which is not always equal to the flicker noise corner frequency of the active devices. Thus, minimizing the phase noise of an oscillator can be done by tackling the three parameters F, P_sig, and Q_L, where F needs to be minimized while P_sig and Q_L need to be maximized.
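The Leeson model (3.5) is straightforward to evaluate numerically. The sketch below uses hypothetical oscillator parameters (100 MHz carrier, loaded Q of 10^4, 0 dBm output, noise factor F = 4, 1 kHz flicker corner) purely to illustrate the shape of the curve:

```python
import math

def leeson_phase_noise_dbchz(fm, f0, q_loaded, psig_w, noise_factor, fk,
                             t_kelvin=290.0):
    """SSB phase noise L(fm) in dBc/Hz from the Leeson model (3.5).

    fm: offset frequency (Hz); f0: carrier (Hz); q_loaded: loaded Q;
    psig_w: signal power (W); noise_factor: linear noise factor F;
    fk: flicker corner frequency (Hz).
    """
    k = 1.38e-23  # Boltzmann constant, J/K
    l_lin = (noise_factor * k * t_kelvin / (2.0 * psig_w)
             * (1.0 + (f0 / (2.0 * q_loaded * fm)) ** 2)
             * (1.0 + fk / fm))
    return 10.0 * math.log10(l_lin)

# Hypothetical 100 MHz oscillator: L(fm) falls with offset until it reaches
# the thermal floor 10*log10(F*k*T / (2*Psig)), here about -171 dBc/Hz
for fm in (10.0, 1e3, 1e5):
    print(fm, leeson_phase_noise_dbchz(fm, f0=100e6, q_loaded=1e4,
                                       psig_w=1e-3, noise_factor=4.0, fk=1e3))
```

Raising P_sig or Q_L lowers the whole curve, which is exactly the minimization guidance stated above.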
3.1.2 General Architecture of PLLs

A PLL is a control system that locks the phase, and subsequently the frequency, of a voltage-controlled oscillator (VCO) to the phase of a reference input signal. The output signal of a PLL is based on the phase difference between the output of the VCO and the reference signal [90, 96-98].

Figure 3.3: Typical PLL architecture with frequency synthesizer. F_ref represents the input frequency to the PLL; this frequency can be a divided-down version of the input reference frequency, generated using frequency dividers. F_out is the output frequency of the VCO, while F_feedback is a divided version of F_out. From [6] © 2021 IEEE.

Fig. 3.3 shows a simplified block diagram of a PLL with a frequency synthesizer (or integer-N programmable divider). A frequency synthesizer is an electronic circuit that generates multiple frequencies from a single reference frequency, and it is mainly used to ensure that the feedback frequency is equal to the input reference frequency. The main blocks in Fig. 3.3 are

• Integer-N programmable divider: using a controller, the integer-N programmable divider functions as a frequency divider. The frequency divider is used to ensure that the frequency of the feedback signal is equal to the frequency of the input reference. In many cases the input reference signal is also fed through another programmable divider (an integer-M programmable divider, for example) before being fed to the PLL in order to match the output frequency of the integer-N programmable divider [90].

• Phase frequency detector (PFD): initially, phase detectors (PDs) were used to compare the phase of the incoming reference signal with the phase of the VCO. A PD can be represented by a mixer for analog signals and by an XOR circuit for digital signals, among other configurations.
A PD outputs a signal that is proportional to the phase difference between two input signals of the same frequency. The PD produces a series of output pulses whose width is proportional to the phase difference, which are afterwards smoothed by the loop filter. The problem with PDs is that they only lock to phase and not frequency. A PFD is an enhanced version of the PD that locks the VCO phase and frequency to those of the incoming signal, which leads to higher sensitivity and faster frequency locking. PFDs can be built using D flip-flops or NAND gates [99].

• Charge pump (CP): the CP acts like a DC-to-DC converter that takes the output of the PFD as its input and produces either a positive or a negative voltage with high efficiency, allowing the VCO to quickly lock onto the phase of the incoming signal.

• Loop filter: a low-pass filter (LPF) is used in a PLL to minimize the noise. The LPF filters out the in-band noise contributed by the PFD and CP, leaving only the much lower VCO noise outside the loop bandwidth. When the VCO is locked to the reference input, the desirable configuration of a PLL is called a clean-up PLL, where the LPF is designed to be narrow, typically below 1 kHz. On the other hand, when the PLL is not locked to a reference signal, the LPF can have a wider bandwidth to allow the PLL to lock to signals with larger frequency offsets than the cut-off frequency of the clean-up PLL.

• VCO: the VCO inside a PLL tends to have higher phase noise than the reference signal, at least if the reference signal is fed through a cable. Once locked, the frequency and phase of the VCO are locked to those of the primary node or reference system. Since the phase and frequency of local oscillators drift over time, regardless of how stable they are, the locked VCO and the reference oscillator both drift in frequency and phase over time, but without any relative phase or frequency drift between them.
Nevertheless, in the case of wireless frequency synchronization, there will always be a varying relative phase offset between the reference oscillator and the VCO, as will be shown in Chapter 4.

An important feature to monitor when frequency synchronization is performed using pulsed signals is the locking time. It is necessary to know the required pulse width that enables the PLL to lock. It is possible to estimate the locking time by first obtaining the transfer function of the PLL, which can be represented as a simple negative feedback system as shown in Fig. 3.4.

Figure 3.4: PLL represented using a negative feedback model.

This system gives a transfer function of the form [90]

T(s) = \frac{\theta_{out}(s)}{\theta_{ref}(s)} = \frac{G(s)}{1 + G(s)H(s)}, \quad (3.6)

where θ_out is the output phase of the PLL loop, θ_ref is the phase of the reference signal, H(s) = 1/N, and G(s) is the forward transfer function. G(s) of the presented PLL architecture is expressed as

G(s) = \frac{I_{CP}}{2\pi} \cdot F(s) \cdot \frac{2\pi K_{VCO}}{s}, \quad (3.7)

where I_CP/2π is the gain of the PFD and CP, I_CP is the CP current, 2πK_VCO/s is the gain of the VCO, K_VCO is the sensitivity of the VCO, and F(s) is the transfer function of the low-pass filter. This study is based on the model in [90], where the filter was chosen as shown in Fig. 3.5 and has the transfer function

F(s) = \frac{V_{cnt}}{I_{CP}} = \frac{s R_z C_z + 1}{s \left( s R_z C_z C_p + C_z + C_p \right)}, \quad (3.8)

where the resistance R_z and the capacitances C_z and C_p are the filter passive components shown in Fig. 3.5. The PLL transfer function can be expressed as a second-order transfer function as follows [97]:

T(s) = \frac{2\zeta w_n s + w_n^2}{s^2 + 2\zeta w_n s + w_n^2}, \quad (3.9)

where

\zeta = \frac{R_z C_z}{2} w_n, \quad (3.10)

and

w_n = \sqrt{\frac{I_{CP} K_{VCO}}{N (C_p + C_z)}}, \quad (3.11)

Figure 3.5: A three-element LPF with resistive and capacitive components.

where N represents the frequency divider value.
By taking the inverse Laplace transform of (3.9), the settling or lock time can be approximated as

t_{lock} = - \frac{\ln \left( tolerance_{fraction} \times \sqrt{1 - \zeta^2} \right)}{\zeta w_n}. \quad (3.12)

For PLL applications, tolerance_fraction represents f_accuracy / f_jump, where f_accuracy represents the acceptable error margin and f_jump represents the frequency deviation from f_0 [100]. In this case, the locking time can be expressed as

t_{lock} = - \frac{\ln \left( \frac{f_{accuracy}}{f_{jump}} \times \sqrt{1 - \zeta^2} \right)}{\zeta w_n}. \quad (3.13)

While PLLs have a finite locking time to achieve a phase-locked state, especially when they are locked to pulsed signals, this time is on the order of microseconds [90, 101], which is significantly shorter than a typical frequency synchronization pulse and can easily be accommodated. It is important to note that the phase transfer function derived in (3.6) is only valid when the VCO is locked to the frequency of the reference signal.

3.1.3 PLL Phase Noise

The phase noise of a VCO placed inside a PLL is shaped in accordance with the PLL specifications, the reference signal, and its phase noise characteristics. In a free-running configuration, when the PLL is not locked to a reference frequency, the VCO phase noise is the same as its original phase noise as specified in the datasheet. On the other hand, when the VCO is operating inside a locked PLL, the phase noise of the VCO is referred to as PLL phase noise and is characterized by the transfer function of the PLL, which is obtained by evaluating the noise contribution from every block in the PLL. The phase noise in PLLs has been characterized using multiple models; in this section the model derived in [90] is of interest. The model is shown in Fig. 3.6.

Figure 3.6: PLL noise model.
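Equations (3.10), (3.11), and (3.13) can be combined into a quick numerical estimate of the lock time. The loop values below are hypothetical (chosen only to keep the loop underdamped, ζ < 1); K_VCO is taken in Hz/V so that the 2π factors of (3.7) cancel in w_n:

```python
import math

def pll_lock_time(icp, kvco, n, rz, cz, cp, f_jump, f_accuracy):
    """Approximate PLL settling time from (3.10), (3.11), and (3.13).

    icp: charge-pump current (A); kvco: VCO sensitivity (Hz/V);
    n: feedback divider; rz, cz, cp: loop-filter elements (ohm, F, F);
    f_jump: frequency step (Hz); f_accuracy: acceptable error (Hz).
    """
    wn = math.sqrt(icp * kvco / (n * (cp + cz)))  # natural frequency, (3.11)
    zeta = rz * cz * wn / 2.0                     # damping factor, (3.10)
    tol = f_accuracy / f_jump
    return -math.log(tol * math.sqrt(1.0 - zeta**2)) / (zeta * wn)

# Hypothetical loop: 5 mA CP, 50 MHz/V VCO, N = 15, settling to within
# 1 kHz of a 30 MHz frequency jump
t = pll_lock_time(icp=5e-3, kvco=50e6, n=15, rz=33.0, cz=100e-9, cp=10e-9,
                  f_jump=30e6, f_accuracy=1e3)
print(t)  # tens of microseconds for these values
```

Consistent with the discussion above, the result is far shorter than a typical synchronization pulse.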
The term θ_ref is represented in rad/√Hz and includes the phase noise of the reference input, which combines the noise of the reference crystal oscillator, crystal buffer, and reference frequency divider. The noise caused by the integer-N programmable divider is represented by θ_div and is expressed in rad/√Hz. θ_VCO is the phase noise of the VCO, represented in rad/√Hz. The term θ_PFD represents the PFD noise in rad/√Hz. The noise at the VCO input due to the loop filter and other noise sources coupled to the control line is represented by v_n,cnt in V/√Hz. The term i_n,cp represents the noise of the CP current, expressed in A/√Hz. The total noise is calculated by adding the contributions from all the blocks in an RMS sum:

\theta_{out}^2 = \left| \frac{G(s)}{1 + G(s)H(s)} \right|^2 \cdot \left[ \theta_{ref}^2 + \theta_{div}^2 + \theta_{pfd}^2 + i_{n,cp}^2 \cdot \left( \frac{2\pi}{I_{CP}} \right)^2 \right] + \left| \frac{1}{1 + G(s)H(s)} \right|^2 \cdot \left[ \theta_{VCO}^2 + v_{n,cnt}^2 \cdot \left( \frac{2\pi K_{VCO}}{s} \right)^2 \right]. \quad (3.14)

Figure 3.7: PLL phase noise shape based on the selection of the loop bandwidth, where f_LF represents the frequency of the loop filter of the PLL. The bandwidth can be: (a) optimum, (b) large, (c) narrow.

The complete derivation of the PLL phase noise can be found in [90]. As can be seen, the factor G(s)/[1 + G(s)H(s)] acts as a low-pass filter for the phase noise contributions from the reference, PFD, divider, and CP. On the other hand, the factor 1/[1 + G(s)H(s)] acts as a high-pass filter for the VCO phase noise and the phase noise generated by the control line. Thus, within the loop bandwidth (up to the frequency f_LF), the phase noise of the VCO and control line is suppressed. Knowing that, the bandwidth of the PLL must be chosen at the point where the VCO phase noise intersects the close-in phase noise, as shown in Fig. 3.7a; otherwise the behavior in Figs. 3.7b and 3.7c is observed.
The close-in phase noise in dBc/Hz is represented as

L_0 = L_{PLL,nf} + 20 \log(N) + 10 \log(f_{ref}), \quad (3.15)

where L_PLL,nf represents the PLL noise floor due to its circuitry. A typical PLL output phase noise is represented in Fig. 3.8; the first region represents the reference phase noise, the second is dominated by L_0, and the third region is dominated by the phase noise of the VCO.

Figure 3.8: Typical PLL phase noise profile. In the first region, the phase noise of the reference signal dominates the PLL phase noise; the close-in noise dominates region 2, leading to an increase in the PLL noise floor; finally, region 3 is dominated mainly by the phase noise of the locked VCO. f_LF, which represents the frequency of the loop filter of the PLL, needs to be selected appropriately, as shown in Fig. 3.7, to minimize the phase noise of the PLL. From [6] © 2021 IEEE.

Figure 3.9: Example of clock jitter for a real clock compared to the ideal time-domain performance of a clock, where no jitter is observed.

3.2 Phase Noise Interpretation of Locked and Unlocked Oscillators in Frequency and Time

3.2.1 Time Jitter

The phase noise is a representation of the spectral impurity of an oscillator. In the time domain, this noise appears as time jitter in seconds or phase jitter in radians, as shown in Fig. 3.9. The RMS phase jitter in radians and the RMS time jitter in seconds can be extracted from the phase noise profile of a VCO or PLL. At frequencies lower than 10 Hz (around the nominal frequency), the phase noise is dominated by the frequency drift and frequency random walk, while at frequencies higher than 10 Hz, the phase noise is shaped by the jitter. Since the clock of a system is obtained from the VCO, they both experience similar drift and jitter effects. The phase noise of a VCO can be represented as shown in Fig.
3.2, while the phase noise of a PLL is shaped similarly to Fig. 3.8. Regardless of the phase noise profile, the integrated phase noise power A is obtained by evaluating

A = 10 \log \left( A_1 + \cdots + A_n \right), \quad (3.16)

where the total area A from Figs. 3.8 or 3.2 is broken into the separate areas A_1 through A_n. As an example, the area A_i can be calculated using

A_i = 10^{\left\{ \left[ L(f_i) + L(f_{i+1}) \right]/2 + 10 \log(f_{i+1} - f_i) \right\}/10}. \quad (3.17)

The phase noise can be integrated up to any desired frequency depending on the application and the hardware; the value f_0/2 was chosen in this dissertation, which is half the nominal frequency of a given oscillator. Integrating the phase noise to higher values has negligible effect for low quality oscillators, since they have high noise power far below the f_0/2 threshold; however, for very stable oscillators, this extra integration bandwidth can have a significant effect on the calculated jitter, and in most cases it might not represent the actual jitter observed on the oscillator. The RMS phase jitter is obtained from

\text{Phase Jitter} = \sqrt{2 \times 10^{A/10}}. \quad (3.18)

Similarly, the RMS time jitter is obtained from

\text{Time Jitter} = \frac{\text{Phase Jitter}}{2\pi f_0}. \quad (3.19)

3.2.2 Allan Deviation and Frequency Drift

The Allan deviation (ADEV), or the Allan variance (AVAR), which is the square of the ADEV, is used to evaluate the stability of systems affected by noise processes; they are represented by σ_y(τ) and σ_y²(τ). Two-sample or M-sample AVAR and ADEV are commonly used for sensor and oscillator characterization to evaluate how much they vary over a given time. The M-sample AVAR and ADEV are represented by σ_y²(M, T, τ) and σ_y(M, T, τ), where T represents the time between measurements and τ the observation time for a single measurement. Evaluation of oscillators mainly relies on the two-sample AVAR σ_y²(τ) = ⟨σ_y²(2, τ, τ)⟩, where T = τ and M = 2 [102].
Nevertheless, the M-sample AVAR is obtained from

\sigma_y^2(M, T, \tau) = \frac{1}{M - 1} \left\{ \sum_{i=1}^{M} \bar{y}_i^2 - \frac{1}{M} \left[ \sum_{i=1}^{M} \bar{y}_i \right]^2 \right\}, \quad (3.20)

where ȳ is the average fractional frequency obtained from

\bar{y}(t, \tau) = \frac{1}{\tau} \int_0^\tau y(t + t_v) \, dt_v, \quad (3.21)

where the average is taken over the observation time τ, and y(t) is the fractional-frequency error, which can be obtained from

y(t) = \frac{f(t) - f_0}{f_0}, \quad (3.22)

where f(t) is the oscillator frequency and f_0 is the nominal frequency. Note that ȳ(t, τ) may also be obtained from

\bar{y}(t, \tau) = \frac{x(t + \tau) - x(t)}{\tau}, \quad (3.23)

where x(t) is the clock value. Since in our case we are interested in the two-sample AVAR, σ_y²(τ) is derived from (3.20) and is expressed as

\sigma_y^2(\tau) = \langle \sigma_y^2(2, \tau, \tau) \rangle = \frac{1}{2} \langle (\bar{y}_{n+1} - \bar{y}_n)^2 \rangle. \quad (3.24)

AVAR or ADEV can be deduced through transformation from a typical phase noise profile such as the one shown in Fig. 3.2, as explained in [103, 104]. The line segments representing σ_y²(τ) are shown in Fig. 3.10 and are analyzed in [105].

Figure 3.10: AVAR profile for an oscillator on a log-log scale. τ_f,min represents the instance where the AVAR becomes dominated mainly by the random walk FM noise. Later on, when analyzing the RMS frequency drift, it is important to select a value τ_f ≥ τ_f,min. From [6] © 2021 IEEE.

The main interest in σ_y²(τ) or σ_y(τ) is due to their ability to characterize the random frequency walk and frequency drift (for the remainder of this document, both the random frequency walk and the frequency drift are referred to as frequency drift). As mentioned in Section 3.2.1, the frequency drift is evaluated using the frequencies within 10 Hz of the nominal frequency in the phase noise profile. Nevertheless, characterizing the first 10 Hz of the phase noise is almost always impossible, especially for the frequencies within a few Hz of the nominal frequency (if possible at all, extremely expensive hardware is needed to measure the phase noise at very low frequencies).
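In practice, (3.24) can be estimated directly from consecutive fractional-frequency measurements; a minimal sketch (assuming gap-free averages, each taken over the same observation time τ):

```python
import math

def two_sample_adev(ybar):
    """Two-sample Allan deviation sigma_y(tau) per (3.24), computed from
    consecutive fractional-frequency averages ybar_n over a common tau."""
    diffs = [(b - a) ** 2 for a, b in zip(ybar, ybar[1:])]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# Toy data: fractional-frequency error alternating between 0 and 1e-10, so
# every first difference is 1e-10 and sigma_y = sqrt(0.5) * 1e-10
ybar = [0.0, 1e-10] * 4
print(two_sample_adev(ybar))  # ≈ 7.07e-11
```

The same routine applied to measured ȳ sequences at several τ values traces out the AVAR profile of Fig. 3.10.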
AVAR offers an alternative way to study the frequency drift, as follows: an ADEV of 10^-10 for τ = 1 s represents an instability in frequency between two observations 1 s apart with a relative RMS value of 10^-10. Thus, a 0.001 Hz RMS drift is observed after 1 s for a 10 MHz oscillator. This metric allows the synchronization interval to be tuned; following (3.25), a 0.0001 Hz RMS drift can be attained with a 100 Hz update rate, and lower RMS drift values with higher update rates. Even though this frequency drift value seems small, when generating a 5 GHz carrier from a similar VCO, an RMS drift of (5 GHz / 10 MHz) × 0.001 Hz = 0.5 Hz is observed after 1 s, which could lead to a dramatic phase shift over time between the nodes in the array. Since it is possible to design the update time based on the tolerable frequency drift among the nodes in the distributed array, the RMS frequency drift values can be interpolated and obtained for the update times of interest using

\text{RMS Frequency Drift} = \sigma_y(\tau_f) f_0 \sqrt{\frac{T}{\tau_f}}, \quad (3.25)

where T is the update time, and τ_f is chosen once the random-frequency-walk asymptote becomes visible in the AVAR, ideally at τ_f,min in Fig. 3.10. At τ_f,min, the variance caused by the phase noise at frequencies higher than 10 Hz softens, and the AVAR becomes dominated by the random frequency walk and frequency drift; thus any value at or beyond τ_f,min is usable. AVAR or ADEV can be evaluated for a free-running VCO or for a VCO locked using pulsed synchronization signals, since between the synchronization pulses the frequency of the VCO drifts relative to the reference source. On the other hand, when a VCO is frequency locked using CW signals, there is no relative frequency drift between the VCO and the reference; for the case of coherent distributed arrays, this translates to no relative frequency drift between the secondary nodes and the primary node. For the case of a VCO locked with pulsed signals, the frequency drift reaches its maximum just before the beginning of the next update interval.
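The interpolation in (3.25) reduces to a one-line scaling; the sketch below reproduces the 10 MHz example above (σ_y(1 s) = 10^-10 is an assumed value):

```python
import math

def rms_frequency_drift(adev_at_tauf, f0, tau_f, update_time):
    """RMS frequency drift (Hz) accumulated over one update interval,
    interpolated from the ADEV at tau_f per (3.25)."""
    return adev_at_tauf * f0 * math.sqrt(update_time / tau_f)

# 10 MHz oscillator with ADEV(1 s) = 1e-10
print(rms_frequency_drift(1e-10, 10e6, 1.0, 1.0))    # 0.001 Hz after 1 s
print(rms_frequency_drift(1e-10, 10e6, 1.0, 0.01))   # 0.0001 Hz with 10 ms updates
# Scaled to a 5 GHz carrier generated from the same oscillator
print((5e9 / 10e6) * rms_frequency_drift(1e-10, 10e6, 1.0, 1.0))  # 0.5 Hz
```

The square-root dependence on T is what makes shorter update intervals an effective lever on the residual drift.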
3.2.3 Time Drift

Since the time of a clock is extracted from the cycles of the VCO, the RMS time drift is linked to the RMS frequency drift. However, the time deviation (TDEV), which represents the standard deviation of the timing errors of a clock/oscillator, is used instead of the ADEV measurements, which only assess the spectral purity of a VCO. Similar to the processing of AVAR, the TDEV value of interest resides at an observation time τ_t equal to or greater than τ_t,min, at which the effect of the time drift dominates the effect of the time jitter. τ_t,min represents the time at which the TDEV starts to have a positive slope. The RMS time drift at any time τ or any update time T is interpolated using

\text{RMS Time Drift} = \text{TDEV}(\tau_t) \sqrt{\frac{T}{\tau_t}}. \quad (3.26)

Similar to the frequency drift, time drift is only present if the frequency synchronization is not achieved using CW signals. In contrast to the RMS frequency drift, the RMS time drift has a lower impact on the coherence of distributed arrays, since the timing errors only affect the modulation on a given carrier f_c, and the tolerated timing errors are inversely proportional to the bandwidth of the modulated signal rather than to the carrier frequency, as is the case for frequency errors.

3.3 Distributed Beamforming Analysis

This section sets a framework to evaluate the effects of phase and time jitter, frequency drift, and time drift on the coherent gain G_c from (2.1). The summation of the received signals from N nodes in the far field, s_r(t) = \sum_{n=0}^{N-1} s_{r,n}(t), is modified to include the effects of instantaneous phase, frequency, and time errors that are observed in a realistic scenario where frequency locking using PLLs is implemented with pulsed synchronization signals. Also, both s_r(t) and the ideal summation of the received signals from N nodes in the far field, s_i(t) = \sum_{n=0}^{N-1} s_{i,n}(t), are modified to include any desired signal modulation when evaluating the effect of timing errors.
Performance degradation due to errors in the calibration of the propagation channel, frequency, time, and phase can be studied separately or evaluated as compounded errors, as in Chapter 2, where the localization errors were investigated separately. Two types of transmissions are considered to evaluate the coherent gain in terms of the noise parameters in Section 3.2. The first case is the transmission of a CW signal (from the coherent distributed array), where the effects of phase jitter, phase shift due to frequency shift, and frequency shift are observed. This scenario is evaluated with both CW synchronization signals and pulsed synchronization signals. The second case is the transmission of phase-modulated signals, where the effects of time jitter and time drift are added to the received signals observed in the first case. The ideal and non-ideal received signals from N nodes in the far field are denoted by s_i,k(t) and s_r,k(t), where k is set to 1 for the case of CW transmission by the array and to 2 for the case of modulated-signal transmission. For the first case we have

s_{i,1,n}(t) = e^{j \frac{f_{c,n}}{f_0} (2\pi t f_0)}. \quad (3.27)

The non-ideal received signals are represented by

s_{r,1,n}(t) = e^{j \frac{f_{c,n}}{f_0} \left( 2\pi t [f_0 + \delta f_n] + [\phi_{0,n} + \delta\phi_n] \right)}, \quad (3.28)

where f_c,n represents the carrier frequency at node n, and the frequency drift δf_n for the nth oscillator is taken from a normal distribution with a standard deviation equal to the RMS value from (3.25). Similarly, δϕ_n represents the phase jitter, taken from a normal distribution with a standard deviation equal to the RMS value in (3.18). After every update time, ϕ_0,n is added to compensate for the extra phase shift obtained from considering the frequency drift constant throughout the transmission. The frequency drift at a time t leads to a frequency offset δf_n that varies over time to reach a certain final value before the next frequency/phase update, as shown in (3.25).
It is not possible to track the exact frequency shift over time, since for every synchronization cycle the frequency drifts differently. Nevertheless, the change of frequency over an interval can be approximated as linear with slope t/T; this averaging of the frequency change over an interval T represents the average case. The phase at a time T with a linear frequency shift is given by

\phi_{actual} = 2\pi \int_0^T \left( f_0 + \frac{t}{T} \delta f_n \right) dt. \quad (3.29)

The total phase error at time T is then

\phi_0 = \phi_{actual} - 2\pi \int_0^T \left( f_0 + \delta f_n \right) dt. \quad (3.30)

As for the second scenario, the transmitted signal is represented by a BPSK signal [106], where a modulation/phase mismatch results in nulling of the beamformed signals. The bandwidth of the signal was selected depending on the carrier frequency, and in general a high bandwidth was assigned to the modulation in order to account for the worst-case scenario while evaluating the coherent gain. The ideal signal is given by

s_{i,2,n}(t) = \cos \left( \frac{f_{c,n}}{f_0} (2\pi t f_0) + \pi(1 - b(t)) \right), \quad (3.31)

where b(t) = 0 or 1, representing a phase shift of 180° or 0°. The non-ideal received signals s_r,2,n are represented by

s_{r,2,n}(t) = \cos \left( \frac{f_{c,n}}{f_0} \left( 2\pi t (f_0 + \delta f_n) + \phi_{0,n} + \delta\phi_n \right) + \pi(1 - b(t + \delta t_n)) \right), \quad (3.32)

where b(t + δt_n) is the desired bit for the time t + δt_n (the bit cannot fluctuate between 0 and 1 if the time jitters back and forth; once it is assigned to the next bit, it cannot be reassigned to the previous one). δt_n represents both the time drift and the jitter and is taken from a normal distribution with a standard deviation equal to the RMS value in (3.19). An offset equal to the random time drift is added to the normal distribution; this offset was obtained from a normal distribution with a standard deviation equal to the RMS time drift obtained from (3.26).
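To make the CW error model concrete, the following Monte Carlo sketch (with hypothetical parameter values) draws a per-node phase jitter and frequency drift in the spirit of (3.28) and estimates the coherent gain G_c as the ratio of the achieved to the ideal beamformed power:

```python
import cmath
import math
import random

def coherent_gain(n_nodes, phase_jitter_rms, rms_freq_drift_hz, t,
                  trials=2000, seed=1):
    """Monte Carlo estimate of Gc = <|sum s_r|^2> / |sum s_i|^2 for CW
    transmission: each node's phase error per trial is its phase jitter plus
    the phase accumulated by its frequency drift over t seconds."""
    rng = random.Random(seed)
    gc_sum = 0.0
    for _ in range(trials):
        total = 0 + 0j
        for _ in range(n_nodes):
            dphi = rng.gauss(0.0, phase_jitter_rms)       # phase jitter (3.18)
            dfreq = rng.gauss(0.0, rms_freq_drift_hz)     # freq. drift (3.25)
            total += cmath.exp(1j * (2.0 * math.pi * dfreq * t + dphi))
        gc_sum += abs(total) ** 2 / n_nodes ** 2          # ideal power is N^2
    return gc_sum / trials

# Two nodes, mild jitter, 0.5 Hz RMS drift at the carrier, 0.1 s after an update
print(coherent_gain(n_nodes=2, phase_jitter_rms=2.7e-3,
                    rms_freq_drift_hz=0.5, t=0.1))  # ≈ 0.95
```

The simplification here is that the drift-induced phase is evaluated only at the single instant t; the dissertation's framework instead accumulates it over the interval per (3.29)-(3.30).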
3.4 Practical Examples

In this section I use the above framework to evaluate, through multiple examples, the impact of oscillator phase noise on the coherent beamforming gain of a distributed phased array. I consider an array of multiple scattered nodes and use a threshold performance metric of G_c ≥ 0.9, indicating that the distributed beamforming operation achieves 90% of the ideal beamforming gain (a degradation of less than 0.5 dB). Array sizes N = 2, 10, 20, and 100 are considered. Performance bounds on the mainbeam gain at frequencies extending from the microwave to the millimeter-wave bands, using two nominal 100 MHz oscillator phase noise profiles, are considered. First, the coherent gain is analyzed for the case where the reference signal is transmitted continuously without interruption. Next, the coherent gain is evaluated for the case where the reference signal is transmitted using pulsed waveforms with a varying update time T. The reference signal is assumed to be broadcast wirelessly from an arbitrary location within the array. Each node receives the reference signal and inputs it to its PLL. As noted above, the locking time for a typical PLL is on the order of microseconds; it is reasonable to assume that the pulsed reference signal is of sufficiently long duration to support locking. The two oscillator phase noise profiles shown in Table 3.1 are considered; VCO A represents a typical VCO, while VCO B represents a low phase noise VCO. The phase noise data, ADEV, and TDEV used in this work are similar to what is available on the market or in the literature, such as in [107-110].

© 2021 IEEE. Section 3.4 is reprinted with minor modifications, with permission, from Section V of "S. R. Mghabghab and J. A. Nanzer, "Impact of VCO and PLL Phase Noise on Distributed Beamforming Arrays with Periodic Synchronization," IEEE Access, vol. 9, pp. 56578-56588, 2021."

Table 3.1: SSB phase noise of the selected 100 MHz oscillators.
Offset frequency    VCO A          VCO B
f1 = 10 Hz          -60 dBc/Hz     -110 dBc/Hz
f2 = 100 Hz         -90 dBc/Hz     -140 dBc/Hz
f3 = 1 kHz          -120 dBc/Hz    -165 dBc/Hz
f4 = 10 kHz         -145 dBc/Hz    -185 dBc/Hz
f0/2 = 50 MHz       -155 dBc/Hz    -190 dBc/Hz

The phase noise of the VCO inside a PLL has two possible behaviors depending on the locking status of the PLL. When the PLL is not receiving the frequency locking signals, the VCO is in free-run mode, and the PLL output phase noise corresponds solely to that of the VCO. On the other hand, when the PLL is frequency locked, its phase noise has a shape similar to that in Fig. 3.8. In this section the coherent gain is evaluated in a coherent distributed array for three architectures: 1) the reference signal is generated using VCO A (the high phase noise oscillator) and the secondary nodes are equipped with the same VCO A; 2) the reference signal is generated using VCO B (the low phase noise oscillator) and the secondary nodes are equipped with the same VCO B; and 3) the reference signal is generated using VCO B from an external transmitter and the secondary nodes are equipped with VCO A. The case where the reference signal is generated using VCO A from an external transmitter and the secondary nodes are equipped with VCO B was not considered, since in this case the phase noise of the reference VCO would be much higher than the phase noise of the VCOs used by the secondary nodes, leading to an overall system performance similar to the first architecture. Two assumptions are made in this section. First, it is assumed that the phase noise profile of the input reference signal matches that of the reference oscillator. In practice, the phase noise is usually higher for the input reference signal, since a large path loss might be present between the primary and secondary nodes, which would lower the SNR of the received signal at the secondary node. This decrease in signal power may then lead to an increase in white phase noise.
In addition, interference might be present, which would further increase the phase noise.

Figure 3.11: Phase noise of the locked PLLs for architectures: (a) 1, (b) 2, and (c) 3. In architecture 1, VCO A was locked by the PLLs of the secondary nodes and was also used to generate the reference signals. In architecture 2, VCO A was replaced by VCO B. In architecture 3, VCO B was used to generate the reference signals, while VCO A was locked by the PLLs of the secondary nodes. In all the architectures, the loop filter was assumed to be designed appropriately to minimize the phase noise of the PLLs.

In order to deal with this assumption in practice, it is possible to estimate the phase noise of the input reference signal for the worst-case scenario and thereby bound the beamforming performance. Second, I consider the case where no frequency multiplication or division is needed (which is the case when the frequency of the input signal is equal to the nominal frequency of the oscillator). In this case the phase noise of the input reference signal is equal to the phase noise of the input signal to the PLL. If the reference frequency requires multiplication/division by a factor M, the phase noise of the input reference increases/decreases by 20 log(M) dB. In the case where the same VCO is used as both the reference and the PLL VCO, the phase noise of the continuously locked PLL resembles the phase noise of the VCO, since region 1 in Fig. 3.8 will overlap region 3. This means that the phase noise characteristics of the VCOs can be taken as the phase noise characteristics at the output of the PLL for architectures 1 and 2. For the third architecture, the phase noise at the output of the locked PLL resembles that of Fig. 3.8. The close-in phase noise of the PLL was taken to be -130 dBc/Hz up to the loop filter cutoff frequency (matching typical PLL values), with a rapid decrease in magnitude beyond that point.
In all the architectures, the loop filter was assumed to be appropriately selected to match the profile in Fig. 3.8. Based on the values in Table 3.1 and the approximated close-in phase noise of the PLL, the resulting PLL phase noise for the three selected architectures is shown in Fig. 3.11.

Figure 3.12: Coherent gain for various carrier frequencies for arrays of sizes N = 2, 10, 20, and 100 with continuous locking for architecture 1 (using the high phase noise oscillator as the reference and PLL VCO) with the phase noise profile from Fig. 3.11a.

Figure 3.13: Coherent gain for various carrier frequencies for arrays of sizes N = 2, 10, 20, and 100 with continuous locking for architecture 2 (using the low phase noise oscillator as the reference and PLL VCO) with the phase noise profile from Fig. 3.11b.

For VCO A, the RMS phase jitter is 2.7 × 10−3 rad and the RMS time jitter is 4.37 × 10−12 s, calculated from (3.18) and (3.19), respectively. For VCO B, the RMS phase jitter is 9.95 × 10−6 rad and the RMS time jitter is 1.58 × 10−14 s. For the third architecture, when the nodes are continuously locked, the RMS phase jitter is 1.85 × 10−4 rad and the RMS time jitter is 2.95 × 10−13 s; in the intervals where the VCOs are not frequency locked, their RMS phase and time jitter are equal to those of their internal VCOs (VCO A in this case). The ADEV for VCO A at τf = 1 s is σy(1) = 10−10, while for VCO B it is σy(1) = 2.3 × 10−12. The selected TDEV at τt = 1 s for VCO A is TDEV(1) = 8 × 10−11 s, while for VCO B it is TDEV(1) = 2 × 10−12 s. The τf and τt values were inspired by datasheets of commercially available VCOs such as [107]; a common practice is for datasheets to show either the entire ADEV curve of the VCO or to report the value at τf = 1 s, since this value is usually close to τf,min; similar practice is followed for TDEV.
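The RMS jitter values quoted above can be approximated directly from the tabulated SSB phase noise. The exact forms of (3.18) and (3.19) are not reproduced here; the sketch below uses the common approximation σφ = sqrt(2∫L(f)df) with straight-line (piecewise power-law) segments between the offsets of Table 3.1, ignoring the profile below 10 Hz. The `rms_jitter` helper is my own naming, and the result depends on the integration limits chosen, so it lands near, not exactly on, the quoted VCO A values.

```python
import math

# SSB phase noise of VCO A from Table 3.1: (offset frequency in Hz, L(f) in dBc/Hz).
vco_a = [(10, -60), (100, -90), (1e3, -120), (1e4, -145), (50e6, -155)]
f0 = 100e6  # nominal oscillator frequency, Hz

def rms_jitter(profile, f0):
    """Integrate the SSB phase noise assuming straight-line segments on a
    log-log plot (piecewise power law), the usual datasheet convention."""
    total = 0.0
    for (fa, la), (fb, lb) in zip(profile, profile[1:]):
        pa = 10 ** (la / 10)                         # dBc/Hz -> linear (1/Hz)
        m = (lb - la) / (10 * math.log10(fb / fa))   # power-law slope of the segment
        if abs(m + 1) < 1e-9:                        # 1/f segment: logarithmic integral
            total += pa * fa * math.log(fb / fa)
        else:
            total += pa * fa / (m + 1) * ((fb / fa) ** (m + 1) - 1)
    phase_rms = math.sqrt(2 * total)                 # rad (factor 2: both sidebands)
    time_rms = phase_rms / (2 * math.pi * f0)        # s
    return phase_rms, time_rms

phase_rms, time_rms = rms_jitter(vco_a, f0)
print(f"RMS phase jitter: {phase_rms:.2e} rad, RMS time jitter: {time_rms:.2e} s")
```

With these limits the integration gives roughly 3 × 10−3 rad, the same order as the 2.7 × 10−3 rad quoted for VCO A.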
Whenever the entire ADEV and TDEV data are available, it is preferable to select the values of τf and τt at τf,min and τt,min, since these values tend to capture the worst-case scenario.

3.4.1 Beamforming Performance with Continuous Synchronization

When the PLLs of the secondary nodes receive the reference signal continuously without interruption, there are no relative frequency or time drifts, and furthermore the effects of timing errors are negligible (many orders of magnitude smaller than frequency errors). Thus, the signals can be modeled as in (3.28) with δfn = 0. The principal performance parameter is thus the range of carrier frequencies fc supported by the chosen VCOs for distributed beamforming. Generally, with continuous locking, even oscillators with moderate to poor phase noise can support good distributed beamforming performance up to millimeter-wave frequencies. The coherent gain was computed for the three architectures, multiple array sizes, and multiple fc values; one thousand Monte Carlo simulations were run, with carrier frequencies up to 50 GHz considered for architecture 1, up to 3 THz for architecture 2, and up to 1 THz for architecture 3.

Figure 3.14: Coherent gain for various carrier frequencies for arrays of sizes N = 2, 10, 20, and 100 with continuous locking for architecture 3 (using the low phase noise oscillator as the reference and the high phase noise oscillator as the PLL VCO) with the phase noise profile from Fig. 3.11c.

The results for architecture 1 are shown in Fig. 3.12. As can be seen, when two nodes are used, a coherent gain of at least 0.9 can be achieved for frequencies below 17 GHz; for the other array sizes, a coherent gain of at least 0.9 is achievable for frequencies below 12 GHz. The results for architecture 2 are shown in Fig. 3.13.
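As a sketch of the Monte Carlo evaluation described above, the following computes the normalized coherent gain Gc = |Σ exp(jφn)|²/N² for i.i.d. zero-mean Gaussian per-node phase errors. The Gaussian assumption is a simplification for illustration (the full simulations draw errors from the PLL phase noise profiles), and `coherent_gain` is my own helper name. For large N, Gc approaches exp(−σ²), so the Gc = 0.9 threshold (a mainbeam degradation of 10·log10(0.9) ≈ −0.46 dB) corresponds to an RMS phase error near 0.32 rad.

```python
import cmath, random

def coherent_gain(n_nodes, phase_rms_rad, trials=1000, seed=0):
    """Average normalized coherent gain Gc = |sum(e^{j*phi_n})|^2 / N^2 for
    i.i.d. zero-mean Gaussian per-node phase errors."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        s = sum(cmath.exp(1j * rng.gauss(0.0, phase_rms_rad))
                for _ in range(n_nodes))
        acc += abs(s) ** 2 / n_nodes ** 2
    return acc / trials

# Gc >= 0.9 corresponds to a mainbeam degradation of less than 0.5 dB.
for sigma in (0.1, 0.324, 0.6):  # RMS phase error, rad
    print(f"sigma = {sigma:5.3f} rad -> Gc ~ {coherent_gain(100, sigma):.3f}")
```

Sweeping σ as a function of carrier frequency (phase error scales with fc for a fixed time jitter) reproduces the qualitative shape of Figs. 3.12-3.14.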
For all the array sizes considered, coherent gains well above 0.9 are achieved even at frequencies above 3 THz. Such frequencies are far beyond what is practical for coherent distributed arrays, since many other sources of phase error would dominate; nevertheless, this shows that the phase noise of the selected oscillator will not be a limiting factor for coherent transmission even at very high carrier frequencies. The results for the third architecture are shown in Fig. 3.14.

Figure 3.15: Coherent gain for CW and BPSK signals with periodic synchronization, where the jitter and frequency drifts (CW and BPSK) and timing errors (BPSK) were considered for nodes equipped with VCO A. Array size N = 100 was selected.

When two nodes are used, a coherent gain of at least 0.9 can be achieved for frequencies below 252 GHz; for the other array sizes, a coherent gain of at least 0.9 is achievable for frequencies below 160 GHz. Clearly, continuous synchronization provides good distributed beamforming performance; if a low phase noise oscillator is used as the reference, good performance well beyond 100 GHz is attainable.

3.4.2 Beamforming Performance with Periodic Synchronization with Phase and Frequency Errors

Practically, continuous synchronization is challenging to obtain in distributed beamforming. Here we analyze the impacts of periodic synchronization on beamforming performance. Due to the inherent drift of the oscillators, periodic synchronization is necessary even with low phase noise oscillators.

Figure 3.16: Coherent gain for CW and BPSK signals with periodic synchronization, where the jitter and frequency drifts (CW and BPSK) and timing errors (BPSK) were considered for nodes equipped with VCO B. Array size N = 100 was selected.

This section analyzes the phase and frequency errors without considering the time
jitter and time drift, an approach supporting CW beamforming without modulation; the transmitted signals are thus modeled as in (3.28). The coherent gain was analyzed for update times T between 100 ms and 100 s. One thousand Monte Carlo simulations were generated for carrier frequencies of 1, 5, and 10 GHz, and the coherent gain was calculated for arrays with 100 elements (larger arrays generally yield more stringent requirements, as seen in Section 3.4.1). The coherent gain was calculated over the last 100 wavelengths before the next update time to evaluate the worst-case scenario, since the last few wavelengths represent the signal with the maximum drift before the next update. The phase noise profile for architectures 1 and 3 was obtained from the phase noise of VCO A, since the VCOs are free running between updates, and the phase noise in architecture 2 was obtained from the phase noise of VCO B. The results are shown in Fig. 3.15 for architectures 1 and 3, and in Fig. 3.16 for architecture 2.

Figure 3.17: Carrier frequencies supporting a coherent gain of 0.9 for periodic synchronization with array sizes N = 2 and 100, where the jitter, frequency drifts, and timing errors were considered for nodes equipped with either VCO A or B.

When the high phase noise oscillator is used in free-run mode or in the locked PLL, the jitter causes appreciable coherent gain degradation at higher frequencies, indicated by the fact that the curves never reach the ideal value of Gc = 1. Furthermore, update frequencies above 1 Hz are necessary to achieve reasonable coherent gain values. When the low phase noise oscillator is used as both the reference and the PLL VCO, the gain achieves the ideal value for relatively low update frequencies, extending below 1 Hz.
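A back-of-envelope check of these update rates can be made from the quoted 1 s Allan deviations. The sketch below assumes, as a rough model, that the fractional frequency error over an interval T stays near σy(1), so the accumulated phase error is about 2π·fc·σy·T, and takes the large-N threshold Gc = exp(−σ²) ≥ 0.9 from the Gaussian-error approximation; both the model and the `max_update_time` helper are illustrative simplifications, not the simulation actually used in this section.

```python
import math

GC_MIN = 0.9
sigma_phi_max = math.sqrt(-math.log(GC_MIN))  # ~0.32 rad for Gc ~ exp(-sigma^2)

def max_update_time(fc_hz, adev_1s):
    """Crude upper bound on the synchronization update interval T, assuming the
    fractional frequency error stays near its 1 s Allan deviation so the
    accumulated phase error is ~ 2*pi*fc*adev*T."""
    return sigma_phi_max / (2 * math.pi * fc_hz * adev_1s)

for name, adev in (("VCO A", 1e-10), ("VCO B", 2.3e-12)):
    for fc in (1e9, 10e9):
        t_max = max_update_time(fc, adev)
        print(f"{name}, fc = {fc/1e9:4.0f} GHz: T_max ~ {t_max:.2f} s")
```

For VCO A at 1 GHz this gives roughly half a second, consistent with the observation that update frequencies above 1 Hz are needed, while VCO B tolerates update intervals tens of times longer.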
3.4.3 Beamforming Performance with Periodic Synchronization with Phase, Frequency, and Timing Errors

In this section we consider modulated waveforms where timing is relevant, and perform the same analysis as in Section 3.4.2 with the addition of timing errors, thus using (3.32) for the signal model. The BPSK signal was transmitted at bit rates scaled to the transmitted carrier frequencies: for carrier frequencies of 1, 5, and 10 GHz, the bit rates were 100, 500, and 1000 Mbit/s, respectively. High bit rates were chosen to capture the effect of the timing errors in the worst-case scenarios. Results for N = 100 are shown in Figs. 3.15 and 3.16 for the three architectures, as explained in Section 3.4.2. It can be seen that the timing errors have a small impact on the coherent gain, even for very high modulation rates. This result is expected since the timing requirements for coherent operation are relative to the symbol rate, whereas the requirements for frequency and phase synchronization are relative to the carrier frequency. The degradation in coherent gain is more visible for VCO A, since it has a higher TDEV than VCO B. In Fig. 3.15, the initial coherent gain at 5 and 10 GHz drops for the BPSK signals in comparison to the CW signals; the decrease in coherent gain also begins at a slightly shorter update time for the BPSK transmission. Timing errors had a smaller effect in Fig. 3.16 due to the better stability of the oscillator. The maximum carrier frequencies that achieve a coherent gain of 0.9 when the jitter, frequency drifts, and timing errors are all considered are shown in Fig. 3.17 for VCO A (representing architectures 1 and 3) and VCO B (representing architecture 2) with N = 2 and 100, demonstrating that the low phase noise oscillator generally supports higher operating frequencies with longer update intervals.
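The scale separation between the two requirements can be made concrete with the most demanding case above (10 GHz carrier, 1 Gbit/s BPSK). The 10% fractions below are illustrative budgets, not thresholds taken from the analysis; the point is only the ratio between the two time scales.

```python
# Compare the time scales governing the two requirements for a 10 GHz carrier
# with 1 Gbit/s BPSK. The 10% fractions are illustrative assumptions.
fc = 10e9  # carrier frequency, Hz
rb = 1e9   # BPSK bit rate, bit/s

carrier_period = 1 / fc  # 100 ps
symbol_period = 1 / rb   # 1 ns

phase_budget_s = 0.1 * carrier_period   # phase sync: fraction of a carrier cycle
timing_budget_s = 0.1 * symbol_period   # time sync: fraction of a symbol

print(f"phase-alignment budget : {phase_budget_s * 1e12:.0f} ps")
print(f"timing-alignment budget: {timing_budget_s * 1e12:.0f} ps")
# Even at 1 Gbit/s the timing budget is 10x looser than the phase budget,
# and the gap widens at lower bit rates or higher carriers.
```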
CHAPTER 4

WIRELESS FREQUENCY SYNCHRONIZATION FOR CENTRALIZED SYSTEMS

The importance of wireless frequency synchronization for coherent distributed arrays was discussed in Chapter 1. Chapter 3 investigated the performance of distributed phased arrays that rely on PLLs for wireless frequency synchronization; it was shown that wireless frequency synchronization using CW signals produces the best outcome, as it prevents the frequencies of the oscillators on each node from drifting. Throughout this dissertation, wireless frequency synchronization was always achieved using CW synchronization signals, and the secondary nodes were frequency locked using the internal PLLs of the SDRs. In addition, the implemented frequency synchronization circuit can be used to lock the frequencies of multiple SDRs, since a node can be formed by more than one SDR.

4.1 Frequency Synchronization Circuit

In this work I focus on centralized systems, where the secondary nodes lock their electrical states to those of the primary node. For wireless frequency synchronization, every secondary node is equipped with an adjunct self-mixing circuit similar to the circuit in Fig. 4.1. Figure 4.1 shows the wireless frequency synchronization circuit that was used for the outdoor cooperative ranging experiment in Chapter 5; in the indoor experiments, a modified version of this circuit was used, in which one of the LNAs was removed to accommodate the increased received power. The block diagram of the self-mixing circuit is shown in Fig. 4.2. The self-mixing circuit receives a two-tone CW synchronization signal with 10 MHz tone separation from the primary node and demodulates the tone separation to obtain a 10 MHz signal, which is the reference frequency required to lock the internal PLLs of the Ettus USRP X310 SDRs. For the indoor experiments, the first two LNAs in Fig. 4.2 were chosen as Mini-Circuits ZX60-83LN12+.
In the outdoor experiment in Chapter 5, the second LNA was replaced with a Mini-Circuits ZRL-1150LN+. These LNAs are used to boost the input power and improve the SNR of the received signals. The ZX60-83LN12+ has an operational frequency range of 0.5 GHz to 8 GHz, a gain ranging from 18 dB to 22.1 dB, and a noise figure (NF) between 1.6 dB and 2.3 dB. The ZRL-1150LN+ has an operational frequency range of 0.65 GHz to 1.4 GHz, a gain ranging from 32 dB to 34 dB, and an NF ranging from 0.8 dB to 1.1 dB. Next, a bandpass cavity filter is used to suppress all unwanted frequencies. Two bandpass filters were used, depending on the band selected for wireless frequency synchronization: a Fairview Microwave FMFL000, which operates from 902 MHz to 928 MHz, and a Mini-Circuits ZVBP-4300+, which operates from 4.25 GHz to 4.35 GHz. A Mini-Circuits ZX10-2-852-S+ power splitter, which has an operational frequency range of 0.5 GHz to 8.5 GHz, was used to split the frequency synchronization signal and feed the two outputs to the mixer inputs. The RF input of the mixer is passed through a 10 dB attenuator to prevent the mixer from saturating and to suppress nonlinearities at the output of the mixer. Either a ZX05-73L+ or a ZX05-43-S+ (both from Mini-Circuits) was used as the mixer, with 6.2 dB and 6.1 dB conversion loss, respectively. The mixer mixes the two input tones together so that the 10 MHz difference signal can be extracted. A Mini-Circuits BLP-10.7+ lowpass filter, which has an 11 MHz cutoff frequency and 0.65 dB insertion loss at 10 MHz, was used to filter the harmonics and unwanted frequencies at the output of the mixer. Finally, two Mini-Circuits ZFL-500HLN+ LNAs were used to boost the output power to between a minimum of 0 dBm and 15 dBm, which is the desired power range for the REF IN input on the SDRs. These last two LNAs have an operational frequency range of 10 MHz to 500 MHz, a 19 dB gain, and a 3.8 dB NF.
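The front-end sensitivity of this chain can be estimated with the Friis cascade formula from the datasheet numbers quoted above. The sketch below uses mid-range values for the indoor configuration (two ZX60-83LN12+ stages); the 1 dB filter insertion loss and the use of the splitter's 3.4 dB nominal loss as its NF are my own assumptions for illustration.

```python
import math

def db(x):
    return 10 * math.log10(x)

def undb(x):
    return 10 ** (x / 10)

def cascade_nf(stages):
    """Friis formula: F = F1 + (F2-1)/G1 + (F3-1)/(G1*G2) + ...
    stages: list of (gain_dB, nf_dB); for a passive loss, NF = loss."""
    f_total, g_prod = 0.0, 1.0
    for i, (g_db, nf_db) in enumerate(stages):
        f = undb(nf_db)
        f_total += f if i == 0 else (f - 1) / g_prod
        g_prod *= undb(g_db)
    return db(f_total), db(g_prod)

# Indoor receive chain, mid-range values from the text; the filter and
# splitter figures are assumptions for illustration.
chain = [(20, 2.0),     # ZX60-83LN12+ (18-22.1 dB gain, 1.6-2.3 dB NF)
         (20, 2.0),     # second ZX60-83LN12+
         (-1, 1.0),     # bandpass cavity filter (assumed 1 dB loss)
         (-3.4, 3.4)]   # ZX10-2-852-S+ splitter (assumed 3.4 dB loss)
nf, gain = cascade_nf(chain)
print(f"cascade NF ~ {nf:.2f} dB, cascade gain ~ {gain:.1f} dB")
```

As expected with 20 dB of gain in front, the cascade NF is dominated by the first LNA and stays close to 2 dB, which is why the loss blocks are placed after the amplification.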
A comparison between the frequency of the self-mixing circuit output and the clock output of the primary SDR was obtained using the setup in Fig. 4.3. In this test, the circuit was simplified: it did not include an attenuator or a bandpass filter, and the last two LNAs were replaced with a generic 30 dB LNA. The carrier frequency was set to 2.5 GHz with two CW sidebands at ±5 MHz. The results are shown in Fig. 4.4. An additional test, depicted in Fig. 4.5, was also conducted. The aim of this experiment was to compare the 10 MHz reference frequency of the primary SDR to the internal oscillator frequencies of both a frequency locked and a frequency unlocked SDR.

Figure 4.1: The adjunct self-mixing circuit that was used by the secondary nodes to lock their frequencies to that of the primary node. This circuit receives two-tone signals with 10 MHz tone separation and outputs a 10 MHz signal that can be fed to the input of the PLL (REF IN). A splitter was used at the input to split the received signals into ranging and frequency synchronization signals. From [7] © 2021 IEEE.

Figure 4.2: Block diagram of the adjunct self-mixing circuit for wireless frequency synchronization (adapted from [8]).

The reference output frequencies were captured and compared, and the results are shown in Fig. 4.6. As can be seen, the 10 MHz reference output of the primary SDR (containing the reference oscillator) exactly matches the 10 MHz signal from the frequency locked SDR. As for the frequency unlocked SDR, the actual frequency of its 10 MHz signal is slightly shifted. Two extra lobes are present
in the spectrum for the frequency unlocked case; this is caused by the behavior of the SDR, which was trying to lock its frequency to any available 10 MHz reference input.

Figure 4.3: Block diagram of the experimental setup used for testing the spectrum at the output of the frequency locking circuit.

Figure 4.4: Power spectral density of both the reference clock and the output of the proposed frequency locking circuit. From [9] © 2021 IEEE.

Figure 4.5: Block diagram of the experimental setup used for testing the CW 10 MHz output signals from both frequency locked and frequency unlocked SDRs.

Figure 4.6: Power spectral density of the reference clock, the locked clock, and the unlocked one. From [9] © 2021 IEEE.

4.2 Output of the Frequency Synchronization Circuit

As shown in [8], the transmitted two-tone CW frequency synchronization signal from the primary node can be represented by

A [sin (2πf1,ref t) + sin (2πf2,ref t)],   (4.1)

where A is the signal amplitude and the frequencies f1,ref and f2,ref are given by

f1,ref = fc,ref − fref/2,   (4.2)
f2,ref = fc,ref + fref/2,   (4.3)

where fc,ref is the center frequency of the dual-tone signal and fref is the reference frequency, which for the Ettus USRP X310 SDRs is equal to 10 MHz. The normalized LO and RF inputs to the mixer can be given by

VRF = sin (2πf1,ref t + ϕ1) + sin (2πf2,ref t + ϕ2),   (4.4)
VLO = sin (2πf1,ref t + ϕ3) + sin (2πf2,ref t + ϕ4).   (4.5)

The phases ϕ1 and ϕ2 are determined by the distance separating the primary and secondary nodes.
Since the signal input to the LO port is obtained by splitting the received signal, the phases ϕ3 and ϕ4 are related to ϕ1 and ϕ2 by ϕ3 = ϕ1 + c1 and ϕ4 = ϕ2 + c2, where c1 and c2 are obtained from

c1 = 2πf1,ref (dLO − dRF) / c,   (4.6)
c2 = 2πf2,ref (dLO − dRF) / c,   (4.7)

where dLO and dRF represent the lengths of the cables connecting the splitter to the LO and RF inputs, respectively. The mixer output VIF contains the two input frequencies f1,ref and f2,ref, the sums and differences of the frequencies of the signals present at the LO and RF inputs [111], and other harmonics. The two tones of the input signal are selected with 10 MHz tone separation so that a 10 MHz component appears at the output of the mixer, among other frequencies. Once the output of the mixer is passed through the lowpass filter, the only remaining or dominant frequency should be the 10 MHz component. The output signal of the self-mixing circuit is represented by

VIF = (1/2) [cos (2π (f2,ref − f1,ref) t + ϕ5) + cos (2π (f2,ref − f1,ref) t + ϕ6)],   (4.8)

where

ϕ5 = ϕ2 − ϕ1 − c1,   (4.9)
ϕ6 = ϕ2 − ϕ1 + c2.   (4.10)

The output consists of two sinusoids, the frequency of each being the difference between the two input frequencies, where f2,ref − f1,ref = fref. Because there are two sinusoids, it is desired that their relative phases be adjusted so that the signals sum coherently to maximize the output power and sufficiently drive the PLL.

Figure 4.7: The effect of mismatch between dLO and dRF on the 10 MHz output from the mixer.

Coherent summation can be easily ensured by choosing dLO = dRF, in which case the phase constants ϕ5 and ϕ6 become equal. In practice, however, the mismatch between dLO and dRF needs to be very large in order to have a considerable effect on the output power of the mixer. The amplitude of the 10 MHz output versus the mismatch between dLO and dRF is shown in Fig. 4.7.
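The self-mixing operation in (4.4)-(4.8) can be sketched numerically: the received two-tone signal is multiplied by itself (an ideal multiplying mixer with dLO = dRF) and the spectrum of the product is probed with a direct DFT at a few candidate frequencies. The tone placement and sample rate below are illustrative values chosen so every frequency of interest falls on an exact DFT bin, not the experimental parameters.

```python
import cmath, math

f1, f2 = 895e6, 905e6   # two tones with 10 MHz separation (fc,ref = 900 MHz)
fs = 4.096e9            # sample rate, Hz (chosen so all tones hit exact bins)
n = 8192                # number of samples
dt = 1 / fs

# Received two-tone signal, split and fed to both mixer ports (dLO = dRF).
v = [math.sin(2 * math.pi * f1 * k * dt) + math.sin(2 * math.pi * f2 * k * dt)
     for k in range(n)]
vif = [x * x for x in v]  # ideal multiplying mixer: RF and LO are the same signal

def dft_mag(sig, f):
    """Normalized DFT magnitude of a real signal at a single frequency."""
    s = sum(x * cmath.exp(-2j * math.pi * f * k * dt) for k, x in enumerate(sig))
    return abs(s) / len(sig)

# The product contains DC, the difference tone f2 - f1, and terms near 2*fc,ref;
# the original tones themselves are absent.
for f in (10e6, f1, 2 * f1, f1 + f2):
    print(f"{f/1e6:8.1f} MHz : {dft_mag(vif, f):.3f}")
```

After the 11 MHz lowpass filter, only the difference tone at f2 − f1 = 10 MHz survives to drive the PLL, which is exactly the behavior described by (4.8).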
4.3 Impact of Doppler Frequency

When the secondary nodes lock their frequencies to that of the primary node, any relative motion between the primary and a secondary node results in a frequency shift at the output of the adjunct self-mixing circuit due to Doppler effects. When the relative velocity between the primary and a secondary node is ∆v = vt − vr, where vt is the velocity of the transmitter and vr is the velocity of the receiver, any frequency received at the secondary node is scaled as

f = fc,ref · c / (c − ∆v) ≈ fc,ref (1 + ∆v/c).   (4.11)

This approximation holds as long as ∆v ≪ c. The Doppler frequency shift can be computed using

∆f = fc,ref ∆v / c.   (4.12)

For the presented frequency synchronization approach, a dual-tone signal is transmitted, so each of the two transmitted frequencies f1,ref and f2,ref experiences a different Doppler shift. Using (4.12),

∆f1,ref = (fc,ref − fref/2) ∆v / c,   (4.13)
∆f2,ref = (fc,ref + fref/2) ∆v / c.   (4.14)

From (4.8), (4.13), and (4.14), the output of the frequency locking circuit is then

VIF = (1/2) cos{2π[(f2,ref + ∆f2,ref) − (f1,ref + ∆f1,ref)]t + ϕ5} + (1/2) cos{2π[(f2,ref + ∆f2,ref) − (f1,ref + ∆f1,ref)]t + ϕ6}.   (4.15)

The two sinusoids in the mixer output signal reside at a frequency fref + fd, where the relative Doppler shift fd is given by

fd = ∆f2,ref − ∆f1,ref = fref ∆v / c.   (4.16)

This result shows that the frequency shift at the circuit output is set only by the Doppler shift on the modulation frequency, which in our case is 10 MHz; the carrier used for wireless frequency synchronization has no effect on the observed Doppler shift. This result is intuitive, since the Doppler shift affects both tones almost equally, with only a slight difference depending on the tone separation. However, when the array is beamforming in the far field using a carrier fc, the Doppler shift observed at the 10 MHz reference input is scaled by the multipliers used to transform the 10 MHz reference to fc. The Doppler shift on the transmitted carrier can be computed from (4.12) by replacing fc,ref with fc. The Doppler shift for 10 MHz, 1 GHz, and 10 GHz signals is evaluated for multiple relative velocities between a primary and a secondary node in Fig. 4.8. As can be seen, the Doppler shift is larger at higher frequencies. However, if the frequency synchronization is performed between relatively static nodes, such as in a drone swarm, the relative velocities between the nodes are expected to be low, leading to low Doppler shifts at any given instant.

Figure 4.8: The observed Doppler shift for 10 MHz, 1 GHz, and 10 GHz signals in the case where the relative velocity between a primary and a secondary node is varied from 0 to 10 m/s.

4.4 Phase Change Due to Frequency Synchronization Signal Propagation

Coherent transmission of signals from a distributed array is only possible if time, frequency, and phase are synchronized. It is therefore important to analyze the phase shift imparted on every node by the wireless frequency synchronization process. Since the internal PLLs lock the frequency of the SDR to the reference frequency by comparing the oscillator phase to the phase of the incoming signal, the change in relative distance between the primary node and any secondary node needs to be monitored so that the phase shift caused by the wireless frequency synchronization process can be tracked. Changes in distance between the antennas performing the frequency synchronization operation lead to a phase shift that can be estimated using

∆ϕ = 2π∆dIN / λc,   (4.17)

where ∆dIN is the change in inter-node distance between the primary node and the secondary node of interest.
In the case where the two-tone CW frequency synchronization signal is transmitted, the phase shift at the output of the self-mixing circuit is obtained from

∆ϕref = −2π (f2,ref ∆dIN / c − f1,ref ∆dIN / c) = −2πfref ∆dIN / c.   (4.18)

To observe the resulting phase shift at the transmitted carrier of the secondary node, one must consider how the carrier is generated from the local oscillator. If the frequency of the local oscillator is 10 MHz, a carrier frequency of 100 MHz is generated by multiplying the 10 MHz by 10. The same multiplicative effect applies to the phase shift at the carrier, which can be computed from

∆ϕc1 = (fc / fref) ∆ϕref.   (4.19)

This result shows that the two frequencies f1,ref and f2,ref have no effect on the phase shift observed at fc; the phase shift depends only on the modulation frequency fref. Thus, knowing fc and ∆dIN is enough to estimate the phase shift at the carrier. To verify that the phase shift observed at a carrier follows (4.19), the experiment shown in Fig. 4.9 was conducted. Since Ettus USRP X310 SDRs were used, fref was set to 10 MHz (λref = 30 m). The two tones were transmitted at f1,ref = 3.995 GHz and f2,ref = 4.005 GHz, so that fc,ref = 4 GHz. The phase shift was observed at fc = 100 MHz (λc = 3 m). Using (4.19), the anticipated phase shift was calculated as ∆ϕc1 = −10∆ϕref. Throughout the experiment, the receiver of the secondary node was displaced by 3 m in total, and the phase shift was recorded in 2.5 cm increments using an oscilloscope (an initial phase calibration was performed). The results are shown in Fig. 4.10, where the measured phase shift for a given displacement tracked well with the expected phase shift.
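Both the Doppler scaling of Section 4.3 and the propagation phase shift above reduce to one-line evaluations. The sketch below checks (4.16) for a 10 m/s relative velocity and chains (4.18)-(4.19) with the displacement-experiment parameters (fref = 10 MHz, fc = 100 MHz); the helper names are my own.

```python
import math

C = 3e8  # m/s, speed of light as used in the text

def doppler(f_hz, dv_mps):
    """Doppler shift f * dv / c, the form of (4.12) and (4.16)."""
    return f_hz * dv_mps / C

def carrier_phase_shift(delta_d_m, f_ref=10e6, f_c=100e6):
    """Phase shift at the carrier due to a change delta_d in inter-node
    distance: (4.18) scaled by fc/fref as in (4.19)."""
    dphi_ref = -2 * math.pi * f_ref * delta_d_m / C  # (4.18)
    return (f_c / f_ref) * dphi_ref                  # (4.19)

dv = 10.0  # m/s relative velocity
print(f"Doppler on the 10 MHz tone separation: {doppler(10e6, dv):.2f} Hz")
print(f"Doppler on a 10 GHz carrier          : {doppler(10e9, dv):.1f} Hz")

# Experiment of Fig. 4.9/4.10: 3 m total displacement in 2.5 cm steps.
print(f"2.5 cm step: {math.degrees(carrier_phase_shift(0.025)):.2f} deg at fc")
print(f"3 m total  : {carrier_phase_shift(3.0) / (2 * math.pi):.2f} carrier cycles")
```

The 3 m displacement equals one wavelength at 100 MHz, so the carrier phase sweeps through exactly one full cycle, consistent with the trend in Fig. 4.10.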
Figure 4.9: Block diagram of the setup used to measure the phase shift produced by the frequency synchronization circuit when the antenna receiving the two-tone signal is displaced. From [8] © 2021 IEEE.

Figure 4.10: Experimental results showing the phase shifts between the primary and secondary node signals for multiple displacement distances. From [8] © 2021 IEEE.

CHAPTER 5

ADAPTIVE RANGING THAT SUPPORTS COHERENT BEAMFORMING OPERATIONS

This chapter focuses on ranging using two-tone waveforms, which offer near-optimal ranging accuracy. The CRLB is evaluated for two-tone waveforms and for other multi-tone waveforms that build on the high accuracy of two-tone waveforms while offering added functionality. An adaptive ranging framework based on a proportional-integral-derivative (PID) controller and the high accuracy two-tone ranging waveform is demonstrated in this chapter. Adaptive ranging algorithms can offer high ranging accuracy with a low spectral footprint; in particular, they can maintain the ranging accuracy needed to support accurate phase alignment and coherent beamforming, as explained in Chapter 2, while employing ranging waveforms with the minimum necessary bandwidth. The adaptive ranging approach was tested in indoor and outdoor settings.

5.1 CRLB of Range Estimates

Ranging radars operate by transmitting either CW or pulsed signals. For CW ranging signals, the phase or frequency difference between the transmitted and received signals is used to determine the time delay. For pulsed ranging signals, which are the focus of this work, the time of arrival of the received pulse is generally used to estimate the time delay.
The target range is determined from the time-delay estimate using

R = cτ / 2,   (5.1)

where c = 3 × 10^8 m/s is the speed of light and τ is the time-delay estimate. In practice, time-delay estimates are never exact, and the accuracy of a time-delay estimator can be evaluated using the RMSE. In the case where the target distance is unknown, which is the case in practice, the standard deviation is evaluated instead of the RMSE. Whether the RMSE, standard deviation, or variance is calculated, each can be compared to the CRLB, which represents the best achievable accuracy for an unbiased estimator. The CRLB states that the minimum achievable variance of an unbiased estimator is inversely proportional to the Fisher information [112], which quantifies the amount of information that can be extracted from a signal. However, the CRLB only defines the minimum achievable variance for a certain ranging waveform at a given SNR; it does not provide a method that achieves this theoretical bound. Radar signals can be expressed as [75]

sR(x) = αs(x; u) + w(x),   (5.2)

where x in this section represents the time at which the signal was received, α is a complex parameter that represents the amplitude and phase fluctuations due to the environment, beam pattern, multipath, etc., s(x; u) is a function of the unobservable parameters u that need to be estimated, which include the time delay, the velocity of the target, etc., and w(x) is additive white Gaussian noise (AWGN). Since this chapter focuses on time-delay estimation (ranging), (5.2) can be expressed as

sR(t) = αg(t − τ) + w(t).   (5.3)

In [113], it was shown that for a deterministic α and unknown u, the CRLB can be determined from

σ²(û − u) ≥ (N0 / (2|α|²)) ( ∫ |∂s(x; u)/∂u|² dx − (1/E) |∫ s(x; u) ∂s(x; u)∗/∂u dx|² )⁻¹,   (5.4)

where the signal energy is E = ∫ |s(x; u)|² dx and N0 is the noise power per unit bandwidth.
The first integral in (5.4) is often referred to as the mean-square bandwidth, or the second moment of the energy spectrum, and is denoted ζf². It is evaluated using Plancherel's theorem as follows:

ζf² = ∫ |∂s(x; u)/∂u|² dx
    = ∫ |∂g(t − τ)/∂τ|² dt
    = ∫ |∂[G(f)e^{−j2πfτ}]/∂τ|² df
    = ∫ (2πf)² |G(f)|² df,   (5.5)

where G(f) is the Fourier transform of g(t). Similarly, the second integral, which represents the first moment of the energy spectrum, or the mean frequency, is evaluated as

μf² = |∫ s(x; u) ∂s(x; u)∗/∂u dx|²
    = |∫ (∂g(t − τ)/∂τ)∗ g(t − τ) dt|²
    = |∫ 2πf |G(f)|² df|².   (5.6)

The term μf² can be ignored when the transmitted/received signal has a symmetric spectrum, i.e., |G(f)| = |G(−f)|. The lower bound for the variance of the time-delay estimates is obtained from (5.4), which can be expressed in terms of ζf² and μf² as

var(τ̂ − τ) ≥ N0 / (2|α|² (ζf² − μf²/E)).   (5.7)

This lower bound on the time-delay estimates translates to a lower bound on the range estimates through (5.1), where

var(R̂ − R) = (c²/4) var(τ̂ − τ).   (5.8)

Furthermore, the post-processing SNR can be represented in (5.7) by SNR = 2|α|²E/N0; note that in some formulations in the literature, α is omitted, leading to SNR = 2E/N0. Thus, (5.7) can be expressed as

var(τ̂ − τ) ≥ 1 / [SNR (ζf²/E − μf²/E²)].   (5.9)

5.1.1 CRLB of Typical Two-Tone Waveforms

The CRLB of a ranging waveform is lowest when the energy of the transmitted/received signal is concentrated at the edges of the spectrum. The baseband signal that minimizes the variance of the time-delay estimates can be expressed as

s1(t) = e^{−j2πδf t} + e^{j2πδf t},   (5.10)

where δf represents one-half the separation of the two tones.

Figure 5.1: The ideal spectrum of the two-tone waveform expressed in (5.10).

The ideal spectrum of s1(t) is shown in Fig. 5.1.
When 2δf is equal to the maximum available bandwidth, (5.10) minimizes the CRLB and consequently maximizes the ranging accuracy. The CRLB of s₁(t) is obtained from the following:

\[ S_1(f) = \int_{-\infty}^{\infty} s_1(t) e^{-j2\pi f t}\, dt = \int_{-\infty}^{\infty} \left( e^{-j2\pi\delta_f t} + e^{j2\pi\delta_f t} \right) e^{-j2\pi f t}\, dt = \delta(f + \delta_f) + \delta(f - \delta_f). \quad (5.11) \]

\[ |S_1(f)|^2 = |\delta(f+\delta_f) + \delta(f-\delta_f)|^2 = \delta^2(f+\delta_f) + \delta^2(f-\delta_f) + 2\delta(f+\delta_f)\delta(f-\delta_f) = \delta(f+\delta_f) + \delta(f-\delta_f). \quad (5.12) \]

\[ \int_{-\infty}^{\infty} (2\pi f)^2 |S_1(f)|^2\, df = 4\pi^2 \int_{-\infty}^{\infty} f^2 \left[\delta(f+\delta_f) + \delta(f-\delta_f)\right] df = 4\pi^2(-\delta_f)^2 + 4\pi^2(\delta_f)^2 = 8\pi^2 \delta_f^2. \quad (5.13) \]

\[ \int_{-\infty}^{\infty} |S_1(f)|^2\, df = \int_{-\infty}^{\infty} \left[\delta(f+\delta_f) + \delta(f-\delta_f)\right] df = 1 + 1 = 2. \quad (5.14) \]

\[ \frac{\zeta_f^2}{E} = \frac{\int_{-\infty}^{\infty} (2\pi f)^2 |S_1(f)|^2\, df}{\int_{-\infty}^{\infty} |S_1(f)|^2\, df} = \frac{8\pi^2 \delta_f^2}{2} = (2\pi\delta_f)^2. \quad (5.15) \]

\[ \sigma_{t,1}^2 = \mathrm{var}(\hat{\tau} - \tau) = \frac{1}{\frac{\zeta_f^2}{E}\, \mathrm{SNR}} = \frac{1}{(2\pi\delta_f)^2\, \mathrm{SNR}}. \quad (5.16) \]

Figure 5.2: The minimum achievable standard deviation/RMSE for two-tone CW ranging signals for various values of E/N0. From [10] © 2021 IEEE.

In (5.16), μ_f² = 0 since the waveform has a symmetric spectrum. Further, (5.16) gives the CRLB for two-tone CW signals; in practice, however, ranging waveforms are evaluated as pulsed signals with limited time duration, especially in the case of multi-tone signals. Thus, the spectrum of the two-tone signal is spread around the two tones instead of consisting of two perfect impulse functions at −δf and δf. For pulses that are long relative to the carrier period, this effect is minimal. This change in spectrum shifts the CRLB to slightly higher values, which can generally be obtained using numerical methods. Nevertheless, since the CRLB serves as an indicator of the maximum achievable accuracy, and since the CW form achieves the lowest CRLB, (5.16) can be used to represent any two-tone waveform whose two tones are equidistant from the center of the spectrum. The minimum achievable standard deviation/RMSE of two-tone waveforms is depicted in Fig. 5.2 for multiple E/N0 values.
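To give a feel for the numbers, the short sketch below evaluates the two-tone bound (5.16) and translates it to a range standard deviation via (5.8). The 200 MHz total bandwidth and 30 dB post-processing SNR are illustrative values chosen for this example, not taken from Fig. 5.2.

```python
import math

c = 3e8                  # speed of light (m/s)
delta_f = 100e6          # one-half tone separation; total bandwidth 2*delta_f = 200 MHz
snr = 10 ** (30 / 10)    # post-processing SNR of 30 dB (illustrative)

# Two-tone CRLB on the time-delay variance, eq. (5.16)
var_tau = 1.0 / ((2 * math.pi * delta_f) ** 2 * snr)

# Range standard deviation via eq. (5.8): var(R) = (c^2 / 4) var(tau)
sigma_r = (c / 2) * math.sqrt(var_tau)
print(f"{sigma_r * 1e3:.2f} mm")   # -> 7.55 mm
```

The bound scales inversely with both the tone separation and the square root of the SNR, which is the behavior visible in Fig. 5.2.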
Figure 5.3: The ideal spectrum of the two-tone waveform expressed in (5.17).

Figure 5.4: The ideal spectrum of the four-tone waveform expressed in (5.18).

Figure 5.5: The ideal spectrum of the three-tone waveform expressed in (5.19).

Although two-tone signals have the lowest CRLB among all waveforms, they are rarely used in the radar literature due to their ambiguous nature; in many cases, it is not possible to detect multiple targets using two-tone ranging waveforms. Nevertheless, two-tone waveforms provide an opportunity to achieve the ranging accuracies required for this work, especially since open-loop distributed phased arrays require very accurate range estimates. The ambiguity of two-tone ranging waveforms is resolved by equipping the targeted nodes with active repeaters. The active repeaters amplify the received signals and retransmit them back to their source, allowing the radar to observe the target as a point source. Such an approach improves the ranging accuracy, due to the amplification applied in this process, and suppresses reflections from neighboring targets.

5.1.2 CRLB of Multi-Tone Waveforms

In addition to accurate range estimation, in many cases it is beneficial to use waveforms that provide added functionality. Given that wireless phase and frequency synchronization are of interest for coherent distributed arrays, one can use waveforms that perform phase and frequency synchronization jointly. As will be seen in the next chapter, wireless frequency synchronization is achieved in this work using signals with 10 MHz tone separation, and these tones can be modulated, for instance, onto the ranging waveforms.
To study the implications of modifying or adding extra modulations to a preexisting ranging waveform, the CRLB of the three ranging waveforms

\[ s_2(t) = e^{j2\pi(\Delta_f - \delta_f)t} + e^{j2\pi\delta_f t}, \quad (5.17) \]

\[ s_3(t) = e^{j2\pi(-\Delta_f - \delta_f)t} + e^{-j2\pi\delta_f t} + e^{j2\pi(\Delta_f - \delta_f)t} + e^{j2\pi\delta_f t}, \quad (5.18) \]

and

\[ s_4(t) = e^{j2\pi(-\Delta_f - \delta_f)t} + e^{j2\pi(\Delta_f - \delta_f)t} + e^{j2\pi\delta_f t}, \quad (5.19) \]

is evaluated below. Although the spectra of these three waveforms are not perfectly symmetric, μ_f² is approximated as 0, which is valid for widely separated tones [114]. The spectra of s₂(t), s₃(t), and s₄(t) are shown in Figs. 5.3, 5.4, and 5.5. σ²_{t,2} is computed as follows:

\[ S_2(f) = \int_{-\infty}^{\infty} s_2(t) e^{-j2\pi f t}\, dt = \int_{-\infty}^{\infty} \left( e^{j2\pi(\Delta_f - \delta_f)t} + e^{j2\pi\delta_f t} \right) e^{-j2\pi f t}\, dt = \delta(f - (\Delta_f - \delta_f)) + \delta(f - \delta_f). \quad (5.20) \]

\[ |S_2(f)|^2 = |\delta(f - (\Delta_f - \delta_f)) + \delta(f - \delta_f)|^2 = \delta(f - (\Delta_f - \delta_f)) + \delta(f - \delta_f), \quad (5.21) \]

since the squared impulses reduce to impulses and the cross term vanishes.

\[ \int_{-\infty}^{\infty} (2\pi f)^2 |S_2(f)|^2\, df = 4\pi^2(\Delta_f - \delta_f)^2 + 4\pi^2\delta_f^2 = 4\pi^2\left(\Delta_f^2 + 2\delta_f^2 - 2\Delta_f\delta_f\right). \quad (5.22) \]

\[ \int_{-\infty}^{\infty} |S_2(f)|^2\, df = 1 + 1 = 2. \quad (5.23) \]

\[ \frac{\zeta_f^2}{E} = \frac{4\pi^2\left(\Delta_f^2 + 2\delta_f^2 - 2\Delta_f\delta_f\right)}{2} = 2\pi^2\left(\Delta_f^2 + 2\delta_f^2 - 2\Delta_f\delta_f\right). \quad (5.24) \]

\[ \sigma_{t,2}^2 = \frac{1}{2\pi^2\left(\Delta_f^2 + 2\delta_f^2 - 2\Delta_f\delta_f\right)\mathrm{SNR}}. \quad (5.25) \]

Similarly, σ²_{t,3} is computed as follows:

\[ S_3(f) = \int_{-\infty}^{\infty} s_3(t) e^{-j2\pi f t}\, dt = \delta(f + \delta_f + \Delta_f) + \delta(f + \delta_f) + \delta(f + \delta_f - \Delta_f) + \delta(f - \delta_f). \quad (5.26) \]
80 |S3 (f )|2 = |δ (f − (−δf − ∆f )) + δ (f + δf ) + δ (f − (−δf + ∆f )) + δ (f − δf )|2 = δ 2 (f − (−δf − ∆f )) + δ 2 (f + δf ) + δ 2 (f − (−δf + ∆f )) + δ 2 (f − δf ) + 2δ (f − (−δf − ∆f )) δ (f + δf ) + 2δ (f − (−δf − ∆f )) δ (f − (−δf + ∆f )) + 2δ (f − (−δf − ∆f )) δ (f − δf ) + 2δ (f + δf ) δ (f − (−δf + ∆f )) + 2δ (f + δf ) δ (f − δf ) + 2δ (f − (−δf + ∆f )) δ (f − δf ) = δ (f − (−δf − ∆f )) + δ (f + δf ) + δ (f − (−δf + ∆f )) + δ (f − δf ) . (5.27) Z∞ (2πf )2 |S3 (f )|2 df −∞ Z∞ 2 = 4π f 2 [δ (f − (−δf − ∆f )) + δ (f + δf ) + δ (f − (−δf + ∆f )) + δ (f − δf )] df −∞ = 4π 2 (−δf − ∆f )2 + (−δf )2 + (−δf + ∆f )2 + (δf )2   = 4π 2 4δf 2 + 2∆f 2 .  (5.28) Z∞ Z∞ 2 |S3 (f )| df = δ (f − (−δf − ∆f )) + δ (f + δf ) + δ (f − (−δf + ∆f )) + δ (f − δf ) df −∞ −∞ = 1 + 1 + 1 + 1 = 4. (5.29) R∞ (2πf )2 |S3 (f )|2 df ζf2 −∞ 4π 2 (4δf 2 + 2∆f 2 ) = R∞ = = 4π 2 δf 2 + 2π 2 ∆f 2 . (5.30) E 2 4 |S3 (f )| df −∞ 2 1 1 σt,3 = ζf2 = . (5.31) SNR (4π 2 δf 2 + 2π 2 ∆f 2 ) SNR E 81 2 Finally, σt,4 is obtained from the following Z∞ S4 (f ) = s4 (t)e−j2πf t dt −∞ Z∞ (5.32) ej2π(−δf −∆f )t + ej2π(−δf +∆f )t + ej2πδf t e−j2πf t dt  = −∞ = δ (f − (−δf − ∆f )) + δ (f − (−δf + ∆f )) + δ (f − δf ) . |S4 (f )|2 = |δ (f − (−δf − ∆f )) + δ (f − (−δf + ∆f )) + δ (f − δf )|2 = δ 2 (f − (−δf − ∆f )) + δ 2 (f − (−δf + ∆f )) + δ 2 (f − δf ) + 2δ (f − (−δf − ∆f )) δ (f − (−δf + ∆f )) + 2δ (f − (−δf − ∆f )) δ (f − δf ) 2δ (f − (−δf + ∆f )) δ (f − δf ) = δ (f − (−δf − ∆f )) + δ (f − (−δf + ∆f )) + δ (f − δf ) . (5.33) Z∞ (2πf )2 |S4 (f )|2 df −∞ Z∞ (5.34) = 4π 2 f 2 [δ (f − (−δf − ∆f )) + δ (f − (−δf + ∆f )) + δ (f − δf )] df −∞ = 4π 2 (−δf − ∆f )2 + (−δf + ∆f )2 + (δf )2 = 4π 2 3δf 2 + 2∆f 2 .    Z∞ Z∞ |S4 (f )|2 df = δ (f − (−δf − ∆f )) + δ (f − (−δf + ∆f )) + δ (f − δf ) df −∞ −∞ (5.35) = 1 + 1 + 1 = 3. R∞ (2πf )2 |S4 (f )|2 df ζf2 −∞ 4π 2 (3δf 2 + 2∆f 2 ) 8 = R∞ = = 4π 2 δf 2 + π 2 ∆f 2 . (5.36) E 2 3 3 |S4 (f )| df −∞ 82 2 1 1 σt,4 = ζf2 = 8 2  . 
\[ \sigma_{t,4}^2 = \frac{1}{\left(4\pi^2\delta_f^2 + \frac{8}{3}\pi^2\Delta_f^2\right)\mathrm{SNR}}. \quad (5.37) \]

A comparison between σ²_{t,1}, σ²_{t,2}, σ²_{t,3}, and σ²_{t,4} is shown using a numerical example. In this example, the total signal bandwidth is set to 200 MHz (thus δf varies depending on the waveform) and ∆f = 5 MHz. The obtained CRLB values are: σ²_{t,1} = 2.53 × 10⁻¹⁸/SNR s², σ²_{t,2} = 2.53 × 10⁻¹⁸/SNR s², σ²_{t,3} = 2.66 × 10⁻¹⁸/SNR s², and σ²_{t,4} = 2.66 × 10⁻¹⁸/SNR s². σ²_{t,1} and σ²_{t,2} are equal because the same total bandwidth was considered for both signals; if a fixed value of 100 MHz for δf were used instead, σ²_{t,2} would equal 2.66 × 10⁻¹⁸/SNR s². These results show that the highly accurate two-tone waveform can be amended with one or more additional tones at only a minor cost in ranging accuracy, as long as δf is roughly an order of magnitude larger than ∆f. The additional tones can be used in future work for wireless frequency synchronization or time synchronization. Moreover, if a tone separation of 10 MHz is needed for wireless frequency synchronization and this separation achieves the desired ranging accuracy, a two-tone waveform can be used for simultaneous ranging and wireless frequency synchronization without any added modulation.

5.2 High Accuracy Ranging

Accurate time-delay estimation (or ranging) is one of the main pillars of achieving coherent transmission in distributed arrays operating without feedback from the target or other external systems. As discussed in Chapter 2, the coherent gain is directly affected by the accuracy of the time-delay estimates: the coherent gain can be increased by improving the localization estimates. Accurate estimates can be achieved using two-tone waveforms, since they offer optimal ranging accuracy. Two-tone CW signals were introduced in (5.10) in the continuous-time domain; however, range estimation is performed on the digital form of the received signals.
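As a numerical sanity check before moving to implementation details, the multi-tone CRLB comparison quoted above can be reproduced directly from the closed-form ζ_f²/E expressions (5.15), (5.24), (5.30), and (5.36). The script below is a sketch written for this discussion, using the stated 200 MHz total bandwidth and ∆f = 5 MHz:

```python
import math

BW = 200e6        # total signal bandwidth (Hz)
Df = 5e6          # tone offset Delta-f (Hz)
pi2 = math.pi ** 2

# delta-f chosen per waveform so that each occupies the full 200 MHz:
df1 = BW / 2             # s1: tones at +/- delta-f
df2 = (BW + Df) / 2      # s2: tones at (Delta-f - delta-f) and +delta-f
df34 = (BW - Df) / 2     # s3, s4: extreme tones at -(delta-f + Delta-f) and +delta-f

z1 = (2 * math.pi * df1) ** 2                        # zeta_f^2 / E, eq. (5.15)
z2 = 2 * pi2 * (Df**2 + 2 * df2**2 - 2 * Df * df2)   # eq. (5.24)
z3 = 4 * pi2 * df34**2 + 2 * pi2 * Df**2             # eq. (5.30)
z4 = 4 * pi2 * df34**2 + (8 / 3) * pi2 * Df**2       # eq. (5.36)

# sigma_t^2 * SNR = 1 / (zeta_f^2 / E), in s^2
for name, z in [("s1", z1), ("s2", z2), ("s3", z3), ("s4", z4)]:
    print(f"{name}: {1 / z:.3e} / SNR  s^2")
```

Running this reproduces the quoted 2.53 × 10⁻¹⁸/SNR s² for s₁ and s₂ and 2.66 × 10⁻¹⁸/SNR s² for s₃ and s₄.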
In the case of two-tone signals, the transmitted pulsed signal can be represented by

\[ s[n] = \left(u[n] - u[n - n_T + 1]\right) \left( e^{j2\pi f_1 \frac{n}{f_s}} + e^{j2\pi f_2 \frac{n}{f_s}} \right), \quad (5.38) \]

and the received pulsed signal by

\[ r[n] = \left(u[n - \lceil \tau f_s \rceil] - u[n - \lfloor \tau f_s \rfloor - n_T + 1]\right) \left( e^{j2\pi f_1 \left(\frac{n}{f_s} - \tau\right)} + e^{j2\pi f_2 \left(\frac{n}{f_s} - \tau\right)} \right) + \mathrm{AWGN}[n], \quad (5.39) \]

where u[·] is the unit step function; n_T is the sample marking the end of the pulse, with pulse width T = n_T/f_s; and f₁ and f₂ are the two tones of the transmitted and received signals, defined as f₁ = −δf and f₂ = +δf in the baseband representation, or as f₁ = f₀ − δf and f₂ = f₀ + δf when the center frequency f₀ is considered. The sampling rate is f_s, τ is the time delay, AWGN[n] is the additive white Gaussian noise, ⌈·⌉ maps its argument to the least integer greater than or equal to its value, and ⌊·⌋ maps its argument to the greatest integer less than or equal to its value.

The accuracy of the range estimates can be further improved by utilizing matched filtering, or pulse compression. The matched filter is the optimal linear filter that maximizes the SNR of the received pulses by correlating the received signals with the transmitted signals. This approach maximizes the SNR by providing an additional processing gain given by the time-bandwidth product T·BW_r, where BW_r is the receiver bandwidth, determined from the sampling frequency. The matched filter output is obtained from

\[ y[n] = r[n] \circledast s^*[-n] = \mathrm{IFT}\{ R[k]\, S^*[k] \}, \quad (5.40) \]

where R[k] and S[k] are the discrete Fourier transforms of r[n] and s[n], ⊛ denotes convolution, the superscript * denotes complex conjugation, and IFT{·} is the inverse Fourier transform. Time-delay estimation is performed by determining the peak of the matched filter output. However, determining the peak is a challenging task, given that it must be estimated from a discrete matched filter output.
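A minimal sketch of (5.38)–(5.40): the snippet below generates a sampled two-tone pulse, delays it by an integer number of samples (fractional delays and noise are omitted for brevity), and recovers the delay from the peak of the FFT-based matched filter output. All parameter values are illustrative.

```python
import numpy as np

fs = 10e6                       # sampling rate (Hz)
f1, f2 = 20e3, 4.5e6            # the two tones (baseband)
nT = 200                        # pulse length in samples, T = nT / fs
N = 1024                        # record length (samples)
delay = 37                      # applied delay in samples (integer, for simplicity)

n = np.arange(N)
window = (n < nT).astype(float)          # u[n] - u[n - nT + 1] of eq. (5.38)
s = window * (np.exp(2j*np.pi*f1*n/fs) + np.exp(2j*np.pi*f2*n/fs))
r = np.roll(s, delay)                    # noiseless delayed copy of the pulse

# Matched filter via the DFT, eq. (5.40): y[n] = IFT{ R[k] S*[k] }
y = np.fft.ifft(np.fft.fft(r) * np.conj(np.fft.fft(s)))
print(int(np.argmax(np.abs(y))))         # -> 37, the applied delay
```

With an integer-sample delay and no noise, the peak lands exactly on the applied lag; the fractional-delay case is precisely where the peak-estimation difficulties discussed next arise.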
Figure 5.6 shows the same received two-tone pulse under two different discretization instances (the noise/interference was set to zero in this example), along with the matched filter output for the two scenarios.

Figure 5.6: (a) The in-phase part of a two-tone signal with f1 = 53 kHz, f2 = 1.03 MHz, and a sampling frequency of fs = 2.5 MHz, showing discretizations equivalent to two different delays. (b) Output of the matched filter for both scenarios, showing the error due to discretization.

The matched filter output obtained from the second set of discretization points (which represents the usual case) illustrates the complications faced when estimating the peak of the matched filter output. Depending on the delay of the received signal, which produces a different version of the discretized received signal and matched filter output, the main lobe of the discretized filter output is truncated differently. It can also be seen from Fig. 5.6b that the matched filter output of two-tone signals is highly ambiguous. In cases similar to the second set of discretization points, this can lead to misidentification of the peak, since the sidelobes can exhibit higher peaks due to discretization.

This problem was resolved in [115], where a disambiguation pulse was transmitted between every two ranging pulses in order to detect the main peak of the matched filter output. For cases where ranging is done between nodes with small relative motion, such as drones flying in formation, the disambiguation pulses can be transmitted between multiple ranging pulses to speed up the ranging process. The discrete form of the transmitted disambiguation pulse can be represented by

\[ s_{\mathrm{dis}}[n] = \left( u[n] - u\!\left[n - \frac{4f_s}{f_2} + 1\right] \right) e^{j2\pi \frac{f_2}{4 f_s} n}. \quad (5.41) \]

As can be seen, the pulse width is equal to one cycle of the transmitted frequency f₂/4. Since only one cycle of this frequency is transmitted, the matched filter output of the disambiguation pulse consists of only one lobe.
Thus, the output of the matched filter is unambiguous, and by estimating its peak it is possible to determine the main lobe of the matched filter output of the ranging pulse. However, given that the pulse width of the disambiguation pulse is considerably smaller than that of the ranging pulse, the time-delay estimate from the disambiguation pulse is very coarse; hence the ranging pulse is still needed to determine the target range with high accuracy. The frequency f₂/4 was selected for the disambiguation pulse so that the transmitted pulse is long enough to produce a desirable time-bandwidth product (even lower frequencies can be selected, depending on the frequency f₂). Consequently, the peak of the matched filter output of the disambiguation pulse will have multiple discretization points on the output lobe — at least 8 points in the case where f₂ is close to f_s. This is needed since the output lobe of the disambiguation pulse is much wider than the lobes of the two-tone waveform. Further analysis of the disambiguation pulse is available in [115]. An example of the disambiguation of a two-tone pulse is shown in Fig. 5.7, where a two-tone waveform with frequencies f1 = 20 kHz and f2 = 7.52 MHz was used along with a disambiguation pulse with a 1.875 MHz frequency. Since one cycle of the disambiguation pulse was transmitted, its pulse width was 533 ns.

5.2.1 Minimization of Ranging Biases

Generally, the time-delay estimation variance cannot reach the lower bound, due to signal discretization and hardware-induced errors, among other factors. Furthermore, estimation algorithms impact the ranging performance and can lead to increased variance and bias. Two-/multi-tone waveforms are prone to bias errors, especially when the tones are close to zero or the Nyquist rate.
Figure 5.7: (a) Two-tone ranging pulse with f1 = 20 kHz and f2 = 7.52 MHz, along with a disambiguation pulse with fd = 1.875 MHz. These signals were sampled at 25 MSps. (b) Output of the matched filter for both the ranging pulse and the disambiguation pulse. From [10].

As mentioned earlier, time-delay estimation (ranging) is done by estimating the peak of the main lobe of the matched filter output, and since the estimation process deals with discretized signals, the peak almost always resides between two discretization points. The actual peak can be estimated using multiple methods, of which interpolation is one of the most common. Although interpolation generally provides accurate estimates, it tends to be slow; hence, in [116] nonlinear least-squares sinc fitting (NL-LS) was used for ranging. NL-LS is a common peak estimator, and it has been used in the literature with hyperbolic and Gaussian functions [117–119]. Although these common peak estimators perform very well for most ranging waveforms, it is shown in this dissertation that in many cases they produce biased estimates. Thus, a new peak estimator, matched filter least squares (MF-LS), was developed to minimize the bias.

5.2.1.1 Peak Estimators

Estimation Using Interpolation. As the received signal is captured, it is discretized with time interval Ts = 1/fs. Thus, if no peak estimator is used to detect the actual peak, a coarse grid with Ts spacing is formed, and the time-delay estimate will have extremely low accuracy; if the sampling rate is set to 20 MSps, for example, the time delay of the received pulse lies on a coarse grid of 50 ns, yielding a range grid of 7.5 m. Clearly, without extra processing, it is not possible to use these estimates to synchronize an open-loop coherent distributed array. Refinement can be done by interpolating L samples preceding and following the initial time-delay estimate.
The initial time-delay estimate is obtained from the coarse estimate of the matched filter peak npk by evaluating τc = npk Ts. Since the matched filter output has a sinusoidal behavior, a spline interpolation with K points between every two samples is desirable; the output of this spline function contains 2LK + 1 samples. Once interpolated, a refined time shift relative to the initial time-delay estimate is calculated as τr = k0 Ts/K, where k0 is the peak of the interpolated waveform with 2LK + 1 samples. By adding τr to τc, the range is obtained from R = c(τc + τr)/2. The interpolated waveform will have a large number of samples, especially for large L. If no disambiguation technique is used to detect the main lobe, L has to be large in order to have a higher chance of detecting the main lobe. However, if a disambiguation pulse is used, L can be set as low as 1, although in certain cases a higher value of L can be desirable to minimize the bias.

Estimation Using Sinc Nonlinear Least Squares (NL-LS). As demonstrated in [116], NL-LS can be implemented efficiently for LFM and pseudo-noise coded ranging waveforms. The main advantage of this approach is that it is a fast algorithm with much lower complexity than interpolation, mainly because only three points are required to estimate the refined peak of a given waveform.

Algorithm 5.1: NL-LS
  Input: f1, f2, fs, y. Output: τe, the estimated time delay.
  λ0 = y1; λ1 ≈ 0; λ2 = 2(f2 − f1)/fs; iter = 5 (preferably iter ≥ 4)
  for m = 1 to iter do
      J = [∂f(x; λm)/∂λ0, ∂f(x; λm)/∂λ1, ∂f(x; λm)/∂λ2]
      ∆y = y − f(x, λm)
      ∆λ = (J^T J)^{-1} J^T ∆y
      λm+1 = λm + ∆λ
  end
  λ̃ = λiter+1; ñpk = npk + λ̃1; τe = ñpk/fs
  return τe
The main objective of NL-LS is to estimate the peak by minimizing the residual error

\[ L = \sum_{i=0}^{2} \left( y_i - f(x_i, \lambda) \right)^2, \quad (5.42) \]

where

\[ f(x; \lambda) = \lambda_0\, \mathrm{sinc}((x - \lambda_1)\lambda_2), \quad (5.43) \]

\[ y:\; y_i = \left| d[n_{pk} - 1 + i] \right|, \quad i \in \{0, 1, 2\}, \quad (5.44) \]

\[ x = [-1, 0, 1]^T, \quad (5.45) \]

\[ \lambda = [\lambda_0, \lambda_1, \lambda_2]^T, \quad (5.46) \]

where y_i are the samples of y[n] around the estimated peak, f(x_i, λ) is the predicted matched-filter main lobe at the samples x_i around the peak, x contains the elements x_i representing the locations of the matched filter output samples relative to its peak, d[npk − 1 + i] is the amplitude of the matched filter output around the peak npk, and λ is the vector of coefficients. Gauss–Newton optimization, which uses the following gradients with respect to the model parameters, is used to solve (5.42):

\[ \frac{\partial f(x;\lambda)}{\partial \lambda_0} = \mathrm{sinc}(\lambda_2(x - \lambda_1)), \quad (5.47) \]

\[ \frac{\partial f(x;\lambda)}{\partial \lambda_1} = \frac{\lambda_0\left[\mathrm{sinc}(\lambda_2(x - \lambda_1)) - \cos(\pi\lambda_2(x - \lambda_1))\right]}{x - \lambda_1}, \quad (5.48) \]

\[ \frac{\partial f(x;\lambda)}{\partial \lambda_2} = \frac{\lambda_0\left[\cos(\pi\lambda_2(x - \lambda_1)) - \mathrm{sinc}(\lambda_2(x - \lambda_1))\right]}{\lambda_2}. \quad (5.49) \]

The NL-LS peak estimation steps are shown in Algorithm 5.1; more details are given in [116]. In general, this approach is fast and gives reliable estimates in terms of accuracy and bias for many waveforms, but as shown later in this section, it suffers from significant bias for two-tone waveforms with bandwidths that are small or near the Nyquist frequency.

Algorithm 5.2: MF-LS
  Input: nT, T, f1, f2, fs, s[n], y[n], H. Output: τe.
  Obtain the initial peak estimate npk
  for h = 1 to H do
      dth = npk Ts − 0.5/(f2 − f1) + (h/H)/(f2 − f1)
      th[n] = −dth : 1/fs : 2(nT − 1)/fs − dth
      r̃[n] = (u[th] − u[th − T]) (e^{j2πf1 th} + e^{j2πf2 th})
      ỹ[n] = r̃[n] ⊛ s*[−n]
      RSS[h] = Σ_{n=0}^{N} (y[n]/max(y) − ỹ[n]/max(ỹ))²
  end
  Using the interpolation algorithm, estimate hmin, the refined value that minimizes RSS
  Local time delay: dtloc = −0.5/(f2 − f1) + (hmin/H)/(f2 − f1)
  Estimate the time of arrival using: τe = npk Ts + dtloc
  return τe
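The Gauss–Newton iteration of Algorithm 5.1 with the gradients (5.47)–(5.49) can be sketched as follows. This is a simplified stand-alone implementation written for this discussion: it is exercised on synthetic noiseless sinc samples rather than a real matched filter output, and the initial λ2 is taken as known (in the algorithm it is initialized from the waveform as 2(f2 − f1)/fs).

```python
import numpy as np

def nlls_sinc_peak(y, lam2_init, iters=10):
    """Refine a peak by fitting f(x; lam) = lam0 * sinc(lam2 * (x - lam1))
    to the three samples y taken at x = [-1, 0, 1], as in Algorithm 5.1.
    Returns lam1, the peak offset in samples from the middle point."""
    x = np.array([-1.0, 0.0, 1.0])
    lam = np.array([y[1], 1e-3, lam2_init])       # initial guess (lam1 ~ 0)
    for _ in range(iters):
        u = lam[2] * (x - lam[1])
        f = lam[0] * np.sinc(u)                   # np.sinc(u) = sin(pi*u)/(pi*u)
        # Jacobian columns from eqs. (5.47)-(5.49)
        d0 = np.sinc(u)
        d1 = lam[0] * (np.sinc(u) - np.cos(np.pi * u)) / (x - lam[1])
        d2 = lam[0] * (np.cos(np.pi * u) - np.sinc(u)) / lam[2]
        J = np.column_stack([d0, d1, d2])
        dlam, *_ = np.linalg.lstsq(J, y - f, rcond=None)  # Gauss-Newton step
        lam = lam + dlam
    return lam[1]

# Synthetic check: noiseless sinc samples whose true peak sits 0.25 samples
# to the right of the middle sample (lam2 assumed known for this example).
true_off, lam2 = 0.25, 0.6
y = np.sinc(lam2 * (np.array([-1.0, 0.0, 1.0]) - true_off))
print(nlls_sinc_peak(y, lam2))   # ~0.25
```

Because the three-sample fit is exactly determined, the noiseless iteration converges to the true offset to machine precision; with noisy samples the same update yields the least-squares fit.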
Estimation Using Matched Filter Least Squares (MF-LS). MF-LS was designed to fit any ranging waveform. The algorithm works by comparing the computed matched filter output to a set of H potential responses, where each considered response corresponds to a certain shift of the discretization points. The logic behind this method can be visualized in Fig. 5.6: depending on when the pulse is received relative to the discretization points, the matched filter output has a varying response. By comparing the obtained response to a set of candidate responses, it is possible to determine the time delay. This method works as long as the transfer function that transforms the transmitted signal into the received signal can be determined. Once the obtained matched filter output is compared to the H potential candidates, the normalized residual sum of squares (RSS) is used to assess every potential response using

\[ \mathrm{RSS}[h] = \sum_{n=0}^{N} \left( \frac{y[n]}{\max(y)} - \frac{\tilde{y}[n]}{\max(\tilde{y})} \right)^2, \quad (5.50) \]

where N + 1 is the total number of samples of a given waveform, ỹ[n] = r̃[n] ⊛ s*[−n] is the predicted value of y[n], and r̃[n] is the predicted discretized version of the received signal for each predicted time shift dth (as shown in Fig. 5.6a). The values of dth are selected such that a linear (equally sampled) vector of candidate time delays around the coarse matched-filter estimate npk is considered. By evaluating (5.50) for all H potential responses, a curve similar to the ones shown in Fig. 5.8 is obtained and used to determine the time delay by the method of least squares. For example, with H = 50 potential matched filter outputs, as in Fig. 5.8, the refined estimate of the local time delay dtloc is obtained using interpolation. Afterwards, the estimated time delay of the pulse is obtained as τe = npk Ts + dtloc. The steps of the MF-LS time-delay estimator are shown in Algorithm 5.2.
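The core of Algorithm 5.2 — comparing the measured matched filter output against candidate responses on a fine delay grid and picking the RSS minimizer — can be sketched as below. This is an illustrative simplification written for this discussion: the candidate grid simply spans ±2 samples around the coarse peak (rather than the ±0.5/(f2 − f1) window of the algorithm), the channel is noiseless, and the final interpolation over RSS[h] is replaced by a plain arg-min. All parameter values are invented for the example.

```python
import numpy as np

fs = 10e6                       # sample rate (Hz)
f1, f2 = 20e3, 1.03e6           # the two tones (values as in Fig. 5.6)
T = 10e-6                       # pulse width (s)
N = 512                         # record length (samples)
t = np.arange(N) / fs

def pulse(tau):
    """Sampled two-tone pulse delayed by tau seconds (fractional delays allowed)."""
    w = ((t >= tau) & (t < tau + T)).astype(float)
    return w * (np.exp(2j*np.pi*f1*(t - tau)) + np.exp(2j*np.pi*f2*(t - tau)))

s = pulse(0.0)                  # reference (transmitted) pulse

def matched(x):
    """Magnitude of the circular matched filter output, as in (5.40)."""
    return np.abs(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(s))))

tau_true = 7.37 / fs            # true delay: a non-integer number of samples
y = matched(pulse(tau_true))

npk = int(np.argmax(y))         # coarse (integer-sample) peak estimate
H = 800                         # number of candidate responses
cands = npk / fs + np.linspace(-2.0, 2.0, H) / fs
rss = []
for dt in cands:                # RSS of normalized outputs, eq. (5.50)
    yt = matched(pulse(dt))
    rss.append(np.sum((y / y.max() - yt / yt.max()) ** 2))
tau_est = cands[int(np.argmin(rss))]
print(tau_est * fs)             # ~7.37 samples
```

Even though the coarse peak can only resolve the delay to the nearest sample, the RSS comparison recovers the fractional part, because each candidate's discretized matched filter output has a distinct shape.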
Larger H generally leads to better results with lower bias, since more potential outcomes are evaluated. As will be shown below, this algorithm reduces the biases in two-tone ranging, as it evaluates multiple outcomes with varying relative timing between the discretization points and the received ranging pulses. However, MF-LS has a much higher computational complexity than both interpolation and NL-LS. Nevertheless, if biases are observed in the estimates for a given ranging waveform, using a peak estimator with a higher computational cost to obtain much more accurate results is a desirable option.

5.2.1.2 Comparison Between the Peak Estimators

The three presented peak estimators were compared in simulation and experiment for ranging with two-tone waveforms. The experimental setup is shown in Fig. 5.9. Simulations were carried out using MATLAB, and the measurements were conducted using an Ettus Universal Software Radio Peripheral (USRP) X310 SDR with a UBX-160 daughterboard, which was controlled using LabVIEW 2018. Instead of using antennas for ranging, cables were connected between the transmitter and receiver in order to have a controlled environment in which the ground truth could be extracted accurately. The time delay was controlled by adding or removing 3.048 m coaxial cables.

Figure 5.8: RSS of the normalized matched filter outputs for three time-delay cases for a ranging waveform with f1 = 20 kHz, f2 = 4.5 MHz, fs = 10 MHz, and T = 99.8 µs. The three selected time-delay cases reside within the same two discretization points. Every expected matched filter output h corresponds to a predicted time shift dth.

Simulation and experimental results are shown in Figs. 5.10, 5.11, and 5.12. In the simulations and experiments, f1 = 20 kHz and fs = 10 MHz were used, and f2 was varied from 60 kHz to 4.94 MHz. The pulse width was set to 99.8 µs, and the carrier frequency was 2.8 GHz. The results were obtained by averaging 100 estimates.
The errors observed for each algorithm in Figs. 5.10, 5.11, and 5.12 are shown clearly in Fig. 5.13. As can be seen, the highest bias was obtained with the interpolation approach, regardless of the bandwidth of the received waveform. NL-LS exhibited lower biases; however, once the bandwidth of the pulse increased, its bias grew considerably. The MF-LS approach showed much lower bias values, aside from a small number of special cases: only bandwidths that were very small or very close to the Nyquist frequency (5 MHz) showed considerable biases.

Figure 5.9: Experimental SDR setup. Extra cables were added to modify the received signal time delay.

A clear comparison of the RMSE of each method's estimates is given in Table 5.1. Two bandwidth intervals were compared: the first contains all the considered bandwidths, from 40 kHz to 4.92 MHz, whereas the second contains the bandwidths from 2 MHz to 4.5 MHz, representing a more relevant range for ranging signals. The values in Table 5.1 demonstrate that the MF-LS algorithm reduces errors by nearly an order of magnitude for 2 MHz to 4.5 MHz bandwidths compared to the other considered algorithms. Although interpolation and NL-LS showed higher biases, they can still be used for two-tone ranging, especially if the total considered displacement distance is small compared to the rate of change of the biases (a few centimeters, for example); in that case the initial bias can be calibrated out.

The standard deviation for every waveform and algorithm, along with the CRLB, is shown in Fig. 5.14. The CRLB curve is common to all the peak estimators, as it depends only on the ranging waveform and the SNR. Every 100 samples, the standard deviation was calculated for the same cable length but for varying bandwidths.
The CRLB increases every 100 samples because the SNR drops for longer cable lengths. It can be seen that the standard deviations of all the estimators were similar. The increased standard deviation of MF-LS compared to the other algorithms in the high-bandwidth regions is explained by the biases observed for interpolation and NL-LS, which caused their standard deviations to drop below the CRLB — a metric that applies only to unbiased estimators.

Figure 5.10: Simulated and experimental results comparing the actual distances traveled by the signals and their estimates. The surface shows the simulated results, and the red dots show the experimental results of interpolation. From [11] © 2021 IEEE.

Figure 5.11: Simulated and experimental results comparing the actual distances traveled by the signals and their estimates. The surface shows the simulated results, and the red dots show the experimental results of NL-LS. From [11] © 2021 IEEE.

Figure 5.12: Simulated and experimental results comparing the actual distances traveled by the signals and their estimates. The surface shows the simulated results, and the red dots show the experimental results of MF-LS. From [11] © 2021 IEEE.

Table 5.1: Ranging RMSE for the three ranging algorithms.

  Bandwidth            RMSE (interpolation)   RMSE (NL-LS)   RMSE (MF-LS)
  40 kHz to 4.92 MHz   0.688 m                0.482 m        0.407 m
  2 MHz to 4.5 MHz     0.78 m                 0.421 m        0.056 m

5.2.2 Scalability of Two-Tone Waveforms

Two-tone ranging waveforms offer optimal performance; however, they are not scalable in nature. In [120], the two-tone stepped-frequency waveform (TTSFW), a modified version of the two-tone waveform that combines accuracy with scalability, was introduced. TTSFW is formed by multiple two-tone encoding pulses, where each pulse has a predetermined frequency step. By varying the frequencies of the two tones and including multiple pulses, TTSFW offers the desired scalability.
TTSFW is expressed as

\[ s_{\mathrm{TTSFW}}(t) = \frac{1}{N_p} \sum_{n=0}^{N_p-1} \mathrm{rect}\!\left(\frac{t - nT_r}{T}\right) \left( e^{j2\pi f_1 t} + e^{j2\pi f_2 t} \right) e^{j2\pi n \delta f_p t}, \quad (5.51) \]

where Np is the number of encoding pulses, Tr is the nonzero portion of the pulse duty cycle, and δfp is the step frequency, expressed as

\[ \delta f_p = \frac{\mathrm{BW}}{2N_p - 1}, \quad (5.52) \]

where BW is the total waveform bandwidth, not to be confused with the bandwidth of each separate encoding pulse. The first tone is usually set to a low value, and the second tone is obtained from f2 = f1 + ∆fp, where ∆fp = Np δfp. A total of Np encoding pulses allows TTSFW to service Np! connections. Time-domain and time-frequency representations of the TTSFW are shown in Figs. 5.15 and 5.16, respectively.

Figure 5.13: (a) The errors from Figs. 5.10, 5.11, and 5.12 for the various peak estimation algorithms for different cable lengths and bandwidths (average of 100 estimates). (b) Time delay based on the cable length (dotted orange line) and signal bandwidth (solid blue line) per sample.

Figure 5.14: The standard deviation for each ranging method along with the CRLB. The SNR for the first 100 samples was 32 dB and then decreased by around 2.9 dB for every added cable (every 100 samples), reaching an SNR of 14.6 dB. For some samples, the standard deviation of both interpolation and NL-LS is smaller than the CRLB; this is due to the bias in the estimates. From [11] © 2021 IEEE.

The CRLB of TTSFW is computed from

\[ \mathrm{var}(\hat{\tau} - \tau) \geq \frac{1}{\frac{\zeta_f^2}{E}\, \mathrm{SNR}}, \quad (5.53) \]

assuming that the waveform has a symmetric spectrum. ζf²/E is computed from

\[ \frac{\zeta_f^2}{E} = \frac{(2\pi \mathrm{BW})^2}{2 - \frac{1}{N_p}} \left( \pi^2 + \frac{\mathrm{BW}}{N_p \left(4N_p^2 + 4N_p + 1\right)} \sum_{n=0}^{N_p-1} n^2 \right). \quad (5.54) \]

In addition to its scalability, this waveform reduces the ambiguities, as observed in Fig. 5.17. Further details about TTSFW are available in [120].

Figure 5.15: Time domain of a TTSFW with Np = 5 and 50% duty cycle. From [7] © 2021 IEEE.

Figure 5.16: Time-frequency spectrum of a TTSFW with Np = 5 and 50% duty cycle. From [7] © 2021 IEEE.
5.3 Adaptive Ranging

Accurate time-delay estimation is needed for open-loop coherent arrays; nevertheless, the requirements differ depending on the application. If the nodes are moving in one dimension and frequency synchronization is not achieved wirelessly, it was shown that an average standard deviation of λc/15 for the range estimates is sufficient to achieve a coherent gain of 0.9 with high probability.

Figure 5.17: Matched filter output for TTSFW with Np = 4 compared to the matched filter output of pulsed two-tone waveforms (PTTW). From [7] © 2021 IEEE.

However, different metrics are needed when frequency synchronization is achieved wirelessly, which is the usual case. In such architectures, a ranging standard deviation of at least λc/27 is desirable when a coherent gain of 0.9 is targeted. Further, much more accurate estimates are needed when the nodes are moving in two dimensions, and the metrics depend on the employed localization approach and on the spread of the nodes. Thus, the desired time-delay estimation (ranging) accuracy is constantly changing, and the parameters of the ranging waveform also need to change to overcome variations in the SNR. For a given pulse width, one can select the highest possible bandwidth to maximize the time-delay estimation accuracy, ensuring that the requirements for accurate phase alignment are met. Nevertheless, using a very high bandwidth introduces interference to other communications or sensing systems operating in neighboring bands and leads to an overly crowded radio spectrum.

Generally, resource allocation is always a concern in any system, especially RF systems; resources can be in the form of power [121, 122], time [123], and bandwidth [10, 12, 124, 125], among others. For instance, power is an issue when there is a constraint on the power budget, limiting the localization antennas and the transmitted power [121].
There are also time constraints on the ranging pulse width: a ranging rate of at least 200 Hz is desired for nodes with strong vibration profiles, such as transmitters on drones or helicopters [123]. The spectral footprint, or radio spectrum crowding, has been addressed differently depending on the scenario. For instance, cognitive radio networks transmitting OFDMA signals maintained the same performance in [124, 125] while decreasing the used bandwidth at the expense of higher transmitted power. On the other hand, some systems focus on maximizing the bandwidth of the transmitted signals while minimizing the interference [126, 127]. Other cognitive systems, once greatly affected by interference, relocate their signals to other desirable bands [128, 129].

This section focuses on minimizing the bandwidth of the ranging signals to reduce the interference that can be observed in neighboring bands. The bandwidth is controlled such that the coherent gain, if impacted, degrades gracefully. A framework ensuring that a desired ranging accuracy is obtained while minimizing the spectral footprint is developed. Such an approach allows the adaptive ranging system to adapt to changing weather conditions, achieving the desired ranging accuracy while maintaining minimal bandwidth.

5.3.1 Using the Standard Deviation of Multiple Range Estimates as Feedback

5.3.1.1 Adaptive Ranging Framework

This section focuses on adaptive ranging approaches that implement a perception/action framework, as in [130, 131], to adjust the bandwidth of the transmitted waveforms, enabling the adaptive ranging system to overcome weather changes while minimizing the spectral footprint. This framework is generally used in the literature for clutter rejection (e.g., [132, 133]) and for target recognition and tracking in challenging conditions (e.g., [134, 135]); nevertheless, it has not previously been used for cooperative inter-node ranging.
As mentioned earlier, (5.16) allows us to determine the minimum achievable variance of an unbiased time-delay estimator. Thus, knowing the SNR and waveform parameters is enough to determine the CRLB. However, in practice it is not possible to attain the CRLB due to spurious signals, quantization errors, and hardware-induced nonlinearities. Also, the SNR is constantly changing in practice due to the array dynamics, interference, and antenna gain, among others.

Figure 5.18: Adaptive inter-node ranging architecture that relies on the standard deviation of the range estimates. From [10] © 2021 IEEE.

All these changes to the channel make it impossible to select a constant bandwidth for the ranging waveform that both minimizes the spectral footprint and achieves the desired accuracies. This section focuses on two-tone signals due to the benefits mentioned earlier in this chapter. To overcome these dynamic changes, the tone separation is modified continuously to prevent any considerable drop in accuracy while minimizing the allocated bandwidth. It is important to note that even though two-tone waveforms are spectrally sparse in nature, and other signals can be transmitted between the two tones, this section considers the effects on systems that cannot perform frequency hopping. In that case, consuming larger bandwidths leads to overlapping frequencies and consequently a reduction in performance, especially if the affected signals are communications signals, since they tend to have lower power than sensing signals. As depicted in Fig. 5.18, the adaptive ranging approach is based on a perception/action framework.
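The perception/control/action cycle of Fig. 5.18 can be sketched as a simple loop. This is a minimal illustration, not the dissertation's implementation: the function names, the PI step callback, and the noise model (range spread proportional to x = 1/(2δf), mirroring (5.16)) are all assumptions for the sketch.

```python
import numpy as np

def adaptive_ranging_cycle(measure_batch, pi_step, sigma_ref, x0,
                           n_intervals, x_min, x_max):
    """One perception -> control -> action pass per processing interval.

    measure_batch(x): perception; returns a batch of range estimates whose
                      spread grows with x = 1/(2*delta_f)
    pi_step(error):   control; returns the increment to apply to x
    """
    x, history = x0, []
    for _ in range(n_intervals):
        sigma = float(np.std(measure_batch(x)))   # perception: batch std
        # control + action: update the tone-separation parameter, clipped
        x = min(max(x + pi_step(sigma - sigma_ref), x_min), x_max)
        history.append((sigma, x))
    return history
```

With a proportional-only step, the loop settles at the waveform parameter whose measured standard deviation matches the reference.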
The transmitter (TX) and receiver (RX) are placed on the node that is estimating the range to its target; the node can be designated as either primary or secondary. The signal returns constitute the reflected pulses from the target (primary or secondary). Based on the framework introduced in Chapters 1 and 2, it can be assumed that this adaptive ranging system is placed on a secondary node and that the targeted node is the primary. The target can be represented by an active repeater, or by a corner reflector, which represents the case where no amplification is applied to the transmitted ranging pulses. First, the radar transmits a ranging pulse with a desired initial bandwidth. The reflected pulses from the target are then received. The received returns can have the same carrier as the transmitted pulses, or another carrier in the case where a repeater is used. These returns (one or multiple) are then processed in the perception step, where properties such as the standard deviation are extracted. Range estimation is also performed in the perception step and is used at a later stage for phase alignment among the nodes in the array. Afterwards, the extracted information is fed to a control step that determines the desired waveform modifications. These modifications are carried to the action step, where the transmitted waveform is adjusted accordingly. Finally, a new set of ranging pulses with the modified characteristics is transmitted. This cycle runs continuously to achieve the desired ranging performance with minimal bandwidth. The three adaptive ranging processes operate as follows:

Perception Process The perception process focuses on extracting the information of interest from the received signals.
This information consists of both the range of the target and the metrics needed for decision making in the control process; for the approach proposed in this section, the needed metric is the standard deviation of the range estimates. The standard deviation is calculated using

$$ \sigma_R = \sqrt{\frac{\sum_i (R_i - \bar{R})^2}{I}}, \quad (5.55) $$

where R_i is the i-th range of a batch of I range estimates, and R̄ is the mean of the range estimates in that batch. In this section, a processing interval refers to the process of capturing, determining, and processing I range estimates, then evaluating both the control and action processes.

The ranging is achieved as described in Section 5.2 using two-tone waveforms with varying bandwidth. Interpolation was used for peak estimation. Although in many cases the biases obtained from interpolation are beyond the acceptable error range, this is not an impediment, since the same adaptive ranging approach can be implemented by modifying either the transmitted waveform or the peak estimator.

Control Process In this step, the calculated standard deviation is compared to a desired reference value, and the difference is fed to a controller that in turn modifies the bandwidth of the transmitted waveform in order to approach the desired ranging accuracy. In this work, a closed-loop Proportional-Integral (PI) controller, derived from the PID controller, is used. This widely used controller has a relatively simple architecture and is commonly applied in robotics. PID controllers also have their share in sensing systems, where they have been used for multiple applications such as imaging [136] and radar missile guidance [137, 138]. A PID controller outputs a correction signal based on the observed error (the difference between the controller input and the reference); the correction is a combination of proportional, integral, and derivative actions.
The general discrete form of a PID controller is given by

$$ x[n] = x[n-1] + K_p \left[ \left( 1 + \frac{\Delta t}{T_i} + \frac{T_d}{\Delta t} \right) e[n] - \left( 1 + \frac{2T_d}{\Delta t} \right) e[n-1] + \frac{T_d}{\Delta t} e[n-2] \right] \quad (5.56) $$

where x[n] = 1/(2δf) is the output of the controller, which in this work is inversely proportional to the frequency separation of the tones in the pulsed two-tone waveform. This choice was made since the standard deviation of the range measurements is a function of 1/(2δf), as shown in (5.16). The error term e[n] represents the difference between the measured time-delay standard deviation and the reference value. The proportional gain is represented by K_p, Δt = t_n − t_{n−1} is the time interval, T_i is the integration time, and T_d is the derivative time. A simplified representation of the controller is shown in Fig. 5.19.

Figure 5.19: A simplified representation of the used PID controller. In this figure, y[n] represents the system output, which is the standard deviation of the time-delay estimates of multiple consecutive ranging pulses.

The proportional action in the PID controller is tuned by the proportional gain K_p, which maps the input error to a desired output in a linear fashion. An instantaneously large error leads to an instantaneously strong action on the output, and vice versa. The integral action, which is controlled by the integration time T_i, removes any steady-state error that would be observed if only a Proportional (P) controller were used. As can be inferred, the integral action is cumulative: the longer the error lingers, the stronger the integral action becomes. Thus, for small T_i values (i.e., strong integral action) the system will exhibit overshoots, especially if the observed error tends to last for a long duration. Finally, the derivative action is linked to the derivative time T_d. The derivative action monitors and damps the integral action to reduce the overshoots.
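The velocity-form update of (5.56) keeps only the output and the last two errors as state, which makes it compact to implement. The sketch below is illustrative (the class name and interface are assumptions, not the dissertation's code).

```python
class DiscretePID:
    """Velocity-form discrete PID of (5.56): the output x[n] is updated
    incrementally from the current and two previous errors."""

    def __init__(self, kp, ti, td, dt, x0=0.0):
        self.kp, self.ti, self.td, self.dt = kp, ti, td, dt
        self.x = x0       # x[n-1], the previous output
        self.e1 = 0.0     # e[n-1]
        self.e2 = 0.0     # e[n-2]

    def update(self, e):
        kp, ti, td, dt = self.kp, self.ti, self.td, self.dt
        self.x += kp * ((1 + dt / ti + td / dt) * e
                        - (1 + 2 * td / dt) * self.e1
                        + (td / dt) * self.e2)
        self.e2, self.e1 = self.e1, e
        return self.x
```

Setting td = 0 reduces the update to the PI form of (5.57).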
The derivative action counteracts any abrupt action from the other two parts of the controller, ensuring smooth operation. Nevertheless, in unstable environments the derivative action tends to add to the error instead of minimizing it. This behavior can drive the system to instability, and in many cases, such as in this dissertation, the derivative action can be disregarded.

Every part of the controller has a specific action, and all parts are important in different situations. The proportional action is always required for an appropriate response. Similarly, the integral action is needed to remove any steady-state error. However, the derivative action can be disregarded in this work to ensure stability in harsh environments, such as rain. The derivative action is removed by simply setting T_d to zero in (5.56). The PI controller can then be expressed as

$$ x[n] = x[n-1] + K_p \left[ \left( 1 + \frac{\Delta t}{T_i} \right) e[n] - e[n-1] \right]. \quad (5.57) $$

As explained, PID controllers have three parameters (K_p, T_i, and T_d) that determine the behavior of the controller. Determining the correct values for these parameters is crucial for smooth and stable operation. Tuning PID controllers can be done through various methods, as explained in [139]; these methods are generally classified into three groups:

• Tuning using a plant model: in this case, an accurate representative model of the plant being controlled is needed. The model is usually expressed in terms of one or multiple transfer functions. The controller parameters are chosen by examining the closed-loop poles.

• Tuning using an approximate model: prior to selecting the closed-loop behavior, a model describing the plant is approximated. There are multiple techniques for doing so, such as estimating a first-order model with delay. The model could also be inferred by collecting pairs of inputs and outputs and then estimating it using a desired system identification technique.
However, an inaccurate representation of the model is common for such approaches, since in many cases not all of the system behaviors are described in the collected input/output data.

• Online tuning: this method is based on tuning the controller parameters online without relying on models. Prior to tuning, there should be some knowledge of the behavior of the plant and its limits.

For the presented application, the ranging system cannot be described using accurate transfer functions, which makes online tuning methods a logical candidate. The Ziegler-Nichols PID tuning method [67] is commonly used for online tuning. During the tuning phase, only the proportional gain K_p is modified, while both T_i and T_d are set to zero throughout the tuning process. K_p is first set to zero; afterwards, small increments are added until it reaches a value K_u that makes the system critically stable. During this operation, the output should show oscillations with constant frequency and magnitude. At this point, the oscillation period T_u is obtained and the controller gains are selected. For the case of PI controllers, the weights are selected using

$$ K_p = 0.45 K_u, \quad (5.58) $$
$$ T_i = \frac{T_u}{1.2}, \quad (5.59) $$
$$ T_d = 0. \quad (5.60) $$

Figure 5.20: Block diagram representing the hardware used for the indoor adaptive ranging experiment. From [12] © 2021 IEEE.

Action Process In this last step, the controller output, which dictates the new desired tone separation through x[n] = 1/(2δf), is used to design the new ranging waveform. The new tone f_2 is calculated, and the new ranging waveform is generated in real time for continuous operation of the radar.
One SDR is sufficient for this task since it has the capability to generate the new waveform, transmit it, receive it back, and process it, all in real time. The block diagram of the hardware used is shown in Fig. 5.20.

Figure 5.21: Experimental setup for adaptive ranging system measurements.

5.3.1.2 Adaptive Indoor Ranging Experiment

Experimental results were gathered using a USRP X310 SDR with two UBX 160 daughterboards. The carrier frequency was set to 4.2 GHz. The transmitting and receiving daughterboards were connected to two horn antennas with an operating range of 3.85 GHz - 5.85 GHz and a gain of 15 dBi each. The maximum allowed tone separation was set to 3 MHz, with a sampling frequency of 10 MHz and a pulse width of 1.2 ms. The pulse repetition interval (PRI) was set to 2 ms, which is long enough for most applications. The transmitted pulses were reflected using a corner reflector placed 4.5 m away from the antennas. The preprocessing SNR (prior to the matched filtering) and standard deviation were calculated in windows of 40 pulses. Fig. 5.21 shows the experimental setup.

As described earlier, the Ziegler-Nichols tuning method was applied, and Fig. 5.22 shows the ranging system response to the gain K_u. In this tuning process, the reference standard deviation of the time estimates was set to 7.5 × 10⁻³ m; this value was selected since it is close to the maximum achievable accuracy, in order to allow the PI controller to have an appropriate response. The sampling frequency was set to 10 MHz, and the maximum tone separation was limited by the sampling rate.

© 2021 IEEE. Section 5.3.1.2 is reprinted with minor modifications, with permission, from Section V of S. R. Mghabghab and J. A. Nanzer, "Adaptive Internode Ranging for Coherent Distributed Antenna Arrays," IEEE Transactions on Aerospace and Electronic Systems, vol. 56, no. 6, pp. 4689-4697, 2020.

Figure 5.22: Ranging system behavior to the proportional gain K_u.
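The Ziegler-Nichols PI gain selection of (5.58)-(5.60) is a one-line computation; a sketch is given below (the function name is illustrative), checked against the K_u and T_u values reported for this experiment.

```python
def zn_pi_gains(ku, tu):
    """Ziegler-Nichols PI tuning, eqs. (5.58)-(5.60):
    Kp = 0.45*Ku, Ti = Tu/1.2, Td = 0."""
    return 0.45 * ku, tu / 1.2, 0.0
```

For K_u = 50000 and T_u = 6.125 processing intervals, this yields K_p = 22500 and T_i ≈ 5.1 processing intervals, matching the values used in the experiment.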
The critically stable response was obtained for K_p = K_u = 50000, with an oscillation period of T_u = 6.125 processing intervals. A processing interval is equal to the duration of the 40 received pulses (1.2 ms each) plus the required processing time; the total processing interval was equal to 2 s. This latency was due in part to data transfers between the SDR and the host computer running the adaptive processor, and was not optimized for this work. Once K_u and T_u were obtained, the PI controller gains were set to K_p = 22500 and T_i = 5.1 processing intervals. If e[n] were a function of the ranging standard deviation instead of the standard deviation of the time-delay estimates, different values for K_u and T_u would have been obtained.

Two tests were conducted to evaluate the adaptive ranging approach, in which the standard deviation of the range estimates was evaluated under various SNR conditions. In the first test, the adaptively controlled waveform was compared to two static waveforms with tone separations of 5 kHz and 3 MHz (the minimum and maximum allowed tone separations), as shown in Fig. 5.23. At first, the SNR of the received pulses was around 50 dB; afterwards, the transmitted power was reduced every 50 processing intervals. The SNR was estimated by first measuring the noise power when no pulses were transmitted, then measuring the received pulse power, and finally calculating the ratio. For the adaptive ranging system, it is expected that the tone separation will increase in response to the decreasing SNR. At first, with high SNR, the tone separation is close to 5 kHz; as the received power decreases, the bandwidth increases until reaching the maximum tone separation of 3 MHz. As for the cases where the tone separation is kept constant: at 5 kHz, the accuracy is close to the reference but continually degrades as the SNR decreases.
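The SNR estimate described above (noise power measured with no pulses transmitted, then the received pulse power, then the ratio) can be sketched as follows. The function name is illustrative; note that the pulse-on capture still contains noise, so at low SNR this simple ratio is biased high, which matters little in the moderate-to-high SNR region used here.

```python
import numpy as np

def estimate_snr_db(noise_only, pulse_on):
    """SNR estimate from a noise-only capture and a pulse-on capture:
    10*log10(received pulse power / noise power)."""
    p_noise = np.mean(np.abs(noise_only) ** 2)
    p_pulse = np.mean(np.abs(pulse_on) ** 2)
    return 10.0 * np.log10(p_pulse / p_noise)
```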
For the 3 MHz tone separation waveform, the accuracy is better than necessary until the 200th processing interval, when the SNR degrades too far. At points prior to the 200th processing interval, the waveform uses far more bandwidth than needed for the desired accuracy, needlessly occupying spectrum. The adaptively controlled waveform starts at a narrow tone separation that gradually increases as the SNR degrades. The resulting standard deviation closely follows the reference value until the 200th processing interval, when the SNR degrades so far that the necessary tone separation is beyond the 3 MHz limit. The adaptive ranging system thus achieved the desired ranging accuracy without occupying unnecessary spectrum.

In the second experiment, the transmitted power was set to a constant value with an SNR of 37 dB for the received signal, and the reference point was modified to show how the adaptive ranging system behaves under these conditions. Fig. 5.24 shows the accuracy of the adaptive ranging system in following the desired reference standard deviation, along with the modifications applied to the tone separation to achieve this adaptive task. Within the specified maximum tone separation of 3 MHz, the system was able to adapt to a reference standard deviation of 3 mm; the minimum achievable standard deviation was 1.5 mm when the reference was set to 0.3 mm, which dictated a tone separation beyond the 3 MHz limit. The received signal in Fig. 5.24 has relatively high SNR values for a non-cooperative ranging system (around 37 dB); however, it is important to note that since in the proposed system architecture an active repeater is used on every node to amplify and retransmit the received signal to its source, only one-way free-space path loss (FSPL) is incurred instead of two-way loss.
Considering that the active repeater retransmits the pulse at the same initial power, the use of active repeaters helps keep the operation in the moderate-to-high SNR region, where the accuracy of the ranging system is dependent on the tone separation. The FSPL for isotropic antennas can be found from the Friis transmission formula, from which is obtained

$$ \mathrm{FSPL} = \left( \frac{4\pi R f}{c} \right)^2, \quad (5.61) $$

with f representing the transmitted carrier frequency of the ranging signals.

Figure 5.23: The behavior of the adaptive ranging system for a varying SNR.

Figure 5.24: The behavior of the adaptive ranging system for a varying reference point.

5.3.1.3 Adaptive Outdoor Ranging Experiment

In this section, the adaptive ranging framework is explored outdoors over a 90 m link. The corner reflector that was used for the indoor testing was replaced by an active repeater to generate behavior similar to what can be expected in an operational open-loop coherent distributed array. In addition, wireless frequency synchronization was implemented between the radar and the repeater.

Figure 5.25: (a) Without phase and frequency synchronization, the nodes in the array cannot align their phases, as it is impossible to prevent the frequencies on each node from drifting relative to one another; even when the frequency seems stable, it is not possible to transmit coherent phases. (b) Once both phase and frequency are synchronized, it is possible to transmit a common coherent frequency to the target; ranging is the part of localization that enables phase alignment. From [10].

A block diagram summarizing the importance of phase (through localization, which depends on time delay) and frequency synchronization for coherent operation between the transmitting/receiving nodes is shown in Fig. 5.25.
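As a sketch of the link-budget bookkeeping, the FSPL of (5.61) can be evaluated directly; the function names are illustrative. For the 90 m outdoor link and the 2.45 GHz ranging carrier used below, the one-way loss is roughly 79 dB.

```python
import math

def fspl_linear(distance_m, freq_hz):
    """Free-space path loss for isotropic antennas, eq. (5.61):
    FSPL = (4*pi*R*f/c)^2, as a linear power ratio."""
    c = 299_792_458.0  # speed of light, m/s
    return (4.0 * math.pi * distance_m * freq_hz / c) ** 2

def fspl_db(distance_m, freq_hz):
    return 10.0 * math.log10(fspl_linear(distance_m, freq_hz))
```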
Although this chapter does not focus on wireless frequency synchronization, a brief description of the circuitry and approach used is given here. As mentioned in Chapter 1, wireless frequency synchronization (frequency synchronization in general) is essential for the operation of coherent distributed arrays, whether closed loop or open loop; otherwise it is not possible to transmit coherent signals to the target. In addition, wireless frequency synchronization is essential for cooperative ranging, particularly when an active repeater is used with different carrier frequencies for the reception and transmission of the ranging signals. Using different carrier frequencies mitigates multipath and self-induced interference. In this case, frequency-locked repeaters minimize phase and frequency errors, which can be explained as follows. Consider the case where a baseband signal (ranging pulse) with a frequency f_b is modulated on a carrier f_c1. This ranging pulse is transmitted from the radar and received at a repeater that downconverts it using the carrier f'_c1. f'_c1 is nominally f_c1, and it is generally equal to f_c1 if both the radar and repeater are frequency locked; otherwise it cannot be equal, due to the relative frequency drift. In the general case, the downconverted frequency at the secondary node (represented by the repeater) becomes f_b + f_c1 − f'_c1. This baseband frequency is then upconverted to f_b + f_c1 − f'_c1 + f'_c2 so it can be retransmitted back to the radar. Once received back at the radar, the downconverted frequency becomes f_b + f_c1 − f'_c1 + f'_c2 − f_c2. Ideally, if both nodes are frequency locked, with f'_c1 = f_c1 and f'_c2 = f_c2, the recovered signal will have a frequency f_b. Nevertheless, when this is not the case, the recovered signal will have an offset frequency that will prevent the radar from estimating the correct range of the target.
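The frequency bookkeeping above reduces to simple arithmetic; a sketch (function name illustrative) makes the offset explicit:

```python
def recovered_baseband(fb, fc1, fc1p, fc2p, fc2):
    """Baseband frequency recovered at the radar after the repeater round trip:
    upconvert at fc1, downconvert at fc1' (repeater), upconvert at fc2'
    (repeater), downconvert at fc2 (radar)."""
    return fb + fc1 - fc1p + fc2p - fc2
```

With both nodes locked (f'_c1 = f_c1, f'_c2 = f_c2) the original f_b is recovered; any residual carrier offset appears directly as a baseband frequency offset.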
In addition, even if the same bands were used for transmission and reception (f_c1 = f_c2 and f'_c1 = f'_c2), frequency synchronization is necessary, as it significantly reduces the instantaneous phase and frequency errors, leading to improved ranging accuracy.

The block diagram of the wireless frequency synchronization circuit is shown in Fig. 5.26. A detailed explanation and analysis of this circuit is presented in Chapter 4. Briefly, this adjunct circuit receives a two-tone CW signal from the primary node, with a tone separation equal to the desired reference input frequency, which was 10 MHz for the SDRs used. The circuit amplifies and splits the received signals and mixes them to regenerate a 10 MHz output. The 10 MHz output is amplified further to reach the desired power, which for the Ettus USRP X310 SDR was between 0 and 15 dBm. Finally, the amplified signal is fed to the reference input to lock the frequency of the secondary node to that of the primary node.

As shown in Fig. 5.26, the outdoor adaptive ranging approach used three SDRs. Node A, the primary node, implemented the adaptive ranging algorithm in addition to transmitting the two-tone wireless frequency synchronization signals. Node B, a secondary node, was represented by an active repeater connected to a self-mixing adjunct circuit that allows it to lock its frequency to that of the primary node. All the carriers in this outdoor test used the Industrial, Scientific, and Medical (ISM) frequency bands. The two tones of the wireless frequency synchronization signal were transmitted at 910 MHz and 920 MHz using one SDR. The signals were composed only of the two carrier frequencies, with no other modulation, to minimize noise in the extracted reference input frequency.
These signals were fed to the frequency locking circuit, which was equipped with a cavity-based band-pass filter to ensure that no signals other than the two desired tones are mixed together. The 10 MHz output of this adjunct self-mixing circuit was fed to the reference input of the repeater to ensure that both nodes operate at exactly the same frequencies. Once node B is locked to node A, the radar on node A starts transmitting its ranging signals at a 2.45 GHz carrier. The same adaptive framework that was used in Section 5.3.1.2 was used in this outdoor testing; the adaptive approach is summarized in Fig. 5.27. Node B receives these ranging signals, amplifies them, and retransmits them back to node A at a 5.8 GHz carrier. The antennas of the two nodes were 90 m apart; the outdoor antennas can be seen in Fig. 5.28. Wideband grid antennas with an operational frequency range from 600 MHz to 6.5 GHz were used, covering the entire desired bandwidth for this test. The antenna gain was 15 dBi to 26 dBi depending on the frequency band, with 15 dBi corresponding to the lowest band. All the antennas and SDRs were equipped with surge protection devices. The hardware used for nodes A and B was placed indoors and is shown in Figs. 5.29 and 5.30.

Figure 5.26: Block diagram for the outdoor adaptive ranging experiment, where wireless frequency synchronization was implemented. Node A is the primary node, while node B represents the secondary node.
The two-tone CW wireless frequency synchronization signals were transmitted at 910 MHz and 920 MHz, and the two-tone adaptive ranging waveform was transmitted at a 2.45 GHz carrier and received at a 5.8 GHz carrier. From [13] © 2021 IEEE.

The transmitted power was no more than 15 dBm for every carrier, and due to cable losses, the maximum Effective Isotropic Radiated Power (EIRP) was less than 36 dBm. The major attenuation resulted from the connection between the SDRs and the antennas.

Figure 5.27: The outdoor adaptive inter-node cooperative ranging architecture. The same approach as in Fig. 5.18 was used, with the addition of a wirelessly frequency-locked repeater. From [10].

Figure 5.28: Each node was equipped with two antennas, one for transmit (TX) and one for receive (RX). The antennas on each node were separated by around 2 m, and the separation between the nodes was 90 m. From [10].

Since attenuation in general is not desirable, and since for long distances it is essential to keep attenuation minimal, the outdoor-rated LDF4-50A 1/2 cable was used to connect the antennas to the SDRs. Each connection was around 20 m. The attenuation depends on the type of cable and the transmitted frequency; for a 20 m LDF4-50A 1/2 cable, the average attenuation is as follows: 1.5 dB for the band 902 MHz to 928 MHz, 2.4 dB for the band 2.4 GHz to 2.5 GHz, and 4 dB for 5.725 GHz to 5.875 GHz.

Figure 5.29: The hardware used to operate node A, including two SDRs.

Figure 5.30: The hardware used to operate node B, including one SDR and the adjunct self-mixing circuit.
Similarly, the antenna gain is dependent on frequency; the estimated values were as follows: 16 dBi for the band 902 MHz to 928 MHz, 22 dBi for the band 2.4 GHz to 2.5 GHz, and 25 dBi for 5.725 GHz to 5.875 GHz.

This ranging system was designed to enable monitoring of the adaptive radar in varying weather conditions. The performance of the radar was affected by a multitude of uncertainties, including varying SNR and interference levels, vibration, and rain, among others. All these effects were assessed for the adaptive radar with the framework of Fig. 5.27. In this section, the standard deviation was calculated using 200 consecutive range estimates; this calculated standard deviation was used as feedback to the system. The transmitted baseband frequencies for the two-tone signal were f_1 = 20 kHz and 20 kHz ≤ f_2 ≤ 7.52 MHz. f_2 was determined by the output of the controller, allowing a bandwidth range between 0 and 7.5 MHz. The frequency of the disambiguation pulse was set to 1.875 MHz, equivalent to a fourth of the maximum possible bandwidth. The ranging pulse width and the disambiguation pulse width were both constant, equal to 143.7 µs and 533 ns, respectively. A duration of 159.7 µs was allocated for each of the two pulses, giving a total time of 319.4 µs. However, the ranging and disambiguation pulses were logged every 105 ms to allow continuous data logging and to prevent any buffer underflow or overflow. All the signal processing was performed in real time in LabVIEW. The targeted standard deviation was set to 10 mm; this value was selected since it allows the controller to operate smoothly in the assigned bandwidth, which was 7.5 MHz.
Lower or higher target values can be selected depending on the application; however, this target value allows the reader to observe the full benefits of the proposed adaptive ranging approach, since on average 3.5 MHz (half of the bandwidth) was needed to achieve a 10 mm standard deviation at the available SNR. Before calculating the standard deviation, 40 groups of 5 consecutive range estimates were assembled, and the 5 estimates in each group were averaged to obtain the final range estimate. This averaging of 5 consecutive estimates improves the standard deviation of the final estimate by a factor of √5. Once calibrated, the PI controller gains were Kp = 10µ and Ti = 3.3.

Weather conditions were recorded using a Davis 6250 Vantage Vue weather sensor. The weather sensor was attached to the mast of the receive antenna of node A, as shown in Fig. 5.31. This sensor was able to measure and log the rainfall, pressure, humidity, wind direction, wind speed, and temperature once every minute. Using the Davis 6250 Vantage Vue sensor allowed me to monitor and display some of the weather effects on the ranging performance.

Evaluation of some of the external and weather effects on the ranging performance was done for 24 hours in total; the data was captured from June 1, 2020 23:56 until June 2, 2020 23:56. There were 4090 processing intervals in the 24 hour interval. In the first outdoor experiment, the adaptive approach was not running, and ranging was done using fixed frequencies: f_1 was set to 20 kHz, and f_2 was set to 3.5 MHz.

Figure 5.31: An image showing the Davis 6250 Vantage Vue weather sensor attached to the receive antenna of node A.

The ranging standard deviations during the 24 hours are shown in Fig. 5.32a.
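The √5 improvement from averaging groups of five estimates, mentioned above, can be checked numerically. This is a quick illustration with synthetic Gaussian range errors (more groups than the 40 used in the experiment, purely to stabilize the estimate):

```python
import numpy as np

rng = np.random.default_rng(1)
raw = rng.normal(0.0, 1.0, size=(4000, 5))  # groups of 5 raw range estimates
final = raw.mean(axis=1)                    # one averaged estimate per group
ratio = raw.std() / final.std()             # approaches sqrt(5) for i.i.d. errors
```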
Although the ranging waveform was constant, it can be seen that the standard deviation ranged from 0.005 m up to 0.05 m (the standard deviation was calculated based on an averaging of 5 range estimates) in a 24-hour period, and these extremes can vary even further depending on the external conditions. Generally, high wind speeds increase antenna vibration, leading to degraded performance. High humidity and rain rates increase the attenuation, leading to a decrease in the SNR. Moreover, nearby transmitters such as WiFi networks induce time-varying interference, with interference power varying based on the network consumption and the relative locations of the surrounding transmitters. Although the effects of some of the mentioned factors can be estimated at a given time, other effects, from factors such as interference, cannot be estimated in advance. The effects of SNR, wind speed, humidity, and rain rate are visible in Fig. 5.32. The drop in SNR from 550 to 1600 processing intervals had a visible negative effect on the standard deviation. High wind speeds at around 300 processing intervals led to an increase in the ranging standard deviation, while a decrease in wind speed beyond 3000 processing intervals contributed to the decrease in ranging standard deviation. Similarly, high humidity and rain rates from 550 up to 1000 processing intervals led to higher ranging standard deviations.

Figure 5.32: (a) Ranging standard deviation for a 24-hour interval, where f1 = 20 kHz and f2 = 3.5 MHz. The effects of SNR changes and wind speed are shown in (b), and the contributions of humidity and rain rate are presented in (c).

Since in this dissertation we are interested in the beamforming gain, the ranging standard deviation results from Fig. 5.32a (treated as RMSE) were used to determine the coherent frequency that can be achieved for a desired beamforming gain in a two-node distributed array system.
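As a rough illustration of how a ranging error budget constrains the carrier, the sketch below assumes a simplified two-node model in which the pairwise phase error is Gaussian with standard deviation 2πfσr/c and the two-node coherent gain is cos²(φ/2). This model and its parameters are illustrative assumptions for this sketch, not the dissertation's exact derivation, which accounts for additional error sources:

```python
import math
from statistics import NormalDist

def max_coherent_freq(sigma_r, gain=0.9, prob=0.9, c=3e8):
    """Largest carrier frequency f such that P(Gc > gain) >= prob, under the
    ASSUMED model: two-node coherent gain Gc = cos^2(phi/2) with pairwise
    phase error phi ~ N(0, (2*pi*f*sigma_r/c)^2)."""
    phi_max = 2.0 * math.acos(math.sqrt(gain))    # |phi| bound giving Gc > gain
    z = NormalDist().inv_cdf(0.5 * (1.0 + prob))  # two-sided Gaussian quantile
    return phi_max * c / (2.0 * math.pi * sigma_r * z)

f_max = max_coherent_freq(sigma_r=0.010)          # 10 mm ranging error budget
```

Under these assumptions, a 10 mm ranging standard deviation supports a carrier on the order of a few GHz, which is consistent in magnitude with the measured curves discussed below; halving σr doubles the supported carrier.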
A coherent gain of 0.9 was of interest, and the probabilities of 0.9, 0.8, and 0.7 of achieving this desired coherent gain for a desired carrier frequency were evaluated. The clarity of the achievable coherent frequency for the desired coherent gain was improved by plotting the actual values of the 4090 individual estimates using faded colors and their smoothed values using clear solid colors. The smoothed values were obtained using a 10-point averaging window, and the results are shown in Fig. 5.33. It can be seen that for P(Gc > 0.9) = 0.9, the maximum coherent beamforming frequency that can be supported varied between 0.7 GHz and 2.7 GHz. This shows the advantage of using an adaptive ranging waveform that ensures a coherent transmission at a desired carrier frequency in all weather conditions with minimal bandwidth for the ranging signals.

Figure 5.33: Achievable coherent frequency using the obtained ranging standard deviation. Three probabilities for achieving a coherent gain of at least 0.9 were investigated. The lines with solid colors represent the moving average with a window of 10, while the faded lines represent the individual estimates.

Once the achievable coherent frequencies were evaluated for a two-tone waveform with constant frequency, the adaptive ranging approach was tested for comparison. As mentioned earlier, the target ranging standard deviation was selected as 10 mm. The effects of weather conditions on the adaptive ranging approach were recorded for 24 hours, from June 3, 2020 03:05 until June 4, 2020 03:05. Fig. 5.34 shows the same parameters as Fig. 5.32, with the addition of the adaptively updated upper frequency f2 of the ranging waveform. It can be seen that f2 was adaptively modified to counteract the environmental and external conditions and ensure stable ranging performance. The variations in SNR and wind speed had the largest effect on f2.
The achievable coherent beamforming frequencies were reevaluated for the standard deviation values in Fig. 5.34, and the results are displayed in Fig. 5.35. It can be seen that the stability of the achievable coherent frequencies was considerably improved; this was possible due to the dynamic update of the waveform bandwidth. By setting the target ranging standard deviation to 10 mm, the maximum achievable coherent frequencies for probabilities of 0.9, 0.8, and 0.7 of achieving a 0.9 coherent gain were 1.5 GHz, 2.2 GHz, and 3 GHz, respectively. A longer evaluation of the adaptive framework was done over a period of 7 days; the data was captured from June 16, 2020 16:53 until June 23, 2020 16:53. Fig. 5.36 displays the same parameters as Fig. 5.34, with the only difference being that in Fig. 5.36 the data was captured over a seven-day period. In total, 28,630 processing intervals were captured. The achievable coherent frequencies over all of this period are displayed in Fig. 5.37.

Figure 5.34: (a) The adaptively modified second tone f2 of the transmitted ranging pulses. (b) Ranging standard deviation for a 24-hour interval, where f1 = 20 kHz and f2 was adaptively modified. The SNR changes and wind speed are shown in (c), and the humidity and rain rate are presented in (d).

Figure 5.35: Achievable coherent frequency using the obtained ranging standard deviation in the adaptive framework. Three probabilities for achieving a coherent gain of at least 0.9 were investigated. The lines with solid colors represent the moving average with a window of 10, while the faded lines represent the individual estimates.

5.3.2 Using Only One Ranging Pulse as Feedback

We have seen in Section 5.3.1 the framework for achieving adaptive ranging to maintain a target ranging accuracy in the event of any change in the environment. One of the drawbacks of the proposed approach could be the long time before a modification is applied to the transmitted bandwidth.
This delay is due to the time needed to calculate the standard deviation, where in Section 5.3.1.3, 200 range estimates were used to estimate the standard deviation. The delay that is introduced by waiting for 200 range estimates before applying any change to the waveform is usually acceptable, since the required processing time before an update is on the order of milliseconds. However, speeding up this task can be beneficial, especially in fast-changing environments. This can be done by modifying the perception process to reduce the time required before the controller can determine the new update to the waveform. The modification can be done by selecting a metric other than the standard deviation of the range estimates to serve as feedback to the controller. In this section, we implement an adaptive ranging framework that relies on the SNR of the received pulses rather than the standard deviation. Based on the observed SNR, the second tone of the ranging waveform is modified to maintain a desired ranging accuracy while minimizing the waveform bandwidth. On one hand, this approach is fast and can account for any drop in SNR; on the other hand, it cannot act against wind-induced degradation, since it cannot detect the effect of vibration.

Figure 5.36: (a) The adaptively modified second tone f2 of the transmitted ranging pulses. (b) Ranging standard deviation for a 1-week interval, where f1 = 20 kHz and f2 was adaptively modified. The SNR changes and wind speed are shown in (c), and the humidity and rain rate are presented in (d).

Figure 5.37: Achievable coherent frequency using the obtained ranging standard deviation in the adaptive framework. Three probabilities for achieving a coherent gain of at least 0.9 were investigated. The lines with solid colors represent the moving average with a window of 10, while the faded lines represent the individual estimates.
5.3.2.1 SNR Estimation from One Pulse

Accurate SNR estimation is essential for the proposed adaptive ranging algorithm and plays a significant role in wireless systems in general. SNR is used to characterize wireless systems, signal processing algorithms, and communications performance, among others. SNR can be used for decision making, as it is intended to be used in this work, or as in other works such as [140], where it was used to adaptively modify the modulation of quadrature amplitude modulation (QAM) signals. SNR can generally be used to estimate the bit error rate (BER) or the sensing accuracy. Various techniques can be used to estimate the SNR of a given signal. These techniques include the split-symbol moments estimator [141], maximum-likelihood estimator [142], squared signal-to-noise variance estimator [143], second- and fourth-order moments [144], and signal-to-variation ratio [145], among others. A comparison of these SNR estimators is shown in [146]. For blind SNR estimation, eigenvalue decomposition [147] can be used. Although this approach is applicable in most cases, it suffers from long processing time due to its covariance matrix calculation, and it is applicable only when multiple pulses are captured. These impediments make this estimator less desirable, especially since in this work we are interested in fast SNR estimation. SNR estimation using eigenvalue decomposition is summarized in the following.

The eigenvalue decomposition-based SNR estimator can estimate the power of the signal and noise without prior knowledge of the received waveform, making it a strong candidate for SNR estimation for RF signals. In essence, this algorithm starts by forming a matrix X from L received pulses with N samples each. X is represented by

X = \begin{bmatrix} \chi_{1,1} & \chi_{1,2} & \cdots & \chi_{1,L} \\ \chi_{2,1} & \chi_{2,2} & \cdots & \chi_{2,L} \\ \vdots & \vdots & \ddots & \vdots \\ \chi_{N,1} & \chi_{N,2} & \cdots & \chi_{N,L} \end{bmatrix}. (5.62)

The covariance matrix, which is the most time-consuming operation for this estimator, is computed from

R_x = \frac{1}{N} X X^H, (5.63)

where X^H is the Hermitian transpose of the matrix X. Singular-value decomposition is used to estimate the eigenvalues \lambda_1, \ldots, \lambda_L of the matrix R_x. Once obtained, the eigenvalues are sorted in decreasing order. Since the desired signal, which is represented by L pulses, is correlated, the highest eigenvalue \lambda_1 typically represents the power of the received signal. In some cases, for complex waveforms, \lambda_2 and/or \lambda_3, etc., can represent part of the received pulse. Assuming that only \lambda_1 represents the power of the received pulse, \lambda_2 through \lambda_L represent the power of the noise. The noise level can be estimated using

\gamma^2 = \frac{1}{L-1} \sum_{l=2}^{L} \lambda_l. (5.64)

The signal power level is calculated using

P_s = \frac{\lambda_1 - \gamma^2}{L}. (5.65)

In the case where the transmitted signal is a continuous wave or does not have any zero parts, the SNR is calculated by

\mathrm{SNR_{dB}} = 10 \log_{10}\left(\frac{P_s}{\gamma^2}\right). (5.66)

For pulsed signals, which is the case in the presented work, (5.66) can be modified to accommodate the used pulse width by

\mathrm{SNR_{dB}} = 10 \log_{10}\left(\frac{P_s}{\gamma^2}\right) + 10 \log_{10}\left(\frac{\mathrm{PRI}}{T}\right). (5.67)

This way, the power of the noise is only considered in the region where the captured signal contains the ranging pulse.

5.3.2.2 Ranging Standard Deviation Estimation from SNR

The proposed SNR estimation approach is based on comparing the matched filter output of the received ranging pulse to that of the transmitted pulse. This step allows us to determine the signal power level. As for the noise level, it is estimated by observing the received signals between every two ranging pulses. By estimating the signal levels in these regions, the noise or interference levels can be estimated. Once both the signal RMS amplitude and the noise RMS amplitude are obtained, the SNR can be easily estimated.
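The eigenvalue decomposition estimator summarized by (5.62)-(5.67) can be sketched as follows. For compactness, the sketch forms the L×L Gram matrix X^H X / N, which shares the nonzero eigenvalues of (5.63) and is cheaper when L ≪ N; the synthetic pulse parameters are illustrative:

```python
import numpy as np

def evd_snr_db(pulses, pri_over_t=1.0):
    """Blind SNR estimate from L pulses of N samples each (Eqs. (5.62)-(5.67)).
    Uses the L x L Gram matrix X^H X / N, which has the same nonzero
    eigenvalues as X X^H / N but is much cheaper when L << N."""
    X = np.asarray(pulses, dtype=complex).T            # N x L, one pulse per column
    N, L = X.shape
    Rx = X.conj().T @ X / N                            # L x L covariance
    lam = np.sort(np.linalg.eigvalsh(Rx))[::-1]        # eigenvalues, descending
    gamma2 = lam[1:].sum() / (L - 1)                   # noise power, Eq. (5.64)
    Ps = (lam[0] - gamma2) / L                         # signal power, Eq. (5.65)
    return 10 * np.log10(Ps / gamma2) + 10 * np.log10(pri_over_t)  # (5.66)-(5.67)

# Synthetic check: 20 identical complex-tone pulses in complex AWGN.
rng = np.random.default_rng(1)
n = np.arange(256)
tone = np.exp(2j * np.pi * 0.1 * n)                    # unit-power pulse
noise = 0.3 * (rng.standard_normal((20, 256)) + 1j * rng.standard_normal((20, 256)))
snr_est = evd_snr_db(tone + noise)                     # true SNR is about 7.4 dB
```

Because the correlated signal concentrates in one eigenvalue while the noise spreads over the rest, the estimate needs no knowledge of the waveform; the cost is the covariance computation and the requirement of multiple captured pulses noted above.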
The advantage of this approach is that it is computationally inexpensive, since most of the computation, which is represented by computing the matched filter outputs, can either be done before the ranging algorithm starts running in real time (for the matched filter output of the transmitted pulse, i.e., the autocorrelation) or is already done when estimating the range (for the matched filter output of the received pulse). This SNR estimator is only applicable if the signal of interest is already known, which is the case for radars. To visualize the presented algorithm, let us consider a transmitted signal S_TX with f1 = 10 kHz and f2 = 1.01 MHz, as represented in Fig. 5.38. The received signal S_RX will have a smaller amplitude with added noise, which we assume in this section is AWGN. The matched filter output of the received pulse can be evaluated using

S_{out}[n] = \sum_{m=-\infty}^{\infty} h[n-m] S_{RX}[m], (5.68)

where h[n] = S_{TX}^*[-n] is the linear filter that maximizes the SNR of the received signal. The autocorrelation is computed similarly by replacing S_{RX}[m] with S_{TX}[m] in (5.68). The autocorrelation can be estimated before the ranging starts, since we define the transmitted waveform, which reduces the computational time. As for (5.68), the matched filter output is obtained when performing the time-delay estimation; thus, similarly to the autocorrelation step, no extra computation is needed. When computing the autocorrelation and cross-correlation, the only parameter needed for the SNR estimation is the magnitude of their highest peaks. The autocorrelation and cross-correlation of the signals in Fig. 5.38 are shown in Fig. 5.39.

Figure 5.38: An example of transmitted (orange) and received (blue) two-tone signals with 11 dB SNR. From [12] © 2021 IEEE.
Once (5.68) is performed, the time delay is estimated for the ranging step; thus, the region that contains only noise components can be determined using the knowledge of the pulse width and PRI. A smaller noise region can be selected to account for any time-delay estimation error. Once the region that contains only noise is determined, the RMS noise amplitude A_N is directly obtained. The SNR can be calculated once A_S is determined, and this can be done by comparing the peak of the matched filter output with the peak of the autocorrelation of the transmitted signal.

Figure 5.39: A simulated example of the autocorrelation and cross-correlation of the pulsed two-tone signals that are shown in Fig. 5.38, where the ratio of amplitudes of the received to transmitted pulses is K = 0.25. From [12] © 2021 IEEE.

Generally, a simplified representation of the received signal can be obtained using

S_{RX}[n] = K \cdot S_{TX}[n] + w[n], (5.69)

where K is a scaling factor ranging from 0 to 1, and w represents the AWGN in the received signal. Combining (5.68) and (5.69) gives

S_{out}[n] = K \sum_{m=-\infty}^{\infty} h[n-m] S_{TX}[m] + \sum_{m=-\infty}^{\infty} h[n-m] w[m], (5.70)

where

\sum_{m=-\infty}^{\infty} h[n-m] S_{TX}[m] = \sum_{m=-\infty}^{\infty} S_{TX}^*[m-n] S_{TX}[m] (5.71)

is the autocorrelation of the transmitted pulse. The term

\sum_{m=-\infty}^{\infty} h[n-m] w[m] (5.72)

is the cross-correlation of the transmitted pulse with the received noise. Since the two terms are uncorrelated, \sum_{m=-\infty}^{\infty} h[n-m] w[m] tends to zero for long pulses with sufficient integration time or after averaging multiple captured pulses, yielding

S_{out,avg}[n] = K \sum_{m=-\infty}^{\infty} S_{TX}^*[m-n] S_{TX}[m]. (5.73)

Once both S_{out,avg} and \sum_{m=-\infty}^{\infty} S_{TX}^*[m-n] S_{TX}[m] are calculated, K can be estimated by comparing the magnitudes of the two peaks, where K is the ratio of the peaks.

Figure 5.40: Estimation accuracy of the eigenvalue decomposition-based SNR estimator. Three sampling rates were selected to evaluate the change in accuracy. From [12] © 2021 IEEE.
After determining K, A_S is computed from

A_S = K \sqrt{\frac{1}{N} \sum_{n=0}^{N} \left(S_{TX}[n]\right)^2}. (5.74)

The SNR estimate is thus given by

\mathrm{SNR_{dB}} = 10 \log_{10}\left(\left(\frac{A_S}{A_N}\right)^2\right). (5.75)

The accuracy of this SNR estimator depends on the ability to accurately estimate the peak magnitude, which is possible if the sampling rate is high enough. Moreover, the estimation accuracy of the peak can be further improved by utilizing interpolation, sinc fitting, the zero crossing of the Hilbert transform, or FFT pruning [148], among others. These are the same techniques that can be used to estimate the refined peak location, which were discussed in Section 5.2. The SNR estimation accuracy of the proposed approach (using the matched filter output) was compared to that of the eigenvalue decomposition approach through simulation. The transmitted signals were based on a two-tone waveform with 1.2 ms pulse width, f1 = 10 kHz, and f2 = 1.01 MHz. 100 captures with 5 ms duration each were processed for every case. The SNR was estimated for the case where only the in-phase part of the signal was captured. By considering only the in-phase part of the signal, the initial phase of the received pulse plays a big role in the achievable accuracy of the proposed method. In the case where we are matched filtering the complex signal with both in-phase and quadrature parts, the received phase of a two-tone pulse hardly affects the peak amplitude of the matched filter output. Fig. 5.40 shows the performance of the eigenvalue decomposition-based SNR estimator, whereas Figs. 5.41 and 5.42 show the performance of the matched filter-based SNR estimator.

Figure 5.41: Estimation accuracy of the matched filter-based SNR estimator. One of the two tones had a ±90° phase shift (same results for both cases) in comparison to the filter, which leads to the lowest accuracy since only the in-phase signal is being considered. Three sampling rates were selected to evaluate the change in accuracy. From [12] © 2021 IEEE.

In Fig.
5.42, the received phase was set equal to the transmitted phase, which gives the highest accuracy, whereas in Fig. 5.41, the received phase of the lowest tone was shifted by ±90°, giving the lowest possible estimation accuracy. Sampling rates of 2.5 MSps, 25 MSps, and 250 MSps were evaluated. The error of both estimation algorithms was calculated using E = |Actual SNR − Estimated SNR|, where both the actual and estimated values were in dB. The performance of the eigenvalue decomposition approach is superior at 2.5 MSps, with approximately 10 dB improvement over the matched filter SNR estimation approach. However, when the phases are aligned, as in Fig. 5.42, the matched filter approach gives a much higher accuracy than the eigenvalue decomposition approach for the 250 MSps case, especially at high SNR values, where more than 10 dB improvement was observed. This behavior is expected, since at a lower sampling rate the peak of the matched filter output is discretized, and often a difference between the magnitude of the actual peak and the magnitude of the discretized point closest to the actual peak is observed. Nevertheless, by employing the peak estimators that were presented in Section 5.2, a refined estimate of the main lobe is obtained, leading to accuracies similar to what was observed for the 250 MSps case, even at low sampling rates. Moreover, the cooperative ranging approach typically operates in relatively high SNR regions, where the SNR estimation errors of both algorithms are very low and can be ignored.

Figure 5.42: Estimation accuracy of the matched filter-based SNR estimator. The initial phases of both the signal and filter were equal, leading to the highest accuracy. Three sampling rates were selected to evaluate the change in accuracy. From [12] © 2021 IEEE.

Figure 5.43: Experimental setup for testing the proposed SNR estimation method that relies on the matched filter output of the received pulses. The transmitter was connected to the receiver via a 2 m coaxial cable with an additional 30 dB attenuator to ensure that the received power was within the acceptable range.

Figure 5.44: Measured SNR estimates using the matched filtering approach for a single pulse and 20 sequential pulses in an SDR-based system. The SNR was controlled by modifying the transmitted power. From [12] © 2021 IEEE.

The matched filter-based SNR estimator was tested experimentally using the setup in Fig. 5.43. An Ettus USRP X310 SDR with two UBX 160 daughterboards was used. The transmitting port and the receiving port were on separate daughterboards to minimize the interference. A 2 m cable was used to connect the two ports and to ensure that the SNR was tested in a controlled environment where it is possible to obtain the actual transmitted power (within a small error margin). A 30 dB attenuator was added to ensure that the received power did not exceed the maximum power threshold. The noise power was held constant throughout the experiment, and the transmitter power was adjusted electronically to modify the SNR. Less than 2 dB error was observed at SNR values above −30 dB, as shown in Fig. 5.44. Using 1 pulse to estimate the SNR yielded good estimates, with no difference compared to averaging the SNR values from 20 pulses once the actual SNR was beyond −5 dB. Generally, averaging the SNR estimates of 20 pulses improves the standard deviation of the estimates by a factor of √20. The improved SNR estimation accuracy relative to blind estimators is due to the knowledge of the received signals, since the pulses are used for the radar. The processing gain for the matched filter was 31 dB, which explains the ability of this SNR estimator to accurately estimate SNR values down to −31 dB. The 0.25 dB constant error from −20 dB up to 26 dB is due to the instability in the noise power of the system.
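The matched-filter estimation chain of (5.68)-(5.75) can be sketched as follows; the two-tone parameters mirror the simulation above, while the 0.4 ms pulse length, noise level, and scaling K = 0.25 are illustrative choices for this sketch:

```python
import numpy as np

def matched_filter_snr_db(s_tx, s_rx, noise_region):
    """SNR estimate following Eqs. (5.68)-(5.75): K is the cross-correlation
    peak divided by the autocorrelation peak, A_S follows from K and the
    known transmitted pulse, and A_N is the RMS of a noise-only region."""
    auto_peak = np.abs(np.correlate(s_tx, s_tx, mode="full")).max()
    cross_peak = np.abs(np.correlate(s_rx, s_tx, mode="full")).max()
    K = cross_peak / auto_peak                        # ratio of the two peaks
    A_S = K * np.sqrt(np.mean(s_tx ** 2))             # Eq. (5.74)
    A_N = np.sqrt(np.mean(noise_region ** 2))         # RMS noise amplitude
    return 10 * np.log10((A_S / A_N) ** 2)            # Eq. (5.75)

# Synthetic two-tone pulse (f1 = 10 kHz, f2 = 1.01 MHz, 10 MSps, 0.4 ms).
t = np.arange(4000) / 10e6
s_tx = np.sin(2 * np.pi * 10e3 * t) + np.sin(2 * np.pi * 1.01e6 * t)
rng = np.random.default_rng(2)
s_rx = 0.25 * s_tx + 0.1 * rng.standard_normal(t.size)    # K = 0.25
noise_only = 0.1 * rng.standard_normal(5000)              # inter-pulse capture
snr_est = matched_filter_snr_db(s_tx, s_rx, noise_only)   # about 8 dB expected
```

Both correlation peaks are reused from the ranging step in practice, which is what makes the estimator cheap; here they are recomputed only for the sake of a self-contained example.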
5.3.2.3 SNR-Based Perception Design

As mentioned earlier, the aim of this section is to minimize the processing time in the perception step that was introduced initially in Fig. 5.18. This can be done by utilizing minimal information to deliver feedback to the controller, which in this section is achieved by using the SNR of the received signal instead of the standard deviation of the range estimates. Consequently, the new block diagram representing the adaptive framework is given in Fig. 5.45. The ranging accuracy still needs to be the deciding factor in the adaptive loop, but instead of extracting it from consecutive range estimates, it is estimated from the SNR estimates. This procedure considerably reduces the processing time, since feedback can be given to the controller once one ranging pulse is received, instead of waiting for tens or hundreds of pulses before an action can be made.

Figure 5.45: Adaptive ranging is necessary when high ranging accuracy and a low spectral footprint are needed. This can be done by implementing three processes that ensure that only the desired bandwidth is used. A perception process is used to estimate the SNR of the received ranging pulses and convert this SNR to an estimated standard deviation. The controller compares the estimated standard deviation to a desired value and sends the necessary feedback to the action block that generates the desired waveform. This generated waveform is used for ranging until a new waveform is generated for the next loop. From [12] © 2021 IEEE.

The ranging standard deviation is estimated using the SNR estimated from one pulse, as explained in Section 5.3.2.1. The estimated SNR is fed to a range accuracy estimator block that, once calibrated, relies only on the estimated SNR. The ranging accuracy estimator is based on the CRLB that is defined in (5.16) for moderate to high SNR. The preprocessing SNR is embedded in \overline{\mathrm{SNR}}, which is defined in this dissertation as

\overline{\mathrm{SNR}} = \frac{2E}{N_0} = 2 T BW_r \, \mathrm{SNR}. (5.76)

In the majority of cases, the variance (and consequently the standard deviation) of the time-delay estimates cannot reach the CRLB, due to quantization errors, noise, nonidealities of the system, distortion, and the use of repeaters, among other factors. However, the variance of the time-delay estimates is inversely proportional to the SNR, and it can be predicted from the SNR estimates by adding a scaling factor to (5.16), so that the CRLB curve follows the actual variance of the system. The scaling factor is obtained through an initial calibration of the system, where the ranging performance of the desired system is evaluated over a variety of SNR values. The scaling factor that is applied to the CRLB curve so that it follows the observed variance was determined using an experimental setup similar to Fig. 5.43. The scaled CRLB curve is compared to the calculated variance for multiple SNR values in Fig. 5.46. As can be seen, the two curves track with minimal errors for SNR values larger than 6 dB, which is going to be the case for cooperative ranging systems.

Figure 5.46: Measured standard deviation of the time-delay estimator versus the scaled CRLB curve, demonstrating that the CRLB from (5.16) can be scaled using a constant parameter to match the observed standard deviation. This experiment was done where no vibration or interference were present. From [12] © 2021 IEEE.

Figure 5.47: Measured RMSE of ranging accuracy versus SNR (averaged over 100 measurements), showing only a 3.7 ps improvement when more than one pulse is used. From [12] © 2021 IEEE.
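A minimal sketch of the calibrated predictor is given below. The 1/√SNR̄ shape is an assumed stand-in for the exact CRLB expression of (5.16), which is not reproduced in this section; a single calibration constant absorbs that expression together with the system nonidealities, and the pulse parameters are illustrative:

```python
import numpy as np

def predict_sigma_tau(snr_db, scale, T=1.2e-3, bw=3e6):
    """Predict the time-delay standard deviation from an SNR estimate,
    assuming a CRLB-shaped law sigma_tau = scale / sqrt(SNRbar) with
    SNRbar = 2*T*BW*SNR as in Eq. (5.76). 'scale' is a calibration
    constant absorbing Eq. (5.16) and the system nonidealities."""
    snr_lin = 10.0 ** (np.asarray(snr_db, dtype=float) / 10.0)
    return scale / np.sqrt(2.0 * T * bw * snr_lin)

def calibrate_scale(snr_db, measured_sigma, T=1.2e-3, bw=3e6):
    """Least-squares fit of the single scale factor to measured std data."""
    base = predict_sigma_tau(snr_db, 1.0, T, bw)
    return float(base @ np.asarray(measured_sigma) / (base @ base))

# Synthetic calibration: "measurements" generated with a known scale factor.
snr_cal = np.array([6.0, 10.0, 14.0, 18.0, 22.0])
sigma_cal = predict_sigma_tau(snr_cal, 3e-6)
scale_fit = calibrate_scale(snr_cal, sigma_cal)       # recovers 3e-6
```

Once the scale factor is calibrated, a single-pulse SNR estimate maps directly to a predicted standard deviation, which is what allows the controller to act once per pulse.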
The approximation is extendable to even lower values once the time-bandwidth product is increased by modifying the sampling rate or the pulse width. This scaled CRLB curve is used later to estimate the time-delay variance/standard deviation using only the SNR estimate of one ranging pulse. The effect of using only one pulse versus multiple pulses to estimate the SNR (using the proposed algorithm) on the estimated standard deviation of the time-delay estimates is analyzed in Fig. 5.47. As can be seen, a minimal improvement is observed when calculating the RMSE between the adjusted CRLB curve and the actual standard deviation for a varying number of pulses. The difference in RMSE between using 1 or 40 pulses to estimate the SNR was only 3.7 ps, which is relatively negligible. This shows the robustness of this proposed modification to the perception process.

5.3.2.4 Adaptive Ranging Results

The proposed adaptive ranging algorithm using the modified perception process was tested experimentally using the Ettus USRP X310 SDR with UBX 160 daughterboards. Two horn antennas with 15 dBi gain and an operational range from 3.85 GHz to 5.85 GHz were used for transmission and reception, as shown in Fig. 5.48. A 21.5 dB low noise amplifier (LNA) was connected on the receive port. The target was represented by a corner reflector placed 4.5 m away from the antennas. The pulse width was set to 1.2 ms, and the maximum allowed tone separation was set to 3 MHz. The lower tone was transmitted at a 10 kHz baseband. The sampling rate was set to 10 MSps.

Figure 5.48: Experimental setup for the adaptive ranging system with the perception process that relies only on the estimated SNR. The two horn antennas were used for transmission and reception, and the corner reflector represented the target. From [12] © 2021 IEEE.
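The adaptive loop can be sketched as a discrete PI controller acting on the tone separation. The log-domain error, the toy plant σ = c0/f2, and the gains below are illustrative choices for this sketch, not the Ziegler-Nichols values used in the experiments:

```python
import math

def pi_step(error, integral, kp, ti, dt=1.0):
    """One discrete PI update; returns (control output, new integral state)."""
    integral += error * dt
    return kp * (error + integral / ti), integral

# Toy plant: ranging std assumed inversely proportional to tone separation,
# sigma = c0 / f2, with c0 chosen so that f2 = 1 MHz meets the 5 mm target.
c0, target = 5e-3 * 1e6, 5e-3
f2, integ = 0.2e6, 0.0                        # start well below the needed bandwidth
for _ in range(100):
    sigma = c0 / f2                           # perception: estimated std (noise-free)
    err = math.log(sigma / target)            # log-domain error linearizes the plant
    u, integ = pi_step(err, integ, kp=0.5, ti=2.0)
    f2 = min(max(f2 * math.exp(u), 20e3), 3e6)   # actuate and clamp tone separation
```

After a damped transient, f2 settles at the bandwidth that meets the target; the clamp reproduces the saturation behavior seen when the required bandwidth exceeds the allowed maximum.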
The target ranging standard deviation was set to 5 mm in this experiment, which corresponds to a time-delay standard deviation of 33.3 ps. This value was chosen solely so that it clearly shows the operation of the proposed adaptive ranging approach. Similarly to Section 5.3.1.2, the parameters of the PI controller were tuned using the Ziegler-Nichols tuning method. The gain parameters for this architecture were as follows: Kp = 10,125 and Ti = 4.25. The transmitted power was adjusted electronically every 10 processing intervals (processing loop cycles) to evaluate the performance of the controller.

Figure 5.49: Performance of the adaptive ranging approach with modified perception: (a) ranging performance; (b) tone separation; (c) SNR. The SNR was modified by changing the transmitted power. The adaptive system was able to keep up with the change in SNR until a higher bandwidth than what was available was needed. From [12] © 2021 IEEE.

As can be seen in Fig. 5.49c, the SNR was initially 40 dB, and it was reduced steadily until reaching an SNR close to 4 dB. Fig. 5.49b shows all the bandwidths assigned by the controller to counteract the drop in SNR. The adaptive ranging performance is displayed in Fig. 5.49a, where it is compared to the target value of 5 mm. As expected, the bandwidth kept increasing with every decrease in SNR until reaching a point where the adaptive system could not keep up with the decrease in SNR. This point, which is shown after the 30th processing interval, is reached when the desired bandwidth exceeds the allowed value, which was set to 3 MHz. Between 60 and 70 processing intervals, the SNR reached a point where it was not possible for the matched filter to always detect the main lobe; as a result, the standard deviation alternated between low and high values. This can be fixed by using the disambiguation pulse as explained in Section 5.2. The same behavior was observed in Fig.
5.46, but at a different SNR threshold value. This is the case because a different waveform (with a different bandwidth) was used. With the results in Fig. 5.49, it is possible to conclude that the proposed adaptive ranging approach is feasible for the SNR values that can be attained in cooperative systems. Furthermore, real-time implementation is feasible, especially when the standard deviation is estimated using only one ranging pulse. Thus, the limiting factor for the controller processing time is in this case the PRI of the ranging pulse, which is the best-case scenario in terms of processing time.

CHAPTER 6

BEAMFORMING EXPERIMENTS

Coherent beamforming using centralized open-loop coherent distributed arrays is possible once the electrical states of the secondary nodes are synchronized to those of the primary node. In this dissertation, I investigate the coherent beamforming of CW signals, which alleviates the need to synchronize the time on the distributed nodes. Thus, only the frequency and phase need to be synchronized. Frequency synchronization is done using the adjunct self-mixing circuit proposed in Chapter 4. As for phase alignment, in this chapter I consider the cases where the angle of the primary node relative to a secondary node is given, and only the distance needs to be estimated in order to determine the desired correction to the relative phase shift on a given secondary node.

6.1 Effects of the Displacement of Nodes on the Beamforming Performance

As seen in Chapter 4, any change in distance between the synchronization antennas induces a relative phase shift on the secondary nodes due to the frequency locking circuit, which can be estimated using (4.19). In addition, a second phase shift ∆ϕc2 is obtained due to the change in distance between the coherent transmitters of both nodes.
This observed phase shift on the secondary nodes is estimated from

\Delta\varphi_{c2} = -\frac{2\pi \Delta d_T \sin(\theta_0)}{\lambda_c}, (6.1)

where ∆dT is the displacement distance separating the transmitter of the primary node and the transmitter of the secondary node of interest, and θ0 represents the beamsteering angle. Thus, the total phase shift on the secondary node due to the relative displacement is estimated using

\Delta\varphi_c = \Delta\varphi_{c1} + \Delta\varphi_{c2}. (6.2)

Once this phase shift is estimated, it can be subtracted from the transmitted phase of the secondary node of interest to align the phases of the nodes in the array.

Figure 6.1: Block diagram of the setup used to measure the performance of the distributed two-node beamforming experiment. From [14] © 2021 IEEE.

As seen from (4.19) and (6.1), or (6.2), the phase shift caused by the change in the separation distance of the array is maximized when θ0 = 90°, and the total phase shift is equal to 0 when θ0 = 270°. This is the case because at θ0 = 90°, sin(θ0) = 1, leading to ∆ϕc2 = ∆ϕc1 (assuming that ∆dIN = ∆dT). On the other hand, when θ0 = 270°, sin(θ0) = −1, leading to ∆ϕc2 = −∆ϕc1, which makes the two phase shifts add destructively (assuming that ∆dIN = ∆dT). Thus, the most challenging scenario is when θ0 = 90°, which is the case studied in this chapter. The phase shifts ∆ϕc1, ∆ϕc2, and ∆ϕc were tested using three experiments. The block diagram representing the hardware and connections used is shown in Fig. 6.1, while the experimental setup is shown in Fig. 6.2. In the first experiment, only the frequency synchronization antenna, which is represented by antenna M2, was moved to test the effect of ∆ϕc1 on the beamforming performance, or coherent gain.
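The correction of (6.1) and (6.2) can be sketched directly; ∆ϕc1 from (4.19) is taken as a given input here, since its expression appears in Chapter 4:

```python
import math

def total_phase_shift(dphi_c1, delta_d_t, theta0_deg, lambda_c):
    """Total correction of Eq. (6.2): dphi_c1 is the frequency-locking term
    from Eq. (4.19) (computed elsewhere), and dphi_c2 is the transmitter
    displacement term of Eq. (6.1)."""
    dphi_c2 = -2.0 * math.pi * delta_d_t * math.sin(math.radians(theta0_deg)) / lambda_c
    return dphi_c1 + dphi_c2

# Example: a 2.5 cm displacement at theta0 = 90 deg for the 1.5 GHz carrier
# (lambda_c = 0.2 m); dphi_c1 = 0 isolates the Eq. (6.1) term, giving -pi/4.
dphi = total_phase_shift(0.0, 0.025, 90.0, 0.2)
```

The returned value is the phase to subtract from the secondary node's transmitted phase; at θ0 = 90° one full wavelength of displacement (0.2 m at 1.5 GHz) corresponds to a full 2π of correction, matching the 0.2 m travel used in the experiments below.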
Afterwards, only the coherent transmitter, represented by M1, was displaced to evaluate the effect of ∆ϕc2 on the coherent gain. Finally, the entire primary node (antennas M1 and M2) was displaced in the third experiment to show the effect of both phase shifts together, represented by ∆ϕc. The antennas M1 and S1, from the primary and secondary nodes respectively, were used for the transmission of the coherent signals. The coherent signals were captured using T1, which fed an Ettus USRP X310 SDR. Antennas M1, S1, and T1 were log-periodic antennas with a frequency range of 1.35 GHz to 9.5 GHz. Figure 6.2: Image of the experimental setup in the semi-enclosed arch range showing the distributed two-node beamforming experiment. From [14] © 2021 IEEE. Standard-gain horn antennas with a frequency range of 3.85 GHz to 5.85 GHz were used for frequency synchronization; these antennas are represented by M2 and S2 in Fig. 6.1. In the three conducted experiments, the frequencies were selected as follows: f1,ref = 3.995 GHz, f2,ref = 4.005 GHz, and fc = 1.5 GHz. The signals transmitted at these frequencies were all CW, and a 100 kHz CW baseband signal was modulated onto the carrier fc, which was later recovered by the receiver to evaluate the beamforming performance. The beamformed signals at 1.5 GHz had a wavelength λc = 0.2 m; thus, the antennas on the primary node were moved by only 0.2 m in total to observe the full effect of wireless frequency synchronization on the beamforming performance. In the experiment depicted in Fig. 6.2, the secondary node was placed 5 m away from the target, while the primary node was placed between the two with an initial separation of 1.7 m from the secondary node. The antennas of the primary node were moved by a total of 0.2 m towards the target. Figure 6.3: Beamformed signals from two transmitting nodes at a 1.5 GHz carrier with a 100 kHz CW baseband signal.
The secondary node was wirelessly frequency synchronized to the primary node using the adjunct self-mixing circuit, and the received phases were manually adjusted to achieve the highest possible coherent gain at the receiver. The carrier was transmitted in these experiments with a 17 dBm output power, and the beamforming performance was captured in 2.5 cm increments. At first, the phases of the two received signals from the primary and secondary nodes were calibrated manually to perfectly align the phases at the starting positions. Once the phases were aligned, the wireless frequency synchronization approach allowed the received signals to add up coherently. A capture of the received signal is shown in Fig. 6.3. As can be seen, the amplitude of the received signal is not constant and exhibits ripples; this is caused by the instantaneous phase and frequency errors of the wireless frequency synchronization. As explained in Chapter 3, during the synchronization of the secondary node, the phase noise of the locked VCO is affected by the phase noise of the received synchronization signals, the PLL phase noise, and the phase noise of the VCO itself. The combination of all these noise sources produces instantaneous phase and frequency errors that are mainly dependent on the quality of the received synchronization signals and the hardware used. Better performance could be obtained with VCOs of higher phase stability. Figure 6.4: Experimental results showing the coherent summation of the primary and secondary node signals for the case where the two-tone transmitter M2 was displaced. The transmitted signals were connected to the oscilloscope via cable and the data was recorded for different displacement distances. From [14] © 2021 IEEE.
6.1.1 Frequency Locking Circuit Receiver Displacement The first experiment aimed to show that, using (4.19), it is possible to estimate the phase shift produced by the displacement of the frequency synchronization antennas. Thus, only the antennas S2 and M2 were displaced. The antennas T1, S1, and M1 were replaced by coaxial cables to ensure that when the primary node is moved, only the phase shift produced by the wireless frequency synchronization process is observed. M2 was moved towards the target over a total distance of 20 cm, which is equal to the wavelength of the coherent signal. The beamforming performance is shown in Fig. 6.4. In this experiment, the coherent signals from the primary and secondary nodes had a similar amplitude, which stayed constant throughout the experiment since the transmitters of these two nodes were replaced by coaxial cables. A 6 dB increase in power was observed when the phases were perfectly aligned. The phase shift caused by the motion of the primary node was estimated using (4.19), and the resultant phase shift was subtracted from the transmitted phase of the secondary node to correct for the anticipated phase shift. When no correction was performed, the performance degraded considerably, and a 10.5 dB decrease in power was observed in the area around 0.1 m of displacement, which corresponds to the distance where the phase shift is close to 180◦. However, the minimum power was observed at a 12.5 cm displacement distance instead of 10 cm, which is a result of multipath interference. The effects of multipath interference are also visible in the case of the updated secondary phase (where the correction was applied), where a degradation of the received power was observed at 10 cm. Figure 6.5: Coherent beamforming results showing the combined signal power of the two transmitted signals for multiple displacement distances of the primary node transmitter M1. From [14] © 2021 IEEE.
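The corrections applied in these experiments can be sketched numerically. Equation (4.19) is not reproduced in this chapter; the sketch below assumes it reduces to ∆ϕc1 = −2π∆dIN/λc for the frequency-locking path (an assumption on my part), which together with (6.1) and (6.2) gives the total correction. This is a minimal illustration, not the experiment code:

```python
import math

C = 3e8          # speed of light (m/s)
FC = 1.5e9       # coherent carrier used in these experiments (Hz)
LAM_C = C / FC   # wavelength, 0.2 m

def total_phase_shift(d_in, d_t, theta0_deg):
    """Total phase shift (6.2) on a secondary node.

    d_in: change in synchronization-antenna separation (assumed (4.19) form)
    d_t:  change in coherent-transmitter separation, used in (6.1)
    """
    dphi_c1 = -2 * math.pi * d_in / LAM_C                 # assumed form of (4.19)
    dphi_c2 = (-2 * math.pi * d_t
               * math.sin(math.radians(theta0_deg)) / LAM_C)  # (6.1)
    return dphi_c1 + dphi_c2

# End-fire (theta0 = 90 deg): both terms add, so a common 5 cm displacement
# already produces a pi phase error that must be subtracted on the secondary.
correction = total_phase_shift(0.05, 0.05, 90.0)   # about -pi
```

At θ0 = 270◦ the two terms cancel, which reproduces the behavior noted in Section 6.1.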
6.1.2 Primary Transmitter Displacement In contrast to Section 6.1.1, the phase shift produced by moving the transmitters is evaluated in this section. This was done by using log-periodic antennas for T1, S1, and M1. The phase shift produced by the motion of the synchronization antennas was suppressed by keeping S2 and M2 stationary while M1 was displaced. The array was transmitting in the end-fire configuration, which corresponds to selecting θ0 = 90◦. Figure 6.6: Coherent beamforming results showing the combined signal power of the two transmitted signals for multiple displacement distances of the primary node. From [14] © 2021 IEEE. The results of this experiment are shown in Fig. 6.5. As can be seen, the coherent gain was kept above 0.9 for most of the positions in the case where the phase was adjusted using (6.1). These results show that it is possible to track the phase shift produced by moving the transmitters using the conventional analysis for coherent distributed arrays when the adjunct self-mixing circuit is used for frequency synchronization. 6.1.3 Full Node Displacement Now that the phase shift produced by moving individual antennas on the primary node has been evaluated, this section shows the effect of moving all the antennas on the primary node simultaneously. Similar to Sections 6.1.1 and 6.1.2, the antennas of the primary node were moved by a total distance of 0.2 m, which corresponds to the wavelength of the transmitted signal. Beamforming was achieved at the beamsteering angle θ0 = 90◦, which corresponds to the most sensitive scenario. The estimated phase shift from (6.2) was used to correct the phase on the secondary node so that a coherent summation of signals could be obtained at the target. As can be seen from Fig. 6.6, when the phase correction was applied, it was possible to maintain a coherent gain above 0.9 for all the recorded values.
Since in this experiment both antennas of the primary node were displaced simultaneously, and the array was beamforming at θ0 = 90◦, two minima were obtained for the case where no correction was applied to the phase of the secondary node. This is the case because the contributions of both terms in (6.2) were additive. This experiment proves that coherent beamforming is achievable using the presented approach once the primary node is localized by the secondary node. In this experiment, localization is achieved by measuring only the separation between the two nodes, since the angle is given. 6.2 Coherent Beamforming using Frequency Locking Circuit and Ranging In this section, in addition to the experimental setup of Section 6.1, the secondary node is equipped with a radar so that the location of the primary node can be estimated in real time before any correction is applied to the phase transmitted from the secondary node. The primary node is equipped with a repeater so that it is seen by the secondary node as a point source. This is essential since accurate range estimates are required to estimate and compensate for the resulting phase shift, especially since two-tone waveforms (which are used for ranging in this section) are highly ambiguous. 6.2.1 Range Estimation The distance between the two nodes was estimated using the two-tone ranging waveform. The sampling frequency was set to 25 MSps, the ranging frequencies were selected as f1 = 500 kHz and f2 = 4.5 MHz, and the pulse repetition interval had a duration of 1 ms with a 50% duty cycle. It should be noted that with a sampling rate of 25 MSps a larger f2 can be chosen, which would improve the ranging standard deviation. However, it was found that with upper tones close to the Nyquist frequency, the discretely sampled matched filter output was too coarse to accurately detect small changes in position. By reducing the value of f2, the estimation ability is
By reducing the value of f2 , the estimation ability is 147 (a) (b) Figure 6.7: (a) Time domain representation of the localization waveform. (b) Frequency content of the localization waveform. reduced but results in a more finely sampled representation of the matched filter and therefore the bias is reduced. The time and frequency domain of the ranging waveform is shown in Fig. 6.7. The processing gain from using a matched filter is equivalent to the time-bandwidth product T BWn , where T is equal to 0.5 ms in this experiment, and BWn is the noise bandwidth which, since no filtering was preformed, was 25 MHz. The processing gain is equivalent to 41 dB. The average preprocessing SNR was on average equal to 30 dB, giving a total post processing SNR of 71 dB. Using these values, the CRLB of the time delay estimates is equal to σt = 9.88510−23 s2 , equivalent to a lower bound for the positional RMSE of σR = 1.5 mm. 148 𝑦 Primary Node 𝑑𝑇 2 𝜃 Frequency 𝑥 𝑑𝐼𝑁 Synchronization & Localization 𝑑𝑇 Signals 2 Secondary Node Figure 6.8: Schematic of the distributed array used in this study, consisting of a primary and secondary node, beamforming to an arbitrary angle. To evaluate the required localization accuracy, the coherent gain Gc is evaluated parametrically as a function of the localization standard deviation. The probability to achieve a coherent gain above a certain threshold P (Gc ≥ X) is evaluated using 50,000 Monte Carlo simulations where the random distances ∆dIN and ∆dT are equal, which is the case portrayed in Fig. 6.8. Similar settings to the Monte Carlo simulations in Chapter 2 were selected. Fig. 6.9 shows the simulated results where the standard deviation of the range estimates, σ∆d , is expressed as a function of the wavelength of the coherent signal. The threshold values X = 0.6, 0.7, 0.8, and 0.9 were investi- gated for a probabilities P (Gc ≥ X) = 90%. 
As Fig. 6.9 shows, and as discussed earlier in this chapter, higher ranging accuracy is needed at θ0 = 90◦ to prevent the coherent gain from degrading considerably. The minimum RMSE requirement on the range estimate to achieve P(Gc ≥ 0.9) = 90% at θ0 = 90◦ is 0.03 λc. Knowing that, the achievable coherent frequencies using these requirements are given by

fc ≤ 0.03 c / σx .   (6.3)

Figure 6.9: 50,000 Monte Carlo simulations showing various threshold values for achieving coherent gain. The probabilities were estimated as a function of angle and localization standard deviation. The strictest requirements for localization uncertainty appear at 90◦. Figure 6.10: Schematic showing the connections and setup of the distributed nodes and target. This setup was used for the wireless beamforming experiment. From [15] © 2021 IEEE. In this section, the coherent carrier frequency is selected as 1.5 GHz. Thus, to ensure that P(Gc ≥ 0.9) = 90%, the maximum allowed RMSE is 6 mm, which is looser than the CRLB of the selected ranging waveform at the SNR achievable in the proposed experiment (on average equal to 30 dB). Figure 6.11: Open-loop coherent distributed array setup showing the target, primary, and secondary nodes. The antennas in this setup were connected to multiple SDRs, an oscillator, and a frequency locking circuit. From [15] © 2021 IEEE. 6.2.2 Distributed Beamforming Experiment To demonstrate the proposed approach, the setup described by the block diagram in Fig. 6.10 was realized.
The antennas representing the two-node array are shown in Fig. 6.11. As can be seen, the steering angle θ0 was set to 90◦. The experiment was conducted in a semi-enclosed arch range to minimize unwanted reflections. Horn antennas were used for frequency and phase synchronization, while log-periodic antennas were employed for the transmission of the coherent signals. M1 and S1 had an operational frequency range of 2–18 GHz, while M3 and S3 had an operational frequency range of 3.95–5.85 GHz. The log-periodic antennas, represented by M2 and S2, had an operational frequency range of 1.35–9.5 GHz. The receiver was equipped with the horn antenna T1, which had an operational frequency range of 0.5–6 GHz. The MSOX92004A Infiniium oscilloscope represented the target and was used to monitor the received power for the various scenarios. The primary node was equipped with an active repeater that receives the ranging signals from the cooperative radar, amplifies them, and retransmits them in a second band. The use of this repeater allows the primary node to be represented as a point source, which is essential for this experiment since an accurate range measurement is necessary to eliminate the phase shifts in the array. By retransmitting the ranging signals in a second band, it is ensured that the secondary node observes only the primary node. In addition, the power amplification on this repeater reduces the propagation losses to 1/R² instead of the 1/R⁴ seen in traditional radars. The two carriers fR1 and fR2 that were used for ranging are at 5 GHz and 3 GHz respectively. At first, the ranging pulses were transmitted from S1 at a 3 GHz carrier. Once captured by M1, they were amplified and retransmitted from M3 at a 5 GHz carrier. Finally, the ranging pulses captured by S3 were used to estimate the range by employing matched filtering and interpolation (1000 points were interpolated between every two samples of the matched filter output).
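A minimal sketch of the matched-filter ranging step follows, using the two tones of Section 6.2.1. Quadratic peak interpolation stands in for the 1000-point interpolation described above, and the shortened pulse length is an illustrative choice; this is not the experiment code:

```python
import numpy as np

FS = 25e6                      # sample rate (Sps)
F1, F2 = 500e3, 4.5e6          # ranging tones from Section 6.2.1

def mf_delay(rx, ref, fs):
    """Delay estimate: matched filter plus quadratic peak interpolation,
    a simplified stand-in for the fine interpolation used in the text."""
    mf = np.correlate(rx, ref, mode="full")
    k = int(np.argmax(mf))
    y0, y1, y2 = mf[k - 1], mf[k], mf[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # parabolic vertex offset
    return (k + frac - (len(ref) - 1)) / fs

t = np.arange(2500) / FS                          # shortened 0.1 ms pulse
pulse = np.cos(2 * np.pi * F1 * t) + np.cos(2 * np.pi * F2 * t)
rx = np.concatenate([np.zeros(40), pulse])        # 40-sample delay = 1.6 us
tau = mf_delay(rx, pulse, FS)                     # close to 40 / FS
```

With noisy captures, the parabolic refinement recovers sub-sample delays that the coarse argmax alone cannot resolve, which is the role the dense interpolation plays in the experiment.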
The frequency of the secondary node was locked to the two incoming tones at fr1 and fr2, which were selected as 4.3 GHz and 4.31 GHz respectively. Before beamforming at the target, the system was calibrated to remove any initial static phase offset; this is a one-time calibration that is needed at system startup. Once calibrated, the coherent signal, a 1.5 GHz CW signal, was transmitted from the two nodes to the target, represented by the oscilloscope. The oscilloscope collected the signal for two cases, with and without phase correction, and the amplitude of the beamformed signals was compared to the amplitude of the individual signals. In both cases, frequency synchronization was achieved. The data was recorded in 2.5 cm increments, and for every location, 1,500 cycles of the beamformed signals were collected to estimate the amplitude. The secondary node was displaced by a total of 20 cm, which is equal to the wavelength of the coherent signal λc. Figure 6.12: Experimental results showing coherent beamforming for the case where the secondary node was displaced. In this scenario, the signals were transmitted using log-periodic antennas, and they were received using a horn antenna. From [15] © 2021 IEEE. The results are shown in Fig. 6.12. Similarly to the results in Section 6.1.3, when no phase correction was applied, two minima were observed when measuring the amplitude of the beamformed signals. In contrast, the amplitude of the beamformed signals was always above 90% of the maximum achievable amplitude when phase correction was applied. This experiment was the first to prove that an open-loop coherent distributed array is achievable. However, it represents the simplified scenario where the angle between the nodes is given and only the range needs to be estimated.
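The two minima seen without phase correction follow directly from the end-fire geometry: both terms of (6.2) contribute, so the uncorrected phase error sweeps through 180◦ twice over one wavelength of displacement. An idealized sketch (illustrative, not the recorded data):

```python
import numpy as np

LAM_C = 0.2                          # 1.5 GHz wavelength (m)
d = np.linspace(0.0, LAM_C, 201)     # displacement over one wavelength

# End-fire: synchronization and transmit-path phase shifts add, doubling
# the one-way value (see (6.2)), so the error is 4*pi*d/lambda_c.
phase_err = 4 * np.pi * d / LAM_C
uncorrected = np.abs(1 + np.exp(1j * phase_err)) / 2   # normalized two-node sum
corrected = np.ones_like(d)                            # ideal phase correction

# Nulls at lambda_c/4 and 3*lambda_c/4: the two minima of the experiment.
nulls = d[uncorrected < 1e-6]
```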
As a next step, localization techniques need to be investigated, since in general the angle of the primary node is not given. CHAPTER 7 TWO-DIMENSIONAL LOCALIZATION TECHNIQUE THAT ENABLES BEAMFORMING It was shown in Chapter 6 that open-loop beamforming is possible in the case where the secondary nodes only need to estimate the range of the primary node. However, in general, the primary node can be located anywhere around the secondary nodes; thus, in some cases it is important to estimate the location of the primary node rather than only its range. Two-dimensional localization of the primary node is addressed in this chapter. Localization is achieved by employing the TOA estimation approach [77, 149]. 7.1 TOA Using Only One Secondary Node To synchronize the phase of every secondary node relative to the phase of the primary node, every secondary node has to be able to localize the primary node. TDOA could be used to localize the primary node. This can be done by having the primary node transmit specific signals to all the secondary nodes at once using predefined time stamps. However, to achieve accurate localization, which is crucial to the operation of the distributed array, all the nodes would need to be perfectly time-aligned. In many cases, especially when accurate localization is needed, the time alignment requirements surpass what is experimentally achievable. This can be alleviated by relying on TOA for localization rather than TDOA. TOA can be performed by first having all the secondary nodes transmit ranging signals towards the primary node; once these signals are received, they are amplified and retransmitted back on a second channel, similarly to how ranging was achieved in the previous chapters. This was tested using the architecture in Fig. 7.1. When testing the localization approach, the secondary node(s) were frequency locked using coaxial cables; however, wireless frequency synchronization can be realized as explained in the earlier chapters.
The advantage of using the approach in Fig. 7.1 is that it is scalable as long as the ranging waveform is scalable. Figure 7.1: Diagram showing the proposed TOA experiment, where the secondary node localizes the primary node using TTSFW signals. From [1] © 2021 IEEE. Since in this approach it is possible to choose any desired ranging waveform, one can use TTSFW to achieve scalable, high accuracy range estimates, which is the case in this experiment. In addition, this approach does not require accurate time alignment between the primary and secondary nodes. Furthermore, it is in general better to use multilateration algorithms for localization than to estimate the range and angle of the primary node separately, mainly because high-accuracy angle estimation (below 0.1◦ error) is generally hard to achieve. Regarding the multilateration approach, the approximate maximum likelihood (AML) estimator was selected to find the location of the primary node [150]. In TOA, the time of arrival of the ranging pulse from the primary node (or, equivalently, the range of the primary node) at the receiving anchors (receivers in our case) is used to form circles with radii equal to the estimated ranges. Ideally, the primary node would be at the point of intersection. However, due to noise, the radii have range offsets, so the circles rarely intersect at a single point. This is resolved by employing a maximum likelihood (ML) estimator. The ML estimator evaluates the variance of the time delay estimates at every receiver and then selects the best estimate for the point of intersection.
As explained in [150], the AML estimator solves the following system to determine the location of the primary node:

[ Σi gi xi   Σi gi yi ] [ x ]          [ Σi gi (s + ki − δi²) ]
[ Σi hi xi   Σi hi yi ] [ y ]  = (1/2) [ Σi hi (s + ki − δi²) ] ,   (7.1)

with

gi = (x − xi) / (ri (ri + δi)),   (7.2)
hi = (y − yi) / (ri (ri + δi)),   (7.3)

where (xi, yi) are the known locations of the receivers, (x, y) is the location of the transmitter, δi is the estimated distance between the target and receiver i, ri is the true distance between the target and receiver i, s = x² + y², and ki = xi² + yi². Equation (7.1) is solved iteratively while being treated as a set of linear equations. The CRLB of TOA is derived in [83] and can be calculated from the inverse of the Fisher information matrix of the position space, Jpos. For a three-anchor system, Jpos is given by

         3
Jpos =   Σ  (1 / (σi² ri²)) [ (x − xi)²           (x − xi)(y − yi) ]
        i=1                 [ (x − xi)(y − yi)    (y − yi)²        ] ,   (7.4)

where σi² is the ranging variance at sensor i, which can be minimized by transmitting high-power signals, wideband signals, accurate ranging waveforms, and signals with long durations. Using three anchors allows the transmitter to be localized in two dimensions. The first element of the CRLB matrix, obtained by inverting (7.4), is the lower bound on the variance of x, while the last element is the lower bound on the variance of y. The off-diagonal elements represent the lower bound on the covariance of x and y. Thus, by knowing the variance of the range estimates it is possible to calculate the CRLB for the localization algorithm. 7.2 Experimental Results In the setup shown in Fig. 7.2, TX1 was set at the origin (0,0), and the (xi, yi) coordinates in meters for RX1, RX2, and RX3 were (0.345, 0.384), (-0.408, -0.114), and (-0.737, 0.384). In our case, the estimated δi is the distance between the primary node and the center of TX1 and RXi on the secondary node.
This leads to an adjusted (or virtual) location for the anchors, which in our case would be (0.173, 0.192), (-0.204, -0.057), and (-0.368, 0.192). This experiment used Ettus Research X310 SDRs, each equipped with two UBX 160 daughterboards. Table 7.1: Standard deviation (STD) of the estimated locations from experimental measurements versus √CRLB in meters. From [1] © 2021 IEEE.

(x, y)          STD (x, y)          √CRLB (x, y)
(0, 3.6)        (0.0495, 0.0058)    (0.048, 0.004)
(-0.15, 3.6)    (0.05, 0.0035)      (0.048, 0.0032)
(0.15, 3.6)     (0.052, 0.0076)     (0.048, 0.0055)
(-0.12, 3.75)   (0.051, 0.0051)     (0.05, 0.0033)

Three horn antennas with 15 dBi gain were connected to TX1, TX5, and RX6, while RX1, RX2, and RX3 were connected to three 10 dBi gain horn antennas. The TTSFW ranging waveform had N = 2, f1 = 500 kHz, BW = 9 MHz, T = 49.9 µs, and Tr = 24.95 µs. Throughout the experiment, the average ranging standard deviations of the three receivers were σ1 = 6.08 mm, σ2 = 6.1 mm, and σ3 = 4.58 mm. For these values, the CRLBs for x and y can be computed using (7.4), and the results are shown in Fig. 7.3. The localization CRLB values can be used to analyze the achievable coherent gain for distributed phased arrays. Ten thousand Monte Carlo simulations were evaluated for arrays of N = 2 and 6 nodes to analyze the achievable coherent gain for a 1 GHz coherent frequency. The simulation results are shown in Fig. 7.4. The average achievable coherent gain was evaluated based on the spread area of the nodes. The maximum spread of the secondary nodes was set to ±5 m in the x and y coordinates, where the primary node was designated as the reference point and the secondary nodes were spread uniformly around it. As can be seen, a lower average coherent gain is achievable when the average spacing between the primary and secondary nodes increases. The comparison between the values in Fig. 7.3 and the experimental results is shown in Table 7.1 for four points.
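The AML iteration of (7.1)–(7.3) and the CRLB of (7.4) can be sketched together. The linear least-squares initialization and the synthetic three-anchor geometry used below are illustrative choices of mine, not the experiment's values:

```python
import numpy as np

def aml_localize(anchors, delta, n_iter=5):
    """Iterative AML solution of (7.1)-(7.3) for a 2-D TOA fix.

    anchors: (N, 2) known receiver positions (x_i, y_i)
    delta:   (N,) estimated ranges delta_i
    """
    anchors = np.asarray(anchors, float)
    delta = np.asarray(delta, float)
    k = np.sum(anchors**2, axis=1)                  # k_i = x_i^2 + y_i^2
    # Linear least-squares initialization (subtract the anchor-0 equation).
    A0 = 2.0 * (anchors[1:] - anchors[0])
    b0 = (k[1:] - k[0]) - (delta[1:]**2 - delta[0]**2)
    p = np.linalg.lstsq(A0, b0, rcond=None)[0]
    for _ in range(n_iter):
        r = np.linalg.norm(anchors - p, axis=1)     # current estimates of r_i
        w = 1.0 / (r * (r + delta))                 # common factor of g_i, h_i
        g = (p[0] - anchors[:, 0]) * w              # (7.2)
        h = (p[1] - anchors[:, 1]) * w              # (7.3)
        rhs = p @ p + k - delta**2                  # s + k_i - delta_i^2
        A = np.array([[g @ anchors[:, 0], g @ anchors[:, 1]],
                      [h @ anchors[:, 0], h @ anchors[:, 1]]])
        b = 0.5 * np.array([g @ rhs, h @ rhs])
        p = np.linalg.solve(A, b)                   # (7.1)
    return p

def toa_crlb(anchors, sigmas, target):
    """CRLB covariance: inverse of the Fisher information matrix (7.4)."""
    J = np.zeros((2, 2))
    for (xi, yi), si in zip(anchors, sigmas):
        dx, dy = target[0] - xi, target[1] - yi
        r2 = dx * dx + dy * dy
        J += np.array([[dx * dx, dx * dy],
                       [dx * dy, dy * dy]]) / (si**2 * r2)
    return np.linalg.inv(J)
```

With noiseless ranges the linear initialization is already exact and the iteration merely confirms it; with noisy ranges the iteration refines the fix, and the diagonal of the returned CRLB matrix lower-bounds the achievable x and y variances.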
Since directive horn antennas were used in this experiment, the evaluated locations were all broadside to the antennas of the primary node. Figure 7.2: Experimental setup. From [1] © 2021 IEEE. Figure 7.3: Square root of the summation of the lower bounds for estimating the x and y coordinates of the target at multiple locations. From [1] © 2021 IEEE. Figure 7.4: Ten thousand Monte Carlo simulations showing the average achievable coherent gain for 1 GHz coherent signals using the CRLB results for the localization estimates. Arrays of 2 and 6 nodes were analyzed. From [1] © 2021 IEEE. 7.3 Discussion The standard deviation values in Table 7.1 are in general very low, which is desirable for the coherent operation of the distributed antenna array. However, one of the main drawbacks of the experimental setup is the use of horn antennas instead of omnidirectional antennas, such as dipole antennas. Horn antennas are directive, and their phase center changes depending on the receive angle. Thus, whenever the transmit and receive antennas are not perfectly aligned, the gain drastically drops and the location of the phase center differs slightly, which leads to biases in the localization estimates. On the other hand, omnidirectional antennas suffer from low gain, which would impact the localization accuracy and increase the standard deviation by at least an order of magnitude. In addition, omnidirectional antennas are highly prone to multipath interference, leading to highly biased range estimates. Throughout the localization experiment, biases on the order of centimeters were observed. These biases were not only caused by the shift in phase centers, which should have only a small effect for the chosen positions of the primary node; they were also a result of multipath interference, which is in general the major contributor to bias in range estimates.
Thus, to enable accurate, unbiased localization, multipath interference needs to be minimized. Multipath interference is not addressed in this dissertation, as the targeted application for the presented wireless frequency and phase synchronization approaches is beamforming using a swarm of drones or satellites, where multipath interference is minimal. CHAPTER 8 CONCLUSION Open-loop beamforming is an emerging and fairly unexplored area. In this work, I initially focused on determining some of the system requirements that enable open-loop operation. Since these systems require inter-node coordination, the electrical states need to be synchronized without relying on external communications. Phase alignment was one of the major focuses of this work, where every node in the array needs to adjust its phase relative to an internal reference in the array to enable coherent summation of the transmitted signals in the far field. Centralized architectures were of interest, in which the secondary nodes synchronize their parameters relative to a reference, the primary node. Generally, phase alignment is possible in such architectures if all the secondary nodes can localize the primary node with localization errors orders of magnitude smaller than the transmitted wavelength. The requirements for range/angle estimation or localization were investigated and clearly linked to the achievable beamforming performance, using the coherent gain as the metric of interest. In addition to the requirements for localization, I investigated the requirements for reliable frequency locking in nodes that can take advantage of PLLs. The pulse repetition intervals of the frequency synchronization signals, the quality of the VCOs and PLLs, and the type of transmitted signals were analyzed. The bound on the achievable coherent gain was evaluated in terms of the frequency synchronization parameters.
I showed the benefits of employing continuous frequency/phase locking instead of pulsed synchronization signals. Once the requirements for phase and frequency synchronization were established, accurate range estimation techniques were explored. Two-tone ranging pulses were used for accurate range estimation. A time-of-flight approach was used for ranging, and a disambiguation pulse was added to resolve the ambiguity of the matched filter output. Adaptive ranging that relies on PI controllers was introduced and tested in indoor and outdoor settings. Adaptive ranging enables high ranging accuracies that meet the system requirements, while minimizing the spectral footprint and reducing interference on surrounding communications bands. I demonstrated a novel frequency synchronization technique that relies on phase locking using an adjunct self-mixing circuit and a PLL. Frequency synchronization was achieved using a centralized approach, where the primary node transmits frequency synchronization signals consisting of two-tone CW signals with a tone separation of 10 MHz, and the secondary nodes lock their frequencies to the incoming signals. The 10 MHz output was demodulated using the adjunct self-mixing circuit and fed to an internal PLL on every secondary node. The phase shift due to frequency locking was tracked and accounted for while aligning the transmitted coherent signals. I demonstrated and implemented the first open-loop beamforming at 1.5 GHz. The experiment was implemented using one primary node and one secondary node. The secondary node locked its frequency to that of the primary node and adjusted its phase based on the range between the two nodes. Beamforming was achieved using CW signals, which alleviates the need for time synchronization, and it was performed end-fire to the array, which is the most challenging direction for beamforming as it requires the highest ranging precision.
In addition, I investigated a localization technique that enables locating the primary node in the case where both its range and angle are unknown. This dissertation lays the groundwork for open-loop distributed phased arrays by addressing the fundamental synchronization requirements for phase and frequency. Open-loop beamforming is an emerging area of interest as it enables the beamforming of not only communications signals but also sensing signals, which is a luxury that conventional coherent distributed arrays do not possess. The target of this dissertation is to increase the performance and transmit power of mobile and dynamic arrays such as drone swarms or satellite constellations. Using the presented synchronization techniques, such systems are now able to perform long range communications and sensing by combining the individually transmitted power from all the elements in a coherent manner without relying on any external systems. APPENDICES APPENDIX A MATLAB RANGING REQUIREMENTS FOR COHERENT DISTRIBUTED ARRAYS

clear
% close all
c = 3e8;
f = 0.3e9;
wavelength = c/f;
Fs = f*10;
t = 0:1/Fs:1e-7;
Ovar = (wavelength*0.1*pi/180)^2;
% Ovar needs to be set to zero in the case where only ranging errors are
% considered and not angle estimation errors
Rvar = (wavelength*0.01)^2;
trials = 10000;
% number of secondary nodes (4 options):
secondaryNBR = [1 3 9 99];
% stdRanging_0 = logspace(1, 5, 50)/5e5;
% stdRanging_0 = linspace(0, 0.3, 150);
% maximum x and y coordinates:
max_xy = linspace(1, 100, 100);
prevjj = 1;
fprintf('Secondary option, and max xy option: 0, 0\n');
for ii = 1:length(secondaryNBR)
    for jj = 1:length(max_xy)
        fprintf([repmat('\b', 1, 3 + max(ceil(log10(abs(ii))), 1) + ...
            max(ceil(log10(abs(jj))) + (prevjj == length(max_xy))*(ceil( ...
            log10(abs(prevjj))) + 1), 1)), num2str(ii), ', ', num2str(jj), '\n']);
        prevjj = jj;
        for tt = 1:trials
            for kk = 1:secondaryNBR(ii)
                % remove the cases where the secondary nodes are within
                % 0.5 m in x and y
                tst = rand;
                if tst < 0.5
                    x_actual = unifrnd(0.5, max_xy(jj));
                    y_actual = unifrnd(0.5, max_xy(jj));
                else
                    x_actual = unifrnd(-max_xy(jj), -0.5);
                    y_actual = unifrnd(-max_xy(jj), -0.5);
                end
                x = x_actual;
                y = y_actual;
                % transforming x and y coordinates to range and angle
                R = sqrt(x^2 + y^2);
                O = atan2(y, x);   % radians, consistent with sin/cos below
                % variable transformation for the variance
                Xvar = Rvar*cos(O)^2 + Ovar*R^2*sin(O)^2;
                Yvar = Rvar*sin(O)^2 + Ovar*R^2*cos(O)^2;
                XYcov = (Rvar - Ovar*R^2)*sin(O)*cos(O);
                x_generated = normrnd(x_actual, sqrt(Xvar), 1);
                y_generated = normrnd(y_actual, sqrt(Yvar), 1);
                sepDist_actual = sqrt((x_actual - 0)^2 + (y_actual - 0)^2);
                sepDist_generated = sqrt((x_generated - 0)^2 + (y_generated - 0)^2);
                secondaryOrientation_actual = atan(y_actual/x_actual);
                secondaryOrientation_generated = atan(y_generated/ ...
                    x_generated);
                orientAngle = 2*pi*rand;
                % Uncomment in the case where no frequency locking circuit
                % is considered
                secondary_sig(1:length(t), kk) = sin(2*pi*f*t + 2*pi*(0 ...
                    + sin(orientAngle + secondaryOrientation_actual))* ...
                    sepDist_actual/wavelength - (2*pi*(0 + ...
                    sin(orientAngle + secondaryOrientation_generated)) ...
                    *sepDist_generated/wavelength));
                % Uncomment in the case where frequency locking circuit
                % is considered
                secondary_sig_fl(1:length(t), kk) = sin(2*pi*f*t - 2*pi ...
                    *(1 + sin(orientAngle + secondaryOrientation_actual)) ...
                    *sepDist_actual/wavelength + (2*pi*(1 + ...
                    sin(orientAngle + secondaryOrientation_generated)) ...
* sepDist_generated / wavelength ) ) ; end % s i g n a l from p r i m a r y node : Master_sig = s i n (2* pi * f * t ) ; % one waveform i s s e l e c t e d by : M a s t e r _ s i g ( 1 , : ) Summ_rmsAmp ( t t ) = ( rms ( sum ( s e c o n d a r y _ s i g ( : , 1 : . . . secondaryNBR ( i i ) ) , 2 ) + M a s t e r _ s i g ’ ) ) ^ 2 ; Summ_rmsAmp_fl ( t t ) = ( rms ( sum ( s e c o n d a r y _ s i g _ f l ( : , 1 : . . . secondaryNBR ( i i ) ) , 2 ) + M a s t e r _ s i g ’ ) ) ^ 2 ; 166 end % e s t i m a t i o n o f t h e beamformed s i g n a l a m p l i t u d e and c o m p a r i s o n w i t h % t h e 90% t h r e s h o l d r e f e r e n c e _ r m s A m p = 0 . 9 * ( rms ( ( 1 + secondaryNBR ( i i ) ) . * M a s t e r _ s i g ) ) ^ 2 ; P _ 9 0 C o h e r e n c e ( j j , i i ) = sum ( Summ_rmsAmp >= . . . r e f e r e n c e _ r m s A m p ) / l e n g t h ( Summ_rmsAmp ) ; P _ 9 0 C o h e r e n c e _ f l ( j j , i i ) = sum ( Summ_rmsAmp_fl >= . . . r e f e r e n c e _ r m s A m p ) / l e n g t h ( Summ_rmsAmp_fl ) ; C o h e r e n c e ( j j , i i ) = mean ( Summ_rmsAmp . / . . . ( rms ( ( 1 + secondaryNBR ( i i ) ) . * M a s t e r _ s i g ) ) ^ 2 ) ; C o h e r e n c e _ f l ( j j , i i ) = mean ( Summ_rmsAmp_fl . / . . . ( rms ( ( 1 + secondaryNBR ( i i ) ) . * M a s t e r _ s i g ) ) ^ 2 ) ; end end % plotting : figure (1) hold off p l o t ( max_xy , smooth ( P _ 9 0 C o h e r e n c e ( : , 1 ) ) ) h o l d on p l o t ( max_xy , smooth ( P _ 9 0 C o h e r e n c e ( : , 2 ) ) ) p l o t ( max_xy , smooth ( P _ 9 0 C o h e r e n c e ( : , 3 ) ) ) p l o t ( max_xy , smooth ( P _ 9 0 C o h e r e n c e ( : , 4 ) ) ) l e g e n d ( [ i n t 2 s t r ( secondaryNBR ( 1 ) + 1 ) ’ n o d e s no FLC ’ ] , ... [ i n t 2 s t r ( secondaryNBR ( 2 ) + 1 ) ’ nodes ’ ] , [ i n t 2 s t r ( secondaryNBR ( 3 ) + 1 ) ... 
’ nodes ’ ] , [ i n t 2 s t r ( secondaryNBR ( 4 ) + 1 ) ’ nodes ’ ] , ’ L o c a t i o n ’ , ’ s o u t h e a s t ’ ) g r i d on x l a b e l ( ’ Maximum \ pm x and \ pm y c o o r d i n a t e s o f s e c o n d a r y n o d e s (m) ’ ) y l a b e l ( ’ P ( G_c > 0 . 9 ) ’ ) ylim ( [ 0 , 1 ] ) 167 figure (2) hold off p l o t ( max_xy , smooth ( P _ 9 0 C o h e r e n c e _ f l ( : , 1 ) ) ) h o l d on p l o t ( max_xy , smooth ( P _ 9 0 C o h e r e n c e _ f l ( : , 2 ) ) ) p l o t ( max_xy , smooth ( P _ 9 0 C o h e r e n c e _ f l ( : , 3 ) ) ) p l o t ( max_xy , smooth ( P _ 9 0 C o h e r e n c e _ f l ( : , 4 ) ) ) l e g e n d ( [ i n t 2 s t r ( secondaryNBR ( 1 ) + 1 ) ’ nodes ’ ] , [ i n t 2 s t r ( secondaryNBR ( 2 ) + 1 ) . . . ’ nodes ’ ] , [ i n t 2 s t r ( secondaryNBR ( 3 ) + 1 ) ’ nodes ’ ] , [ i n t 2 s t r ( . . . secondaryNBR ( 4 ) + 1 ) ’ nodes ’ ] , ’ L o c a t i o n ’ , ’ s o u t h e a s t ’ ) g r i d on x l a b e l ( ’ Maximum \ pm x and \ pm y c o o r d i n a t e s o f s e c o n d a r y n o d e s (m) ’ ) y l a b e l ( ’ P ( G_c > 0 . 9 ) ’ ) ylim ( [ 0 , 1 ] ) figure (3) hold off p l o t ( max_xy , smooth ( C o h e r e n c e ( : , 1 ) ) ) h o l d on p l o t ( max_xy , smooth ( C o h e r e n c e ( : , 2 ) ) ) p l o t ( max_xy , smooth ( C o h e r e n c e ( : , 3 ) ) ) p l o t ( max_xy , smooth ( C o h e r e n c e ( : , 4 ) ) ) l e g e n d ( [ i n t 2 s t r ( secondaryNBR ( 1 ) + 1 ) ’ n o d e s no FLC ’ ] , [ i n t 2 s t r ( . . . secondaryNBR ( 2 ) + 1 ) ’ nodes ’ ] , [ i n t 2 s t r ( secondaryNBR ( 3 ) + 1 ) ’ nodes ’ ] . . . 
, [ i n t 2 s t r ( secondaryNBR ( 4 ) + 1 ) ’ nodes ’ ] , ’ L o c a t i o n ’ , ’ s o u t h e a s t ’ ) g r i d on x l a b e l ( ’ Maximum \ pm x and \ pm y c o o r d i n a t e s o f s e c o n d a r y n o d e s (m) ’ ) y l a b e l ( ’ $ \ b a r {G_c}$ ’ , ’ I n t e r p r e t e r ’ , ’ l a t e x ’ ) ylim ( [ 0 , 1 ] ) figure (4) 168 hold off p l o t ( max_xy , smooth ( C o h e r e n c e _ f l ( : , 1 ) ) ) h o l d on p l o t ( max_xy , smooth ( C o h e r e n c e _ f l ( : , 2 ) ) ) p l o t ( max_xy , smooth ( C o h e r e n c e _ f l ( : , 3 ) ) ) p l o t ( max_xy , smooth ( C o h e r e n c e _ f l ( : , 4 ) ) ) l e g e n d ( [ i n t 2 s t r ( secondaryNBR ( 1 ) + 1 ) ’ nodes ’ ] , [ i n t 2 s t r ( secondaryNBR ( 2 ) + 1 ) . . . ’ nodes ’ ] , [ i n t 2 s t r ( secondaryNBR ( 3 ) + 1 ) ’ nodes ’ ] , [ i n t 2 s t r ( . . . secondaryNBR ( 4 ) + 1 ) ’ nodes ’ ] , ’ L o c a t i o n ’ , ’ s o u t h e a s t ’ ) g r i d on x l a b e l ( ’ Maximum \ pm x and \ pm y c o o r d i n a t e s o f s e c o n d a r y n o d e s (m) ’ ) y l a b e l ( ’ $ \ b a r {G_c}$ ’ , ’ I n t e r p r e t e r ’ , ’ l a t e x ’ ) ylim ( [ 0 , 1 ] ) %% t e s t i n t h e f o r m u l f o r summation o f s i n e waves clear % close all c = 3 e8 ; f = 0 . 3 e9 ; wavelength = c / f ; Fs = f * 1 0 ; t = 0 : 1 / Fs : 1 e − 7 ; % OvarFactor = 0 . 1 ; OvarFactor = [2 , 1 , 0.7 , 0.5 , 0.3 , 0.2 , 0.1 , 0.07 , 0.05 , 0.03 , 0.025 , ... 0.02 , 0.01]; % RvarFactor = 0.01; RvarFactor = [0.04 , 0.01 , 0.001]; OvarVec = ( w a v e l e n g t h . * O v a r F a c t o r . * p i . / 1 8 0 ) . ^ 2 ; RvarVec = ( w a v e l e n g t h . * R v a r F a c t o r ) . 
^ 2 ; 169 t r i a l s = 5000; % t r i a l s = 100; % secondaryNBR = [ 1 3 9 9 9 ] ; secondaryNBR = 9 ; % s t d R a n g i n g _ 0 = l o g s p a c e ( 1 , 5 , 5 0 ) / 5 e5 ; % stdRanging_0 = l i n s p a c e (0 , 0.3 , 150); max_xy = l i n s p a c e ( 1 , 1 0 0 , 1 0 0 ) ; f o r vv = 1 : l e n g t h ( RvarVec ) Rvar = RvarVec ( vv ) ; f o r uu = 1 : l e n g t h ( OvarVec ) Ovar = OvarVec ( uu ) ; prevjj = 1; f p r i n t f ( ’ S e c o n d a r y o p t i o n , and max xy o p t i o n : 0 , 0 \ n ’ ) ; f o r i i = 1 : l e n g t h ( secondaryNBR ) f o r j j = 1 : l e n g t h ( max_xy ) f p r i n t f ( [ r e p m a t ( ’ \ b ’ , 1 , 3 + max ( c e i l ( l o g 1 0 ( a b s ( vv ) ) ) , 1 ) + ... max ( c e i l ( l o g 1 0 ( a b s ( uu ) ) ) + ( p r e v j j == l e n g t h ( max_xy ) ) * . . . ( c e i l ( l o g 1 0 ( a b s ( p r e v j j ) ) ) + 1 ) , 1 ) ) , n u m 2 s t r ( vv ) , ’, ’ ,... n u m 2 s t r ( uu ) , ’\n ’ ] ) ; prevjj = jj ; for t t = 1 : t r i a l s f o r kk = 1 : secondaryNBR ( i i ) t s t = rand ; if t s t < 0.5 x _ a c t u a l = u n i f r n d ( 0 . 5 , max_xy ( j j ) ) ; y _ a c t u a l = u n i f r n d ( 0 . 5 , max_xy ( j j ) ) ; else 170 x _ a c t u a l = u n i f r n d ( − max_xy ( j j ) , −0.5); y _ a c t u a l = u n i f r n d ( − max_xy ( j j ) , −0.5); end x = x_actual ; y = y_actual ; O = atan2d (y , x ) ; R = s q r t ( x ^2 + y ^ 2 ) ; Xvar = Rvar * c o s (O) ^ 2 + Ovar * R^2 * s i n (O ) ^ 2 ; Yvar = Rvar * s i n (O) ^ 2 + Ovar * R^2 * c o s (O ) ^ 2 ; XYcov = ( Rvar − Ovar * R^ 2 ) * s i n (O) * c o s (O ) ; x _ g e n e r a t e d = normrnd ( x _ a c t u a l , s q r t ( Xvar ) , 1 ) ; y _ g e n e r a t e d = normrnd ( y _ a c t u a l , s q r t ( Yvar ) , 1 ) ; s e p D i s t _ a c t u a l = s q r t ( ( x _ a c t u a l −0)^2 + ( y _ a c t u a l − 0 ) ^ 2 ) ; s e p D i s t _ g e n e r a t e d = s q r t ( ( x_generated − 0)^2 + . . . ( y_generated − 0)^2); secondaryOrientation_actual = atan ( y_actual / x_actual ) ; secondaryOrientation_generated = . . . 
atan ( y_generated / x_generated ) ; o r i e n t A n g l e = 2* p i * r a n d ; % Uncomment i n t h e c a s e where no f r e q u e n c y l o c k i n g % c i r c u i t is considered s e c o n d a r y _ s i g ( 1 : l e n g t h ( t ) , kk ) = s i n ( 2 * p i * f * t + 2 * p i * . . . (0 + sin ( orientAngle + s e c o n d a r y O r i e n t a t i o n _ a c t u a l ) ) . . . * s e p D i s t _ a c t u a l / wavelength − (2* pi * (0 + s i n ( . . . 171 orientAngle + secondaryOrientation_generated )) * . . . sepDist_generated / wavelength ) ) ; % Uncomment i n t h e c a s e where f r e q u e n c y l o c k i n g c i r c u i t i s c o n s i d e r e d s e c o n d a r y _ s i g _ f l ( 1 : l e n g t h ( t ) , kk ) = s i n ( 2 * p i * f * t ... − 2* p i * ( 1 + s i n ( o r i e n t A n g l e + . . . secondaryOrientation_actual )) * sepDist_actual / ... wavelength + (2* pi * (1 + s i n ( orientAngle + . . . secondaryOrientation_generated )) * sepDist_generated . . . / wavelength ) ) ; end Master_sig = s i n (2* pi * f * t ) ; % one waveform i s s e l e c t e d by : M a s t e r _ s i g ( 1 , : ) Summ_rmsAmp ( t t ) = ( rms ( sum ( s e c o n d a r y _ s i g ( : , 1 : . . . secondaryNBR ( i i ) ) , 2 ) + M a s t e r _ s i g ’ ) ) ^ 2 ; Summ_rmsAmp_fl ( t t ) = ( rms ( sum ( s e c o n d a r y _ s i g _ f l ( : , 1 : . . . secondaryNBR ( i i ) ) , 2 ) + M a s t e r _ s i g ’ ) ) ^ 2 ; end r e f e r e n c e _ r m s A m p = 0 . 9 * ( rms ( ( 1 + secondaryNBR ( i i ) ) . * . . . M a s t e r _ s i g ) ) ^ 2 ; % 90% c o h e r e n c e g a i n P _ 9 0 C o h e r e n c e ( j j , i i ) = sum ( Summ_rmsAmp >= . . . r e f e r e n c e _ r m s A m p ) / l e n g t h ( Summ_rmsAmp ) ; P _ 9 0 C o h e r e n c e _ f l ( j j , i i ) = sum ( Summ_rmsAmp_fl >= . . . r e f e r e n c e _ r m s A m p ) / l e n g t h ( Summ_rmsAmp_fl ) ; C o h e r e n c e ( j j , i i ) = mean ( Summ_rmsAmp . / . . . ( rms ( ( 1 + secondaryNBR ( i i ) ) . 
* M a s t e r _ s i g ) ) ^ 2 ) ; C o h e r e n c e _ f l ( j j , i i ) = mean ( Summ_rmsAmp_fl . / . . . ( rms ( ( 1 + secondaryNBR ( i i ) ) . * M a s t e r _ s i g ) ) ^ 2 ) ; 172 end end l o c = f i n d ( smooth ( P _ 9 0 C o h e r e n c e _ f l ( : , 1 ) ) > 0 . 9 == 1 , 1 , ’ l a s t ’ ) ; i f isempty ( loc ) loc = 0; end l o c a t i o n O f 9 0 ( vv , uu ) = l o c ; end end locationOf90y = locationOf90 ; locationOf90 ( locationOf90 ==100)=100.1; figure (5) hold off s e m i l o g y ( l o c a t i o n O f 9 0 ( 1 , : ) , ( O v a r F a c t o r ) , ’ LineWidth ’ , 2 ) h o l d on s e m i l o g y ( l o c a t i o n O f 9 0 ( 2 , : ) , ( O v a r F a c t o r ) , ’ LineWidth ’ , 2 ) s e m i l o g y ( l o c a t i o n O f 9 0 ( 3 , : ) , ( O v a r F a c t o r ) , ’ LineWidth ’ , 2 ) legend ( . . . [ ’ $ \ sigma_R$ = ’ n u m 2 s t r ( R v a r F a c t o r ( 1 ) ) ’ $ \ lambda$ ’ ] , [ ’ $ \ sigma_R$ = ’... n u m 2 s t r ( R v a r F a c t o r ( 2 ) ) ’ $ \ lambda$ ’ ] , [ ’ $ \ sigma_R$ = ’ n u m 2 s t r ( . . . R v a r F a c t o r ( 3 ) ) ’ $ \ lambda$ ’ ] , ’ I n t e r p r e t e r ’ , ’ l a t e x ’ , ’ L o c a t i o n ’ , ’ n o r t h e a s t ’ ) g r i d on x l a b e l ( ’ Maximum \ pm x and \ pm y c o o r d i n a t e s o f s e c o n d a r y n o d e s (m) ’ ) y l a b e l ( ’ $ \ s i g m a _ \ t h e t a ( ^ { \ c i r c } . \ lambda ) $ ’ , ’ I n t e r p r e t e r ’ , ’ l a t e x ’ ) y l i m ( [ min ( O v a r F a c t o r ) max ( O v a r F a c t o r ) ] ) x l i m ( [ min ( max_xy ) max ( max_xy ) ] ) 173 APPENDIX B MATLAB FREQUENCY SYNCHRONIZATION REQUIREMENTS FOR NODES WITH PLL clear t r i a l s = 1000; c = 3 e8 ; c a r r i e r s = [ 1 e9 , 5 e9 , 10 e9 ] ; % c a r r i e r s = [ 0 . 0 0 1 e9 , 0 . 0 0 5 e9 , 0 . 0 2 e9 , 0 . 0 5 e9 , 0 . 0 8 e9 , 0 . 1 e9 0 . 5 e9 , ... % 1 e9 , 3 e9 , 6 e9 , 9 e9 , 12 e9 , 15 e9 , 16 e9 , 17 e9 ] ; % c a r r i e r s = [ 3 7 5 0 e9 , 2 5 0 0 e9 , 1000 e9 , 500 e9 , 200 e9 , 1 0 0 e9 , 60 e9 , 25 e9 , . . . % 15 e9 , 5 e9 , 1 e9 , 0 . 
5 e9 ] ; wavelength_car = c . / carriers ; updateRateVec = l o g s p a c e ( −2 , 2 , 4 0 ) ; % Choose one o f t h e f o l l o w i n g : % P h a s e n o i s e d a t a f o r VCO A P N _ f r e q = [ 1 0 , 1 0 0 , 1 e3 , 10 e3 , 100 e3 , 50 e6 ] ; PN_dB_Hz = [ − 6 0 , −90 , −120 , −145 , −155 , − 1 5 5 ] ; % P h a s e n o i s e d a t a from VCO B P N _ f r e q = [ 1 0 , 1 0 0 , 1 e3 , 10 e3 , 100 e3 , 50 e6 ] ; PN_dB_Hz = [ − 1 1 0 , −140 , −165 , −185 , −190 , − 1 9 0 ] ; % P h a s e n o i s e d a t a f f o r c a s e 3 : VCOs A and B P N _ f r e q = [ 1 0 , 4 5 , 1 e3 , 2 2 5 0 , 10 e3 , 100 e3 , 50 e6 ] ; PN_dB_Hz = [ − 1 1 0 , −130 , −130 , −130 , −145 , −155 , − 1 5 5 ] ; % Jitter calculation A1 = 1 0 ^ ( ( ( PN_dB_Hz ( 2 ) + PN_dB_Hz ( 1 ) ) / 2 + 10 * l o g 1 0 ( P N _ f r e q ( 2 ) . . . − PN_freq ( 1 ) ) ) / 1 0 ) ; A2 = 1 0 ^ ( ( ( PN_dB_Hz ( 3 ) + PN_dB_Hz ( 1 ) ) / 2 + 10 * l o g 1 0 ( P N _ f r e q ( 3 ) . . . 174 − PN_freq ( 2 ) ) ) / 1 0 ) ; A3 = 1 0 ^ ( ( ( PN_dB_Hz ( 4 ) + PN_dB_Hz ( 3 ) ) / 2 + 10 * l o g 1 0 ( P N _ f r e q ( 4 ) . . . − PN_freq ( 3 ) ) ) / 1 0 ) ; A4 = 1 0 ^ ( ( ( PN_dB_Hz ( 5 ) + PN_dB_Hz ( 4 ) ) / 2 + 10 * l o g 1 0 ( P N _ f r e q ( 5 ) . . . − PN_freq ( 4 ) ) ) / 1 0 ) ; A5 = 1 0 ^ ( ( ( PN_dB_Hz ( 6 ) + PN_dB_Hz ( 5 ) ) / 2 + 10 * l o g 1 0 ( P N _ f r e q ( 6 ) . . . − PN_freq ( 5 ) ) ) / 1 0 ) ; A = 10 * l o g 1 0 ( A1 + A2 + A3 + A4 + A5 ) ; R M S _ p h a s e _ j i t t e r _ r a d = s q r t ( 2 * 1 0 ^ (A / 1 0 ) ) ; R M S _ p h a s e _ j i t t e r _ s e c = s q r t ( 2 * 1 0 ^ (A / 1 0 ) ) / ( 2 * p i * f 0 ) ; % Number o f Nodes : nodesNumberVect = [ 2 , 1 0 , 2 0 , 1 0 0 ] ; % nodesNumberVect = [ 2 , 1 0 0 ] ; %% w i t h o u t t i m e e r r o r s for p = 1 : t r i a l s for i = 1 : length ( carriers ) f o r j = 1 : l e n g t h ( updateRateVec ) f o r k = 1 : l e n g t h ( nodesNumberVect ) nodesNumber = nodesNumberVect ( k ) ; updateRate = updateRateVec ( j ) ; carrier = carriers ( i ); wavelength_car = c . 
/ carrier ; updateTime = 1 . / updateRate ; % Sampling r a t e 175 Fs = c a r r i e r * 1 0 0 ; % p u l s e W i d t h = 1 / ( 1 0 0 e6 ) ; % f o r a 100 MHz b a n d w i d t h pulseWidth = 100/ c a r r i e r ; t = u p d a t e T i m e − p u l s e W i d t h : 1 / Fs : u p d a t e T i m e ; samples = length ( t ) ; % A l l a n D e v i a t i o n d a t a ( m o d i f i e d b a s e d on o s c i l l a t o r ) tau = 1; AllanDev = 1.00 e −10; % s % TDEV d a t a ( m o d i f i e d b a s e d on o s c i l l a t o r ) tau_t = 0.5; TDEV = 8 e − 1 1 ; freqRMS_tau = A l l a n D e v * f 0 ; f r e q O f f s e t S T D = freqRMS_tau * ( u p d a t e T i m e / t a u );%@ r e f r e s h r a t e f r e q O f f s e t = normrnd ( 0 , f r e q O f f s e t S T D , [ 1 , nodesNumber ] ) ; s i g n a l s = z e r o s ( nodesNumber , s a m p l e s ) ; f o r kk = 1 : nodesNumber % Random s e p a r a t i o n d i s t a n c e and p h a s e e r r o r s e p D i s t = 10 * p i * r a n d n ; p h a s e E r r o r = normrnd ( 0 , R M S _ p h a s e _ j i t t e r _ r a d , [ 1 , s a m p l e s ] ) ; % P h a s e o f f s e t due t o f r e q u e n c y d r i f t phi_0 = 2 * pi * ( f0 * updateTime + 0.5 * updateTime . . . * f r e q O f f s e t ( kk ) ) − 2 * p i * ( f 0 * u p d a t e T i m e . . . + u p d a t e T i m e * f r e q O f f s e t ( kk ) ) ; % F o r c o n t i n u o u s waves : 176 % s i g n a l s ( k , 1 : s a m p l e s ) = exp ( ( c a r r i e r / f 0 ) . . . % . * ( 1 j * 2* p i * f 0 * t + 1 j . * p h a s e E r r o r ) ) ; % F o r p u l s e d waves : s i g n a l s ( kk , 1 : s a m p l e s ) = exp ( ( c a r r i e r / f 0 ) . * ( 1 j * 2 * p i . . . * ( f 0 + f r e q O f f s e t ( kk ) ) . * t + 1 j . 
* ( p h i _ 0 + p h a s e E r r o r ) ) ) ; end % Signals sumation b e a m f o r m e d S i g n a l s = sum ( s i g n a l s , 1 ) ; i d e a l B e a m f o r m i n g = nodesNumber * s i g n a l s ( 1 , : ) ; % Coherent gain g a i n C o h e r e n c e ( i , j , k , p ) = ( rms ( b e a m f o r m e d S i g n a l s ) ^ 2 ) . . . / ( rms ( i d e a l B e a m f o r m i n g ) ^ 2 ) ; end end end end % A v e r a g e v a l u e f o r a l l t h e c o h e r e n t g a i n d a t a ( d e p e n d i n g on t i m e and % number o f n o d e s ) f i n a l G a i n C o h e r e n c e = mean ( g a i n C o h e r e n c e ( : , : , : , : ) , 4 ) ; %% With t i m i n g e r r o r s for p = 1 : t r i a l s for i = 1 : length ( carriers ) f o r j = 1 : l e n g t h ( updateRateVec ) 177 f o r k = 1 : l e n g t h ( nodesNumberVect ) nodesNumber = nodesNumberVect ( k ) ; updateRate = updateRateVec ( j ) ; carrier = carriers ( i ); wavelength_car = c . / carrier ; updateTime = 1 . / updateRate ; Fs = c a r r i e r * 1 0 0 ; p u l s e W i d t h = 1 / ( 1 0 0 e6 ) ; % f o r a 100 MHz b a n d w i d t h pulseWidth = 100/ c a r r i e r ; t = u p d a t e T i m e − p u l s e W i d t h : 1 / Fs : u p d a t e T i m e ; samples = length ( t ) ; % Allan Deviation data tau = 1; AllanDev = 1.00 e −10; % s % TDEV d a t a tau_t = 0.5; TDEV = 8 e − 1 1 ; freqRMS_tau = A l l a n D e v * f 0 ; f r e q O f f s e t S T D = freqRMS_tau * ( u p d a t e T i m e / t a u );%@ r e f r e s h r a t e f r e q O f f s e t = normrnd ( 0 , f r e q O f f s e t S T D , [ 1 , nodesNumber ] ) ; t i m e O f f s e t S T D _ u p d a t e T i m e = TDEV * ( u p d a t e T i m e / t a u ) ; t i m e O f f s e t = normrnd ( 0 , t i m e O f f s e t S T D _ u p d a t e T i m e , [ 1 , nodesNumber ] ) ; s i g n a l s _ T O = z e r o s ( nodesNumber , s a m p l e s ) ; 178 s i g n a l s _ w T i m e O f f s e t = z e r o s ( nodesNumber , s a m p l e s ) ; % Mudulation b = [1 0 1 0 1 0 1 0 1 0 ] ; bitTime = pulseWidth / 1 0 ; f o r kk = 1 : nodesNumber s 
e p D i s t = 10 * p i * r a n d n ; p h a s e E r r o r = normrnd ( 0 , R M S _ p h a s e _ j i t t e r _ r a d , [ 1 , s a m p l e s ] ) ; t i m e E r r o r = normrnd ( t i m e O f f s e t ( kk ) , R M S _ p h a s e _ j i t t e r _ s e c , . . . [1 , samples ] ) ; timeErrorIntegrated = []; t t t = 1; w h i l e t t t <= l e n g t h ( t ) i f t t t <50 timeErrorIntegrated ( t t t ) = 0; e l s e i f t t t > l e n g t h ( t ) −100 timeErrorIntegrated ( t t t ) = 1; else i f a b s ( rem ( ( f l o o r ( ( t ( t t t ) + t i m e E r r o r ( t t t ) − t ( 1 ) ) . . . . / b i t T i m e ) ) , 2 ) ) ~= a b s ( rem ( ( f l o o r ( ( t ( t t t − 1 ) + . . . t i m e E r r o r ( t t t −1) − t ( 1 ) ) . / b i t T i m e ) ) , 2 ) ) t i m e E r r o r I n t e g r a t e d ( t t t ) = a b s ( rem ( ( f l o o r ( ( t ( t t t ) . . . + timeError ( t t t )− t ( 1 ) ) . / bitTime ) ) , 2 ) ) ; t i m e E r r o r I n t e g r a t e d ( t t t +1: t t t +50) = ones ( 1 , 5 0 ) . * . . . a b s ( rem ( ( f l o o r ( ( t ( t t t ) + t i m e E r r o r ( t t t ) − t ( 1 ) ) . / b i t T i m e ) ) , 2 ) ) ; t t t = t t t +50; else t i m e E r r o r I n t e g r a t e d ( t t t ) = a b s ( rem ( . . . ( f l o o r ( ( t ( t t t ) + timeError ( t t t )− t ( 1 ) ) . / bitTime ) ) , 2 ) ) ; 179 end end t t t = t t t +1; end phi_0 = 2 * pi * ( f0 * updateTime + 0.5 * updateTime * . . . f r e q O f f s e t ( kk ) ) − 2 * p i * ( f 0 * u p d a t e T i m e + u p d a t e T i m e * f r e q O f f s e t ( kk ) ) ; s i g n a l s _ T O ( kk , 1 : s a m p l e s ) = ( 2 * t i m e E r r o r I n t e g r a t e d − 1 ) . * . . . c o s ( ( c a r r i e r / f 0 ) . * ( 1 * 2 * p i * ( f 0 + f r e q O f f s e t ( kk ) ) . . . . 
* ( t ) + 1 .* phaseError + phi_0 ) ) ; end b e a m f o r m e d S i g n a l s _ T O = sum ( s i g n a l s _ T O , 1 ) ; i d e a l B e a m f o r m i n g _ T O = nodesNumber * s i g n a l s _ T O ( 1 , : ) ; g a i n C o h e r e n c e _ T O ( i , j , k , p ) = ( rms ( b e a m f o r m e d S i g n a l s _ T O ) ^ 2 ) . . . / ( rms ( i d e a l B e a m f o r m i n g _ T O ) ^ 2 ) ; end end end end f i n a l G a i n C o h e r e n c e _ T O = mean ( g a i n C o h e r e n c e _ T O ( : , : , : , : ) , 4 ) ; 180 APPENDIX C MATLAB PEAK ESTIMATORS FOR RANGING clear F1 = [ ] ; F2 = [ ] ; actDistVec = [ ] ; errors = []; delay_ex_r = [ ] ; delay_ex_t = [ ] ; errors_sinc = []; delay_ex_r_sinc = [ ] ; InterSampPts = 20; FreqPts = 100; Fs = 10 e6 ; % s a m p l i n g r a t e c = 3 e8 ; % s p e e d o f l i g h t % Tb = c e i l ( 0 . 9 * 6 0 0 0 ) ; % t o c o n t r o l t h e p u l s e w i d d t h Tb = c e i l ( 0 . 2 5 * 3 9 9 2 ) ; % t o c o n t r o l t h e p u l s e w i d d t h PW = ( Tb − 1 ) / Fs ; % p u l s e w i d t h t = 0 : 1 / Fs : 2 * ( Tb − 1 ) / Fs ; % t i m e v e c t o r ( t w i c e t h e PW) f 2 = 20 e3 ; % s e c o n d t o n e f 1 _ 0 = l i n s p a c e ( f 2 * 3 , Fs / 2 − f 2 * 3 , F r e q P t s ) ; % l i s t o f f i r s t t o n e for j j = 1 : FreqPts %% c o n t r o l l i n g t h e f r e q e n c y ( f o r two − t o n e o r t h e b a n d w i d t h o f LFM) f 1 = Fs / ( 4 ) ; % t h i s makes t h e b a n d w i d t h c o n s t a n t 181 f 1 = f 1 _ 0 ( j j ) ; % t h i s makes t h e b a n d w i d t h v a r i a b l e for k = 1 : InterSampPts % k % j u s t f o r debugging purposes %% p h a s e c o n s t a n t f o r TX and RX s i g n a l s p h i _ i n i t = 2* p i * r a n d ; p h i _ i n i t = 0 ; % s e t t i n g t h e TX p h a s e t o z e r o p h i _ i n i t _ o f f = rand *360; phi = p h i _ i n i t + p h i _ i n i t _ o f f *( pi / 1 8 0 ) ; % p h i = p h i _ i n i t ; % s e t t i n g t h e RX p h a s e e q u a l t o t h e TX p h a s e %% c o n t r o l l i n g t h e RX d e l 
a y e i t h e r by s l e l e c t i n g t h e d i s t a n c e o r t i m e % %%%% f o r d i s t a n c e : % actDist = 1; % tau = 2 * actDist / c ; % %%%% f o r t i m e : % t a u = ( 0 / 1 0 0 ) / Fs ; % t h i s makes t h e d e l a y f i x e d t a u = ( ( k − 1 ) / ( I n t e r S a m p P t s − 1 ) ) / Fs ; % t h i s makes t h e d e l a y c h a n g e . . . %e v e r y l o o p ( where e v e r y 100 i t t e r a t i o n we r e a c h t h e s e c o n d d i s c r . ) a c t D i s t = ( t a u * c ) / 2 ; % t o c a l c u l a t e t h e d i s t a n c e from t i m e %% c o n t r o l l i n g t h e n o i s e i n t h e RX s i g n a l s noiseLeveldBW = 0 ; % c u r r e n t l y I am n o t u s i n g t h e noise level n o i s e = wgn ( 1 , l e n g t h ( t ) , noiseLeveldBW ) ; % n o i s e = z e r o s ( 1 , l e n g t h ( t ) ) ; % s e t t i n g n o i s e t o z e r o f o r now %% TX and RX waveforms : uncomment one o f t h e t h r e e o p t i o n s % %%%% S i n e p u l s e : % TxTTsig = ( h e a v i s i d e ( t − 0 ) − h e a v i s i d e ( t − PW − 0 ) ) . * ... % ( exp ( 1 j * 2 * p i * f 1 * ( t − 0 ) + p h i _ i n i t ) ) ; % RxTTsig = ( h e a v i s i d e ( t − t a u ) − h e a v i s i d e ( t − PW − t a u ) ) . * ... % ( exp ( 1 j * 2 * p i * f 1 * ( t − t a u ) + p h i ) ) + n o i s e ; 182 % %%%% Two− t o n e ( TT ) p u l s e : TxTTsig = ( h e a v i s i d e ( t − 0 ) − h e a v i s i d e ( t − PW − 0 ) ) . * ( exp ( 1 j ... * 2 * p i * f 1 * ( t − 0 ) + p h i _ i n i t ) + exp ( 1 j * 2 * p i * f 2 * ( t − 0 ) + p h i _ i n i t ) ) ; RxTTsig = ( h e a v i s i d e ( t − t a u ) − h e a v i s i d e ( t − PW − t a u ) ) . * ( exp ( 1 j . . . * 2 * p i * f 1 * ( t − t a u ) + p h i ) + exp ( 1 j * 2 * p i * f 2 * ( t − t a u ) + p h i ) ) + n o i s e ; % %%%% LFM p u l s e : % TxTTsig = ( h e a v i s i d e ( t − 0 ) − h e a v i s i d e ( t − PW − 0 ) ) . * ( exp ( 1 j * . . . % 2 * p i * ( ( ( ( f 1 − f 2 ) /PW) / 2 ) * ( t − 0 ) . 
^ 2 + f 2 * ( t − 0 ) ) + p h i _ i n i t ) ) ; % RxTTsig = ( h e a v i s i d e ( t − t a u ) − h e a v i s i d e ( t − PW − t a u ) ) . * ( exp ( 1 j . . . % * 2 * p i * ( ( ( ( f 1 − f 2 ) /PW) / 2 ) * ( t − t a u ) . ^ 2 + f 2 * ( t − t a u ) ) + p h i ) ) + n o i s e ; % %%%% LFM TT p u l s e : % TxTTsig = ( h e a v i s i d e ( t − 0 ) − h e a v i s i d e ( t − PW − 0 ) ) . * ( exp ( 1 j * . . . % 2 * p i * ( ( ( ( ( f 1 − f 2 ) / 2 ) / PW) / 2 ) * ( t − 0 ) . ^ 2 + f 2 * ( t − 0 ) ) + p h i _ i n i t ) . . . % + exp ( 1 j * 2 * p i * ( ( ( ( ( f 1 − f 2 ) / 2 ) / PW) / 2 ) * ( t − 0 ) . ^ 2 + ( f 2 . . . % + ( f1 − f2 ) / 2 ) * ( t − 0) ) + p h i _ i n i t ) ) ; % RxTTsig = ( h e a v i s i d e ( t − t a u ) − h e a v i s i d e ( t − PW − t a u ) ) . * ( exp ( 1 j . . . % * 2 * p i * ( ( ( ( f 1 − f 2 ) /PW) / 2 ) * ( t − t a u ) . ^ 2 + f 2 * ( t − t a u ) ) + p h i ) + . . . % exp ( 1 j * 2 * p i * ( ( ( ( ( f 1 − f 2 ) / 2 ) / PW) / 2 ) * ( t − t a u ) . ^ 2 + ( f 2 + ( f 1 . . . % − f2 ) / 2 ) * ( t − tau ) ) + phi ) ) + noise ; %% Matched f i l t e r i n g and i n t e r p o l a t i o n C t t = a b s ( x c o r r ( ( RxTTsig ) , ( TxTTsig ) ) ) ; nRes_d = 0 . 
0 0 1 ; nCd = i n t e r p 1 ( 1 : numel ( C t t ) , C t t , 1 : nRes_d : numel ( C t t ) , ’ s p l i n e ’ ) ; % %%% h e r e I am c h e a t i n g by s e l e c t i n g t h e main l o b e ( t h i s i s t h e c a s e % when we h a v e a d i s a m b i g u a t i o n p u l s e ) y = [ − r o u n d ( 1 / nRes_d ) : r o u n d ( 1 / nRes_d ) ] ’ ; [ ~ , v ] = max ( nCd ( c e i l ( numel ( nCd ) / 2 ) + r o u n d ( t a u * Fs / nRes_d ) + y ) ) ; m a t c h _ n l _ d = y ( v ) + c e i l ( numel ( nCd ) / 2 ) + r o u n d ( t a u * Fs / nRes_d ) ; 183 % [ ~ , m a t c h _ n l _ d ] = max ( a b s ( nCd ) ) ; % p e a k l o c a t i o n from i n t e r p o l a t i o n : a c t u a l _ m a t c h _ t t = ( m a t c h _ n l _ d − c e i l ( numel ( nCd ) / 2 ) ) * nRes_d ; %% NL−LS L0 = [ ] ; L1 = [ ] ; L2 = [ ] ; % p e a k = r o u n d ( 1 + m a t c h _ n l _ d * nRes_d ) ; % e x t a c t i n g p e a k from % i n t e r p o l a t i o n r e s u l t s ( b e t t e r u s e t h e n e x t command , a s i f we had a % disambiguation pulse x = [ −1: 1 ] ’ ; [ ~ , v ] = max ( C t t ( c e i l ( numel ( C t t ) / 2 ) + r o u n d ( t a u * Fs ) + x ) ) ; p e a k = x ( v ) + c e i l ( numel ( C t t ) / 2 ) + r o u n d ( t a u * Fs ) ; L0 ( 1 ) = C t t ( p e a k ) ; L1 ( 1 ) = 0 . 1 ; L2 ( 1 ) = 2 * a b s ( f2 − f 1 ) / Fs ; y = abs ( C t t ( peak + x ) ) ; for m = 1:40 a r g = L2 (m) . * ( x − L1 (m ) ) ; func1 = s i n c ( arg ) ; f u n c 2 = L0 (m) . * ( s i n c ( a r g ) − c o s ( p i . * a r g ) ) . / ( x − L1 (m ) ) ; f u n c 3 = L0 (m) . * ( c o s ( p i . * a r g ) − s i n c ( a r g ) ) . / L2 (m ) ; J = [ func1 , func2 , func3 ] ; f u n c = L0 (m) . * s i n c ( a r g ) ; Delta_y = y ’ − func ; 184 Delta_L = inv ( J ’ * J ) * J ’ * Delta_y ; L0 (m + 1 ) = L0 (m) + D e l t a _ L ( 1 ) ; L1 (m + 1 ) = min ( 0 . 6 , max ( − 0 . 
6 , L1 (m) + D e l t a _ L ( 2 ) ) ) ; L2 (m + 1 ) = L2 (m) + D e l t a _ L ( 3 ) ; % L2 (m + 1 ) = L2 (m ) ; end new_peak = p e a k + L1 (m + 1 ) ; a c t u a l _ m a t c h _ t t _ s i n c = new_peak − c e i l ( numel ( C t t ) / 2 ) ; %% MF−LS d t = r o u n d ( t a u * Fs ) / Fs ; i _ i t t = 5 0 ; % how many MF t o t r y for i = 1: i _ i t t w2 ( i ) = d t − 1 / ( 2 * a b s ( f2 − f 1 ) ) + ( i / i _ i t t ) / ( a b s ( f2 − f 1 ) ) ; t 2 =0 − w2 ( i ) : 1 / Fs : 2 * ( Tb − 1 ) / Fs − w2 ( i ) ; % t i m e v e c t o r ( 2 *PW) RxTTsig2 = ( h e a v i s i d e ( t 2 ) − h e a v i s i d e ( t 2 − PW) ) . * ... ( exp ( 1 j * 2 * p i * f 1 * ( t 2 ) + p h i ) + exp ( 1 j * 2 * p i * f 2 * ( t 2 ) + p h i ) ) ; C t t 2 = a b s ( x c o r r ( ( RxTTsig2 ) , ( TxTTsig ) ) ) ; % [ new_peak_val ( i ) , new_peak_loc ( i ) ] = max ( ( C t t 2 . * C t t ) ) ; [ new_peak_val ( i ) , new_peak_loc ( i ) ] = min ( sum ( ( C t t 2 . / max ( C t t 2 ) . . . − C t t . / max ( C t t ) ) . ^ 2 ) ) ; % f i g u r e ; p l o t ( r e a l ( RxTTsig ) ) ; h o l d on ; p l o t ( r e a l ( RxTTsig2 ) ) ; g r i d on 185 % f i g u r e ; p l o t ( C t t / max ( C t t ) ) ; h o l d on ; p l o t ( C t t 2 / max ( C t t 2 ) ) ; g r i d on end % Methods f o r r e f i n i n g t h e p e a k e s t i m a t i o n f o r MF−LS : % Method 1 : w i t h o u t any f i t t i n g % [ ~ , r r r ] = min ( n e w _ p e a k _ v a l ) ; % d e l a y _ e x _ t _ F ( j j , k ) = w2 ( r r r ) ; % Method 2 : u s i n g i n t e r p o l a t i o n nRes_d2 = 0 . 0 0 1 ; n e w _ p e a k _ v a l 2 = i n t e r p 1 ( 1 : numel ( n e w _ p e a k _ v a l ) , n e w _ p e a k _ v a l , 1 : nRes_d2 . . . : numel ( n e w _ p e a k _ v a l ) , ’ s p l i n e ’ ) ; [ ~ , v2 ] = min ( n e w _ p e a k _ v a l 2 ) ; m a t c h _ n l _ d 2 = ( v2 + 1 / nRes_d2 − 1 ) * nRes_d2 ; d e l a y _ e x _ t _ F ( j j , k ) = d t − 1 / ( 2 * a b s ( f2 − f 1 ) ) + ( m a t c h _ n l _ d 2 / i _ i t t ) . . . 
/ ( a b s ( f2 − f 1 ) ) ; % p e a k l o c a t i o n from i n t e r p o l a t i o n % Method 3 : u s i n g s i n c f i t t i n g L20 = [ ] ; L21 = [ ] ; L22 = [ ] ; % p e a k = r o u n d ( 1 + m a t c h _ n l _ d * nRes_d ) ; % e x t a c t i n g p e a k from % i n t e r p o l a t i o n r e s u l t s ( b e t t e r u s e t h e n e x t command , a s i f we had a % disambiguation pulse ) x2 = [ − 1 : 1 ] ’ ; n e w _ p e a k _ v a l 2 = − n e w _ p e a k _ v a l + 2 * max ( n e w _ p e a k _ v a l ) ; [ p e a k _ v a l , p e a k _ l o c ] = max ( n e w _ p e a k _ v a l 2 ) ; L20 ( 1 ) = p e a k _ v a l ; L21 ( 1 ) = 0 . 1 ; L22 ( 1 ) = 2 * a b s ( f2 − f 1 ) / Fs ; y = ( n e w _ p e a k _ v a l 2 ( p e a k _ l o c + x2 ) ) ; 186 for m = 1:40 a r g = L22 (m) . * ( x2 − L21 (m ) ) ; func1 = s i n c ( arg ) ; f u n c 2 = L20 (m) . * ( s i n c ( a r g ) − c o s ( p i . * a r g ) ) . / ( x2 − L21 (m ) ) ; f u n c 3 = L20 (m) . * ( c o s ( p i . * a r g ) − s i n c ( a r g ) ) . / L22 (m ) ; J = [ func1 , func2 , func3 ] ; f u n c = L20 (m) . * s i n c ( a r g ) ; Delta_y = y ’ − func ; Delta_L = inv ( J ’ * J ) * J ’ * Delta_y ; L20 (m + 1 ) = L20 (m) + D e l t a _ L ( 1 ) ; L21 (m + 1 ) = min ( 1 , max ( − 1 , L21 (m) + D e l t a _ L ( 2 ) ) ) ; L22 (m + 1 ) = L22 (m) + D e l t a _ L ( 3 ) ; % L2 (m + 1 ) = L2 (m ) ; end new_peak2 = p e a k _ l o c + L21 (m + 1 ) ; d e l a y _ e x _ t _ F 2 ( j j , k ) = d t − 1 / ( 2 * a b s ( f2 − f 1 ) ) + ( new_peak2 / i _ i t t ) . . . 
/ ( a b s ( f2 − f 1 ) ) ; %% L o g g i n g t h e r e s u l t s d e l a y _ e x _ t ( j j , k ) = a c t u a l _ m a t c h _ t t / Fs ; delay_ex_r ( jj , k ) = delay_ex_t ( jj , k ) * c / 2; d e l a y _ e x _ r _ s i n c ( j j , k ) = a c t u a l _ m a t c h _ t t _ s i n c / Fs * c / 2 ; delay_ex_r_F ( jj , k ) = delay_ex_t_F ( jj , k ) * c / 2; delay_ex_r_F2 ( jj , k ) = delay_ex_t_F2 ( jj , k ) * c / 2; 187 F1 ( j j , k ) = f 1 ; F2 ( j j , k ) = f 2 ; actDistVec ( jj , k) = actDist ; errors ( jj , k) = delay_ex_r ( jj , k ) − actDist ; errors_sinc ( jj , k) = delay_ex_r_sinc ( jj , k) − actDist ; errors_F ( jj , k ) = delay_ex_r_F ( jj , k ) − a c t D i s t ; errors_F2 ( jj , k ) = delay_ex_r_F2 ( jj , k ) − a c t D i s t ; end end %% F i n a l r e s u l t s % %%% Peak E s t i m a t i o n U s i n g I n t e r p o l a t i o n figure (1) mesh ( F1−F2 , a c t D i s t V e c . * 2 . / c , e r r o r s ) x l a b e l ( ’ B a n d w i d t h ( Hz ) ’ ) y l a b e l ( ’ Time R e c e i v e d ( s ) ’ ) z l a b e l ( ’ O f f s e t (m) ’ ) t i t l e ( ’ TT − Peak E s t i m a t i o n U s i n g I n t e r p o l a t i o n ’ ) % %%% Peak E s t i m a t i o n U s i n g S i n c Method figure (2) mesh ( F1−F2 , a c t D i s t V e c . * 2 . / c , e r r o r s _ s i n c ) x l a b e l ( ’ B a n d w i d t h ( Hz ) ’ ) y l a b e l ( ’ Time R e c e i v e d ( s ) ’ ) z l a b e l ( ’ O f f s e t (m) ’ ) t i t l e ( ’ TT − Peak E s t i m a t i o n U s i n g S i n c Method ’ ) % %%% Peak E s t i m a t i o n U s i n g MF f i t t i n g Method figure (3) mesh ( F1−F2 , a c t D i s t V e c . * 2 . / c , e r r o r s _ F ) zlim ([ −4 ,4]) 188 x l a b e l ( ’ B a n d w i d t h ( Hz ) ’ ) y l a b e l ( ’ Time R e c e i v e d ( s ) ’ ) z l a b e l ( ’ O f f s e t (m) ’ ) t i t l e ( ’ TT − Peak E s t i m a t i o n U s i n g MF f i t t i n g Method ’ ) % %%% Peak E s t i m a t i o n U s i n g MF f i t t i n g Method 2 figure (4) mesh ( F1−F2 , a c t D i s t V e c . * 2 . 
./ c, errors_F2)
zlim([-4, 4])
xlabel('Bandwidth (Hz)')
ylabel('Time Received (s)')
zlabel('Offset (m)')
title('TT - Peak Estimation Using MF fitting Method 2')

APPENDIX D

MATLAB SNR ESTIMATION USING EITHER EIGENVALUE DECOMPOSITION OR MATCHED FILTERING

% Initialization
A1 = 0.5;
A2 = 0.5;
c = 299792458; % m/s
f1 = 10e3;
f2 = 1010e3;
fs = 10e6;
T = 0.0012;
D = 0.0038;
endOfSub = D + T;
noiseStep = 0.01; % need snr instead of post-int SNR
RdesdB = 0:70 * noiseStep:70;
Sdes = 10.^(RdesdB ./ 20) / 2;
phi1 = rand(1);
phi2 = rand(1);
rate = 1 / fs;
jj = 0;
ax = 0.1:0.3:20;
for ii = ax
    jj = jj + 1;
    %% TX RX signal generation
    delay = 0.5 * D + rand(1) * D * 0.05;
    % Signal Generation
    t_0 = 0:rate:endOfSub;
    t = t_0 - delay;
    % Signals with only real part
    x1_1_0 = A1 * sin(2 * pi * f1 * t_0 - phi1);
    x2_1_0 = A2 * sin(2 * pi * f2 * t_0 - phi2);
    x1_1 = A1 * sin(2 * pi * f1 * t - phi1);
    x2_1 = A2 * sin(2 * pi * f2 * t - phi2);
    % % Signals with real and imag parts
    % x1_0 = A1 * exp(-1i * 2 * pi * f1 * t_0 - phi1);
    % x2_0 = A2 * exp(-1i * 2 * pi * f2 * t_0 - phi2);
    % x1 = A1 * exp(-1i * 2 * pi * f1 * t - phi1);
    % x2 = A2 * exp(-1i * 2 * pi * f2 * t - phi2);
    x1 = x1_1;
    x2 = x2_1;
    x1_0 = x1_1_0;
    x2_0 = x2_1_0;
    x1_0(floor(numel(x1_0) * T / endOfSub):end) = 0;
    x2_0(floor( ...
numel(x1_0) * T / endOfSub):end) = 0;
    [~, zeroDueToDelay] = min(abs(t));
    x1(1:zeroDueToDelay) = 0;
    x2(1:zeroDueToDelay) = 0;
    x1(zeroDueToDelay + floor(numel(x1_0) * T / endOfSub):end) = 0;
    x2(zeroDueToDelay + floor(numel(x1_0) * T / endOfSub):end) = 0;
    noise = ii * (10.^(wgn(1, size(x1, 2), 0) / 10) - 10.^ ...
        (wgn(1, size(x1, 2), 0) / 10));
    x = x1 + x2 + noise;
    x_0 = x1_0 + x2_0;
    % Actual SNR
    snr_m(jj) = snr((x1 + x2), noise);

    %% Eigenvalue Decomposition
    % X: matrix of received-signal snapshots (one snapshot per row),
    % formed from x outside this excerpt
    N = size(X, 1);
    L = size(X, 2);
    Rx = (1 / N) * (X' * X);
    v = (eig(Rx));
    lambda = sort(v, 'descend');
    for M = 1:L
        Tm = prod(lambda(M + 1:L).^(1 / (L - M)));
        Pm = (1 / (L - M)) * sum(lambda(M + 1:L));
        formula(M) = (-(L - M) * N * log(Tm / Pm) + 0.5 * M * (2 * L - M) * log(N));
    end
    [~, locM] = min(formula);
    % SNR with only one eigenvalue representing the signal power
    K1 = locM;
    Sn1 = mean(lambda(K1 + 1:end));
    Ps1 = (1 / L) * (sum(lambda(1:K1)) - K1 * Sn1^2);
    snr_EVD1(jj) = 10 * log(Ps1 / (Sn1^2));
    % SNR with multiple eigenvalues representing the signal power
    K2 = find(lambda > (max(lambda) - min(lambda)) * 0.8 + min(lambda), 1, 'last');
    Sn2 = mean(lambda(K2 + 1:end));
    Ps2 = (1 / L) * (sum(lambda(1:K2)) - K2 * Sn2^2);
    snr_EVD2(jj) = 10 * log(Ps2 / (Sn2^2));

    %% SNR estimation using the Matched Filter Output
    % Matched filter output
    C = abs(xcorr(x, x_0));
    [match_m, match_l] = max(C);
    corrTop = max(abs(xcorr(x_0, x_0)));
    rmsTop = rms(x1_0(1:floor(numel(x1_0) * T / endOfSub) - 1) + ...
        x2_0(1:floor(numel(x1_0) * T / endOfSub) - 1)) * (match_m / corrTop);
    delay_samples = abs(ceil(numel(C) / 2) - match_l);
    pulseLength = floor(numel(x1_0) * T / endOfSub);
    error_margin = pulseLength / 20;
    rms_of_noise = rms([x(1:max(2, delay_samples - error_margin)), ...
        x(min(delay_samples + 2 + pulseLength + error_margin, ...
        length(x) - 1):end)]);
    snr_MF = 10 * log10((rmsTop / rms_of_noise)^2);
end

APPENDIX E

MATLAB TIME OF ARRIVAL (TOA) ALGORITHM

clear
% From: Exact and Approximate Maximum Likelihood Localization Algorithms
% Yiu-Tong Chan, Herman Yau Chin Hang, and Pak-chung Ching
RXpos = [0.5588, 0, 0; 0, 0, 0; 0.5588, 0.254, 0; 0, 0.254, 0];
Dist = [0.3069; 0.3069; 0.3069; 0.3069];
DistVAR = [0.02; 0.02; 0.02; 0.02].^2;
c = 3e8;
Q = diag(DistVAR); % covariance matrix
delta_i = Dist;
x_i = RXpos(:, 1);
y_i = RXpos(:, 2);
k_i = x_i.^2 + y_i.^2;
x_new = 0.2794;
y_new = 0.1270;
x_new = 0;
y_new = 0;
for i = 1:10
    x = x_new;
    y = y_new;
    r_i = sqrt((x - x_i).^2 + (y - y_i).^2);
    if i == 1
        g_i = x_i;
        h_i = y_i;
    else
        g_i = (x - x_i) ...
./ (r_i .* (r_i + delta_i));
        h_i = (y - y_i) ./ (r_i .* (r_i + delta_i));
    end
    % s = x^2 + y^2;
    LHS = 2 .* [sum(g_i .* x_i), sum(g_i .* y_i); sum(h_i .* x_i), ...
        sum(h_i .* y_i)];
    % RHS = [sum(g_i .* (s + k_i - delta_i.^2)); sum(h_i .* ...
    %     (s + k_i - delta_i.^2))];
    % LSsolution = inv(LHS' * LHS) * LHS' * RHS; % \ means inv(LHS) * RHS
    % x_new = LSsolution(1); y_new = LSsolution(2);
    RHS_mod = [sum(g_i .* (0 + k_i - delta_i.^2)); sum(h_i .* (0 + k_i ...
        - delta_i.^2))];
    LSsolution_mod = inv(LHS' * LHS) * LHS' * RHS_mod;
    RHS_mod_s = [sum(g_i); sum(h_i)];
    s_something = inv(LHS' * LHS) * LHS' * RHS_mod_s;
    syms ss xx yy
    eqns = [xx == LSsolution_mod(1) + ss * s_something(1), yy == ...
        LSsolution_mod(2) + ss * s_something(2), ss == xx^2 + yy^2];
    S = solve(eqns, [ss xx yy]);
    % x_new = double(S.xx);
    % y_new = double(S.yy);
    s_new = double(S.ss);
    if isreal(s_new)
        if min(s_new > 0) && length(s_new) == 2
            for j = 1:2
                s = s_new(j);
                RHS = [sum(g_i .* (s + k_i - delta_i.^2)); sum(h_i .* ...
                    (s + k_i - delta_i.^2))];
                LSsolution = inv(LHS' * LHS) * LHS' * RHS;
                x_new = LSsolution(1);
                y_new = LSsolution(2);
                r_j = sqrt((x_new - x_i).^2 + (y_new - y_i).^2);
                JJ(j) = [Dist ./ c - r_j ./ c]' * inv(Q) * [Dist ./ c - r_j ...
./ c];
            end
            [u, v] = min(JJ);
            s = s_new(v);
        else
            s = max(s_new);
            if s < 0
                s = max(abs(s_new));
            end
        end
    else
        if length(s_new) == 2
            if isreal(s_new(1)) && s_new(1) > 0 && ~isreal(s_new(2))
                s = s_new(1)
            elseif isreal(s_new(2)) && s_new(2) > 0 && ~isreal(s_new(1))
                s = s_new(2)
            else
                s = max(real(s_new));
            end
        else
            s = max(real(s_new));
        end
        if s < 0
            s = max(abs(real(s_new)));
        end
    end
    RHS = [sum(g_i .* (s + k_i - delta_i.^2)); sum(h_i .* (s + k_i - ...
        delta_i.^2))];
    LSsolution = inv(LHS' * LHS) * LHS' * RHS;
    x_new = LSsolution(1);
    y_new = LSsolution(2);
    r_j = sqrt((x_new - x_i).^2 + (y_new - y_i).^2);
    J(i) = [Dist ./ c - r_j ./ c]' * inv(Q) * [Dist ./ c - r_j ./ c];
    xxx(i) = x_new;
    yyy(i) = y_new;
end
[xxx; yyy; J]
[uu, vv] = min(J);
x_selected = xxx(vv)
y_selected = yyy(vv)

APPENDIX F

MATLAB CRAMER-RAO LOWER BOUND FOR TIME OF ARRIVAL (TOA)

% close all;
clc
clear
df = 4e6; % tone separation = ... MHz
f_0 = 0e6; % center frequency = ... MHz
Fs = 2 * df;
Fs = 20e6;
f_1 = f_0 - df / 2;
f_2 = f_0 + df / 2;
B = Fs / 2; % noise bandwidth assumed equal to df
T = 1e-4; % pulse width = 0.1 ms
SNR_1 = 10^(42 / 10);
SNR_2 = 10^(40.2 / 10);
SNR_3 = 10^(41.8 / 10);
SNR_4 = 10^(41 / 10);
% Time delay estimation CRLB for two-tone waveform with the above settings
sigma_t1 = sqrt(1 / (((pi * df)^2 + (2 * pi * f_0)^2) * (2 * B * T * SNR_1)));
sigma_t2 = sqrt(1 / (((pi * df)^2 + (2 * pi * f_0)^2) * (2 * B * T * SNR_2)));
sigma_t3 = sqrt(1 / (((pi * df)^2 + (2 * pi * f_0)^2) * (2 * B * T * SNR_3)));
sigma_t4 = sqrt(1 / (((pi * df)^2 + (2 * pi * f_0)^2) * (2 * B * T * SNR_4)));
c = 3e8;
% * c / 2 to make it in m and include two way factor:
sigma_i = [sigma_t1 * c / 2, sigma_t2 * c / 2, sigma_t3 * c / 2, sigma_t4 * c / 2];
sigma_i = [0.00608, 0.0061, 0.00458];
% Multiple case scenarios:
% 4 receivers and 1 transmitter locations
p_i = [+0.3, +0.3; ...
    +0.3, -0.3; ...
    -0.3, +0.3; ...
    -0.3, -0.3]';
% 3 receivers and 1 transmitter locations
p_i = [0.1727, 0.1918; ...
    -0.2038, -0.0572; ...
    -0.3683, 0.1918]';
xxLim = [-5 5];
yyLim = xxLim;
% xyRes = .1;
xyRes = (xxLim(2) - xxLim(1)) / 200;
ii = 0;
xx = 0;
yy = 0;
for y = xxLim(1):xyRes:xxLim(2)
    yy = yy + 1;
    xx = 0;
    for x = xxLim(1):xyRes:xxLim(2)
        xx = xx + 1;
        ii = ii + 1;
        p = [x; y];
        locations(1, 1:2, ii) = p;
        F_11 = FIM_parts(p_i, p, sigma_i, 1, 1);
        F_22 = FIM_parts(p_i, p, sigma_i, 2, 2);
        F_12 = FIM_parts(p_i, p, sigma_i, 1, 2);
        F_21 = F_12;
        FIM(:, :, ii) = [F_11, F_12; ...
            F_21, F_22];
        invFIM(:, :, ii) = inv(FIM(:, :, ii));
        Xvar(yy, xx) = (invFIM(1, 1, ii));
        Yvar(yy, xx) = (invFIM(2, 2, ii));
        Ovar(yy, xx) = (-y ./ (x.^2 + y.^2)).^2 ...
            .* Xvar(yy, xx) + ...
            (x ./ (x.^2 + y.^2)).^2 .* Yvar(yy, xx) + ...
            2 .* (-y ./ (x.^2 + y.^2)) .* (x ./ (x.^2 + y.^2)) .* (invFIM(1, 2, ii));
        Rvar(yy, xx) = (x ./ sqrt(x.^2 + y.^2)).^2 .* Xvar(yy, xx) + ...
            (y ./ sqrt(x.^2 + y.^2)).^2 .* Yvar(yy, xx) + 2 .* (x ./ ...
            sqrt(x.^2 + y.^2)) .* (y ./ sqrt(x.^2 + y.^2)) .* (invFIM(1, 2, ii));
        % O = atan(y / x);
        O = atan2d(y, x);
        R = sqrt(x^2 + y^2);
        Xvar2(yy, xx) = Rvar(yy, xx) * cos(O)^2 + Ovar(yy, xx) * R^2 * sin(O)^2;
        Yvar2(yy, xx) = Rvar(yy, xx) * sin(O)^2 + Ovar(yy, xx) * R^2 * cos(O)^2;
        XYcov2(yy, xx) = (Rvar(yy, xx) - Ovar(yy, xx) * R^2) * sin(O) * cos(O);
    end
end
% in case we have singular points:
% Xvar(isinf(Xvar) | isnan(Xvar)) = 10;
% Yvar(isinf(Yvar) | isnan(Yvar)) = 10;
% Xvar2(isinf(Xvar2) | isnan(Xvar2)) = 10;
% Yvar2(isinf(Yvar2) | isnan(Yvar2)) = 10;
% Ovar(isinf(Ovar) | isnan(Ovar)) = 10;

%% Plotting theta and R
[X, Y] = meshgrid(xxLim(1):xyRes:xxLim(2));
h = figure; A = axes;
mesh(X, Y, sqrt((Ovar)) .* 180 ./ pi)
set(A, 'ZScale', 'log')
xlabel('x (m)')
ylabel('y (m)')
title('Standard deviation for \theta (degree)');
h = figure; A = axes;
mesh(X, Y, sqrt(abs(Rvar)))
set(A, 'ZScale', 'log')
xlabel('x (m)')
ylabel('y (m)')
title('Standard deviation for distance (m)');

%% Figures
figure
clims = [0.00 0.1];
imagesc(xxLim, yyLim, sqrt(abs(Xvar)), clims)
% imagesc(xxLim, yyLim, sqrt(abs(Xvar)))
xlabel('x (m)')
ylabel('y (m)')
h = colorbar;
set(get(h, 'label'), 'string', 'Standard deviation of x (m)');
figure
% clims = [0.00001 0.01];
imagesc(xxLim, yyLim, sqrt(abs(Yvar)), clims)
xlabel('x (m)')
ylabel('y (m)')
h = colorbar;
set(get(h, 'label'), 'string', 'Standard deviation of y (m)');
figure
clims = [0 0.15];
imagesc(xxLim, yyLim, sqrt(abs(Xvar2) + abs(Yvar2)), clims)
xlabel('x (m)')
ylabel('y (m)')
h = colorbar;
set(get(h, 'label'), 'string', '\surd(CRLB_x + CRLB_y) (m)');
% [X, Y] = meshgrid(xxLim(1):xyRes:xxLim(2));
h = figure; A = axes;
mesh(X, Y, sqrt(abs(Xvar)))
set(A, 'ZScale', 'log')
xlabel('x (m)')
ylabel('y (m)')
h = figure; A = axes;
mesh(X, Y, sqrt(abs(Yvar)))
set(A, 'ZScale', 'log')
xlabel('x (m)')
ylabel('y (m)')
h = figure; A = axes;
mesh(X, Y, sqrt(abs(Xvar) + abs(Yvar)))
set(A, 'ZScale', 'log')
xlabel('x (m)')
ylabel('y (m)')
title('\surd(std_x^2 + std_y^2) (m)');

%%%%%%%%%%%%%%%%%%%%%%%%%%
function F = FIM_parts(p_i, p, sigma_i, xy1, xy2)
N = size(p_i, 2); % number of nodes
for i = 1:N
    d(i) = norm(p - p_i(:, i));
    a_11(i) = (1 / (sigma_i(i)^2)) * (p(xy1) - p_i(xy1, i)) * (p(xy2) - ...
        p_i(xy2, i)) / (d(i)^2);
end
F = sum(a_11);
end
213 [118] Hongwei Guo. A simple algorithm for fitting a gaussian function [dsp tips and tricks]. IEEE Signal Processing Magazine, 28(5):134–137, 2011. [119] Ian Sharp, Kegen Yu, and Y Jay Guo. Peak and leading edge detection for time-of-arrival estimation in band-limited positioning systems. IET communications, 3(10):1616–1627, 2009. [120] Sean M Ellison and Jeffrey A Nanzer. High-accuracy multinode ranging for coherent dis- tributed antenna arrays. IEEE Trans. Aerosp. Electron. Syst., 56(5):4056–4066, April 2020. [121] Botao Ma, Haowen Chen, Bin Sun, and Huaitie Xiao. A joint scheme of antenna selection and power allocation for localization in mimo radar sensor networks. IEEE communications letters, 18(12):2225–2228, 2014. [122] Zhu Han and KJ Ray Liu. Power minimization under constant throughput constraint in wireless networks with beamforming. In Proceedings IEEE 56th Vehicular Technology Conference, volume 1, pages 611–615. IEEE, 2002. [123] Pratik Chatterjee and Jeffrey A Nanzer. A study of coherent gain degradation due to node vibrations in open loop coherent distributed arrays. In 2017 USNC-URSI Radio Science Meeting (Joint with AP-S Symposium), pages 115–116. IEEE, 2017. [124] Karaputugala Madushan Thilina, Mohammad Moghadari, and Ekram Hossain. Generalized spectral footprint minimization for ofdma-based cognitive radio networks. In 2014 IEEE International Conference on Communications (ICC), pages 1657–1662. IEEE, 2014. [125] Karaputugala G Madushan Thilina, Ekram Hossain, and Mohammad Moghadari. Cellular ofdma cognitive radio networks: Generalized spectral footprint minimization. IEEE Trans- actions on Vehicular Technology, 64(7):3190–3204, 2014. [126] Deborah Cohen, Kumar Vijay Mishra, and Yonina C Eldar. Spectrum sharing radar: Coexis- tence via xampling. IEEE Transactions on Aerospace and Electronic Systems, 54(3):1279– 1296, 2017. [127] Hai Deng and Braham Himed. 
Interference mitigation processing for spectrum-sharing between radar and wireless communications systems. IEEE Transactions on Aerospace and Electronic Systems, 49(3):1911–1919, 2013. [128] Anthony F Martone, Kenneth I Ranney, Kelly Sherbondy, Kyle A Gallagher, and Shannon D Blunt. Spectrum allocation for noncooperative radar coexistence. IEEE Transactions on Aerospace and Electronic Systems, 54(1):90–105, 2017. [129] Tianyao Huang, Yimin Liu, Huadong Meng, and Xiqin Wang. Cognitive random stepped frequency radar with sparse recovery. IEEE Transactions on Aerospace and Electronic Systems, 50(2):858–870, 2014. [130] Simon Haykin. Cognitive radar: a way of the future. IEEE signal processing magazine, 23(1):30–40, 2006. 214 [131] Joseph R Guerci. Cognitive radar: A knowledge-aided fully adaptive approach. In 2010 IEEE Radar Conference, pages 1365–1370. IEEE, 2010. [132] D DeLong and E Hofstetter. On the design of optimum radar waveforms for clutter rejection. IEEE Transactions on Information Theory, 13(3):454–463, 1967. [133] D DeLong and E Hofstetter. The design of clutter-resistant radar waveforms with limited dynamic range. IEEE Transactions on Information Theory, 15(3):376–385, 1969. [134] Antonio De Maio and Alfonso Farina. Cognitive radar signal processing. In Proc. Workshop Math. Issues Inf. Sci., pages 6–18, 2014. [135] Moez Ben Kilani, Yogesh Nijsure, Ghyslain Gagnon, Georges Kaddoum, and François Gagnon. Cognitive waveform and receiver selection mechanism for multistatic radar. IET Radar, Sonar & Navigation, 10(2):417–425, 2016. [136] Gannan Yuan, Xingli Gan, and Wei Zhou. Design a high-precision controller of attitude stabilization in wave-measured radar. In 2007 International Conference on Mechatronics and Automation, pages 3305–3309. IEEE, 2007. [137] Mustafa Yagimli, Ugur Simsir, and Hakan Tozan. Trajectory tracking of a satellite commu- nicated missile with fuzzy and pid control. 
In Proceedings of 5th International Conference on Recent Advances in Space Technologies-RAST2011, pages 318–323. IEEE, 2011. [138] Murad Yaghi and Mehmet Önder Efe. Fractional order pid control of a radar guided missile under disturbances. In 2018 9th International Conference on Information and Communica- tion Systems (ICICS), pages 238–242. IEEE, 2018. [139] Kiam Heong Ang, Gregory Chong, and Yun Li. Pid control system analysis, design, and technology. IEEE transactions on control systems technology, 13(4):559–576, 2005. [140] Arne Svensson. An introduction to adaptive qam modulation schemes for known and pre- dicted channels. Proceedings of the IEEE, 95(12):2322–2336, 2007. [141] Biren Shah and Sami Hinedi. The split symbol moments snr estimator in narrow-band channels. IEEE transactions on aerospace and electronic systems, 26(5):737–747, 1990. [142] Robert B Kerr. On signal and noise level estimation in a coherent pcm channel. IEEE Transactions on Aerospace and Electronic Systems, (4):450–454, 1966. [143] CE Gilchriest. Signal-to-noise monitoring. JPL Space Programs Summary, 4(37-27):169– 184, 1966. [144] Rolf Matzner. An snr estimation algorithm for complex baseband signals using higher-order statistics. Facta Universitatis (Nis), 1993, 6:41–52, 1993. [145] AL Brandao, Luis B Lopes, and Desmond C McLemon. In-service monitoring of multipath delay and cochannel interference for indoor mobile communication systems. In Proceed- ings of ICC/SUPERCOMM’94-1994 International Conference on Communications, pages 1458–1462. IEEE, 1994. 215 [146] David R Pauluzzi and Norman C Beaulieu. A comparison of snr estimation techniques for the awgn channel. IEEE Transactions on communications, 48(10):1681–1691, 2000. [147] Mohamed Hamid, Niclas Björsell, and Slimane Ben Slimane. Sample covariance matrix eigenvalues based blind snr estimation. In 2014 IEEE International Instrumentation and Measurement Technology Conference (I2MTC) Proceedings, pages 718–722. IEEE, 2014. 
[148] N Shafiza Mohd Tamim and Farid Ghani. Techniques for optimization in time delay esti- mation from cross correlation function. Int J Eng Technol, 10(2):69–75, 2010. [149] A. R. Jiménez Ruiz and F. Seco Granja. Comparing ubisense, bespoon, and decawave uwb location systems: Indoor performance analysis. IEEE Transactions on Instrumentation and Measurement, 66(8):2106–2117, 2017. [150] Yiu-Tong Chan, H Yau Chin Hang, and Pak-chung Ching. Exact and approximate maximum likelihood localization algorithms. IEEE Transactions on Vehicular Technology, 55(1):10– 16, 2006. 216