APPLICATIONS OF EVOLUTIONARY ALGORITHMS TO ELECTROMAGNETIC MATERIALS CHARACTERIZATION AND DESIGN PROBLEMS

By

Jonathan Lemoine Frasch

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Electrical Engineering – Doctor of Philosophy

2017

ABSTRACT

APPLICATIONS OF EVOLUTIONARY ALGORITHMS TO ELECTROMAGNETIC MATERIALS CHARACTERIZATION AND DESIGN PROBLEMS

By Jonathan Lemoine Frasch

Determining the electrical permittivity and magnetic permeability of materials is an important task in electromagnetics research. The method using reflection and transmission scattering parameters to determine these constants has been widely employed for many years, ever since the work of Nicolson, Ross, and Weir in the 1970s. For general materials that are homogeneous, linear, and isotropic, the method they developed (the NRW method) works very well and provides an analytical solution. For materials which possess a metal backing or are applied as a coating to a metal surface, it can be difficult or even impossible to obtain a transmission measurement, especially when the coating is thin. In such a circumstance, it is common to resort to a method which uses two reflection-type measurements. There are several such methods for free-space measurements, using multiple angles or polarizations for example. For waveguide measurements, obtaining two independent sources of information from which to extract two complex parameters can be a challenge.

This dissertation covers three different topics. Two of these involve different techniques to characterize conductor-backed materials, and the third proposes a method for designing synthetic validation standards for use with standard NRW measurements. All three of these topics utilize modal expansions of electric and magnetic fields to analyze propagation in stepped rectangular waveguides. Two of the projects utilize evolutionary algorithms (EAs) to design waveguide structures. These algorithms were developed specifically for these projects and utilize fairly recent innovations within the optimization community.

The first characterization technique uses two different versions of a single vertical step in the waveguide. Samples to be tested lie inside the steps with the conductor reflection plane behind them. If the two reflection measurements are truly independent it should be possible to recover the values of two complex parameters, but success of the technique ultimately depends upon how independent the measurements actually are.

Next, a method is demonstrated for developing synthetic verification standards. These standards are created from combinations of vertical steps formed from a single piece of metal or metal-coated plastic. These fully insertable structures mimic some of the measurement characteristics of typical lab specimens and thus provide a useful tool for verifying the proper calibration and function of the experimental setup used for NRW characterization. These standards are designed with the use of an EA, which compares possible designs based on the quality of the match with target parameter values. Several examples have been fabricated and tested, and the design specifications and results are presented.

Finally, a second characterization technique is considered. This method uses multiple vertical steps to construct an error-reducing structure within the waveguide, which allows parameters to be reliably extracted using both reflection and transmission measurements.
These structures are designed with an EA, measuring fitness by the reduction of error in the extracted parameters. An additional EA is used to assist in the extraction of the material parameters by supplying better initial guesses to a secant method solver. This hybrid approach greatly increases the stability of the solver and increases the speed of parameter extractions. Several designs have been identified and are analyzed.

Copyright by
JONATHAN LEMOINE FRASCH
2017

To my family, for their support and faith through many long years.

ACKNOWLEDGMENTS

I first wish to thank my family and friends. I especially wish to thank my parents. Their love and support made this journey possible. It has been many years since that journey began in 2002 when I first embarked on the adventure known as college. Since those first tentative steps, I have had many mentors and teachers along the way. One mentor in particular, Dr. Ronald Weger, was always available to help me through the difficult concepts that are so fundamental to everything in mathematics, science, and engineering. The faculty in various courses over the years have challenged me in many ways, always with the intent to enlighten and educate me. Through hard work and perseverance, these challenges have made me a better scholar and researcher. Fellow students have shared experience and paved the way for the work I have pursued. With these, I have journeyed not alone but with a great many companions along the way.

This journey has not been without interruption. Through a period of personal struggle and self-doubt, it was my faith and my family that gave me the courage to not give up. I owe a debt of gratitude to the faculty of the electromagnetics group here at Michigan State, especially my adviser Dr. Edward Rothwell, who gave me a chance to overcome those doubts and to press on to the next phase of my journey. I offer thanks to my committee for their patience and advice. I also wish to thank the staff of the ECE department here at MSU for their invaluable assistance with fabrication and testing. Many thanks to Virtual EM and the government agencies that have provided me with several great projects to participate in over the years as a Research Assistant. Without this funding I would have had a difficult time pursuing this degree. Even though the projects were not directly related to my dissertation, I learned a lot of the skills and techniques that have allowed me to complete my own work. I also wish to thank Gene Livingston and his employees, who provided key manufacturing support to my research.

The journey continues, and I hope to find more companions to share it with me. I sincerely hope to share with those who follow the gifts and support that have been so freely offered to me.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER 1 Introduction
1.1 History and Motivation
1.2 Outline of Discussion Topics
1.3 Summary of Personal Contributions
1.4 Work Remaining

CHAPTER 2 Introduction to Electromagnetic Material Characterization
2.1 General Theory of Measurements for Characterization
2.2 Nicolson-Ross-Weir Methodology for Homogeneous, Isotropic, Linear Materials
2.2.1 The Modal Field Equations
2.2.2 Application to a Rectangular Waveguide
2.2.3 Measurement Error
2.3 Conductor-Backed Samples

CHAPTER 3 Material Characterization Using a Single Stepped Waveguide with a Conductor Reflection Plane
3.1 Introduction to the Mode-Matching Technique
3.2 Modal Expansion For the Waveguide Transition
3.2.1 Integrals Evaluated by Type
3.2.2 Accelerating the Solution
3.2.3 Validation of Method for Solving the Forward Type Problem
3.3 The Inverse Type Problem
3.3.1 Investigation of the Solution Technique for the Inverse Type Problem
3.4 Conclusions

CHAPTER 4 Theoretical Introduction to Multi-Step Waveguides
4.1 Introduction
4.2 Theory for Successive Waveguide Transitions
4.3 Application of Recursive Mode-Matching

CHAPTER 5 Theoretical Background for Evolutionary Optimization and Search Algorithms
5.1 Introduction
5.2 Optimization Theory
5.2.1 Survey of Some Traditional Optimization Algorithms
5.2.2 Gradient Method (Steepest Descent)
5.2.3 Golden Section
5.2.4 Secant Methods
5.3 Basics of Evolutionary Algorithms
5.3.1 Chromosomes
5.3.2 Fitness
5.3.3 Constraints Versus Penalties
5.3.4 Driving Evolution of Designs
5.3.5 Inheritance and Crossover
5.3.6 Diversity
5.3.7 Mutation
5.4 More Advanced Concepts
5.4.1 Real Number Genetics
5.4.2 Modifications to Inheritance
5.4.3 Modifications to Mutation
5.4.4 Multiple Dimensions of Fitness
5.4.5 Domination Concept
5.5 Closing Remarks

CHAPTER 6 Designing Surrogate Magnetic Materials for Nicolson-Ross-Weir Waveguide Measurement Validation
6.1 Introduction
6.2 Design Methodology and Evolutionary Algorithm
6.2.1 Chromosome Structure
6.2.2 Fitness Measures
6.2.3 Selection Process and Inheritance
6.2.4 Mutation
6.3 Fabrication and Testing of a Brass WR-284 Standard
6.4 Fabrication and Testing of a 3-D-printed WR-284 Standard
6.5 Conclusions

CHAPTER 7 Designing Optimized Structures to Benefit the Parameter Extraction of Conductor-Backed Materials
7.1 Introduction
7.2 The Extraction Algorithm
7.2.1 The Guess Generating EA
7.2.2 Guess Generating Algorithm, Chromosome Structure
7.2.3 Guess Generating Algorithm, Fitness Measures
7.2.4 Guess Generating Algorithm, Selection and Reproduction
7.2.5 Guess Generating Algorithm, Mutation
7.2.6 The Secant Method as Implemented
7.3 The Design Algorithm
7.3.1 Design Algorithm, Chromosome Structure
7.3.2 Design Algorithm, Fitness Measures
7.3.3 Design Algorithm, Selection and Reproduction
7.3.4 Additional Comments About the Design Algorithm
7.4 Evaluating Prospective Designs
7.4.1 Baseline Design
7.4.2 Test Methodology
7.4.3 EA Results
7.4.4 EA Results When Optimizing With Teflon
7.4.5 EA Results When Optimizing With MagMat1
7.4.6 EA Results When Optimizing With VeroWhite
7.5 Comparison of Designs
7.6 Conclusions

APPENDIX
BIBLIOGRAPHY

LIST OF TABLES

Table 3.1 Summary of modal forms.
Table 3.2 Cases for Cpn.
Table 3.3 Results of cases for Cpn.
Table 3.4 Results of cases for Dpn.
Table 3.5 Continued results of cases for Dpn.
Table 3.6 Continued results of cases for Dpn.
Table 3.7 Continued results of cases for Dpn.
Table 3.8 Results of cases for Fpn.
Table 3.9 Results of cases for Epn.
Table 3.10 Continued results of cases for Epn.
Table 3.11 Continued results of cases for Epn.
Table 3.12 Continued results of cases for Epn.
Table 3.13 Geometry and parameters for validation with HFSS.
Table 3.14 Validation with HFSS, results.
Table 3.15 Code results versus number of modes, Teflon.
Table 3.16 Full height comparison to theory, 10.4 GHz, FGM-125.
Table 3.17 Noiseless data, two step sizes, 10.4 GHz, 500 modes.
Table 6.1 Dimensions of the Tested WR-284 Standard.
Table 7.1 Results for the baseline design: average of the standard deviation.
Table 7.2 Dimensions and results for the Teflon design: average of the standard deviation.
Table 7.3 Dimensions and results for the MagMat1 design: average of the standard deviation.
Table 7.4 Dimensions and results for the VeroWhite design: average of the standard deviation.
Table 7.5 Comparison of Designs: average of the standard deviation.

LIST OF FIGURES

Figure 2.1 Side view of rectangular waveguide with sample.
Figure 3.1 Rectangular waveguide with single step and sample.
Figure 3.2 Difference in magnitude of S11 between n and n-1 modes, Teflon.
Figure 3.3 Difference in magnitude of S11 between n and n-1 modes, FGM-125.
Figure 3.4 Result of extracting parameters while varying the second step height used: Teflon, 9 GHz, no noise.
Figure 3.5 Result of extracting parameters while varying the second step height used: Teflon, 10 GHz, no noise.
Figure 3.6 Result of extracting parameters while varying the second step height used: FGM-125, 9 GHz, no noise.
Figure 3.7 Result of extracting parameters while varying the second step height used: FGM-125, 10 GHz, no noise.
Figure 3.8 Result of extracting parameters while varying the second step height used: Teflon, 9 GHz, full noise.
Figure 3.9 Result of extracting parameters while varying the second step height used: Teflon, 10 GHz, full noise.
Figure 3.10 Result of extracting parameters while varying the second step height used: FGM-125, 9 GHz, full noise.
Figure 3.11 Result of extracting parameters while varying the second step height used: FGM-125, 10 GHz, full noise.
Figure 3.12 Result of extracting parameters while varying the second step height used: Teflon, 9 GHz, 1/100 noise.
Figure 3.13 Result of extracting parameters while varying the second step height used: Teflon, 10 GHz, 1/100 noise.
Figure 3.14 Result of extracting parameters while varying the second step height used: FGM-125, 9 GHz, 1/100 noise.
Figure 3.15 Result of extracting parameters while varying the second step height used: FGM-125, 10 GHz, 1/100 noise.
Figure 4.1 Side view of rectangular waveguide with multiple dimension transitions.
Figure 4.2 Side view of rectangular waveguide with two dimensional transitions.
Figure 6.1 Brass WR-284 verification standard.
Figure 6.2 WR-284 waveguide sample holder with verification standard in place.
Figure 6.3 Relative permittivity of the brass WR-284 verification standard.
Figure 6.4 Measurement error and standard deviation of the measured permittivity for the brass WR-284 verification standard.
Figure 6.5 Relative permeability of the brass WR-284 verification standard.
Figure 6.6 Measurement error and standard deviation of the measured permeability for the brass WR-284 verification standard.
Figure 6.7 3-D printed WR-284 verification standard with copper electroplating.
Figure 6.8 Relative permittivity of the 3-D-printed WR-284 verification standard.
Figure 6.9 Measurement error and standard deviation of the measured permittivity for the 3-D-printed WR-284 verification standard.
Figure 6.10 Relative permeability of the 3-D-printed WR-284 verification standard.
Figure 6.11 Measurement error and standard deviation of the measured permeability for the 3-D-printed WR-284 verification standard.
Figure 7.1 Teflon optimal front progression, after 20 generations.
Figure 7.2 Teflon optimal front with general population, after 20 generations.
Figure 7.3 Teflon optimal design, rough sketch.
Figure 7.4 MagMat1 optimal front with general population, after 20 generations.
Figure 7.5 MagMat1 optimal front progression, after 20 generations.
Figure 7.6 MagMat1 optimal design, rough sketch.
Figure 7.7 VeroWhite optimal front progression, after 20 generations.
Figure 7.8 VeroWhite optimal front with general population, after 20 generations.
Figure 7.9 VeroWhite optimal design, rough sketch.
Figure A.10 Base portion of the 3D printed structure.
Figure A.11 Top portion of the 3D printed structure.
Figure A.12 Combined portions of the 3D printed structure.
Figure A.13 Hole for insertion of sample into the 3D printed structure.
Figure A.14 Gaps on the sides of the 3D printed structure.
Figure A.15 Gaps on the sides of the 3D printed structure after being bolted to the standard waveguides.
Figure A.16 Taping the sides of the 3D printed structure.
CHAPTER 1

Introduction

1.1 History and Motivation

Determining the electrical permittivity ε and magnetic permeability µ of materials is an important task in electromagnetics research. The method using reflection and transmission scattering parameters to determine these constants has been widely employed for many years, ever since the work of Nicolson, Ross, and Weir in the 1970s. Their work is provided in [1] and [2]. For general materials that are homogeneous, linear, and isotropic, the method they developed (the NRW method) works very well and provides an analytical solution. For materials which possess a metal backing or are applied as a coating to a metal surface, it can be difficult or even impossible to obtain a transmission measurement, especially when the coating is thin. In such a circumstance, it is common to resort to a method which uses two reflection-type measurements. There are several such methods for free-space measurements, using multiple angles or polarizations for example. For waveguide measurements, obtaining two independent sources of information from which to extract two complex parameters can be a challenge. A survey of several such techniques is given in [3].

Several projects were recently completed using the concepts of waveguide transitions to provide different reflection-only measurements in waveguides for anisotropic materials. These topics are discussed in detail in [4]. For homogeneous, isotropic, conductor-backed samples, the use of an iris was considered. For bi-axial and gyromagnetic materials the use of reduced-aperture waveguides was considered. These reductions were analyzed with mode-matching and provided a variety of useful features. Two of these benefits were the ability to analyze a single sample in three different orientations rather than using three different samples, and a reduction of the sample sizes required. Also discussed in [4] was the fabrication of a surrogate waveguide standard formed by a number of waveguide apertures. An evolutionary algorithm was used to determine the best spacing and size of the apertures to simulate the desired material properties. While good results were obtained in this work, the properties synthesized were not as frequency independent as desired and the standards were not insertable. Not being insertable prevents them from fully replicating some of the error-inducing defects encountered when characterizing typical samples in a rectangular waveguide, such as air gaps or misalignment of the sample.

Based on these previous works, a series of topics was identified to be the content of this dissertation. The first topic is a single stepped waveguide to characterize homogeneous, isotropic, conductor-backed samples. The next topic is applying more sophisticated evolutionary algorithms capable of designing more complex, fully insertable structures that can serve as waveguide verification standards. Such standards would also be designed with less frequency dependence for permittivity if possible. Lastly, these more advanced evolutionary algorithms are used to provide a way to extract the permittivity and permeability of conductor-backed samples in rectangular waveguides using both reflection and transmission measurements while at the same time reducing the error in these estimated parameters.

1.2 Outline of Discussion Topics

This dissertation covers three topics.
Two of these involve different techniques to characterize conductor-backed materials, and the third proposes a method for designing synthetic validation standards for use with standard NRW measurements. All three of these topics utilize modal expansions of electric and magnetic fields to analyze propagation in stepped rectangular waveguides. Two of the projects utilize evolutionary algorithms (EAs) to design waveguide structures. These algorithms were developed specifically for these projects and utilize fairly recent innovations within the optimization community.

A review of the theory behind the Nicolson-Ross-Weir technique is provided in Chapter 2 of this dissertation. This is the gold standard for material characterization and is frequently used. It is prominent because it provides a closed-form solution for the electromagnetic material parameters given two measured scattering parameters. The techniques developed for this dissertation must be viewed in the context of the existing approach. Since these techniques rely on numerical solutions, it is desirable that they be at least as reliable as the NRW method if possible. However, in a situation where NRW cannot be used, some increase in the error of the extraction may be acceptable. One such situation is a conductor-backed sample.

The characterization technique considered in Chapter 3 of this dissertation uses two different versions of a single vertical step in the waveguide. Samples to be tested lie inside the steps with the conductor reflection plane behind them. The full analysis of the modal expansions is provided in this dissertation, along with several numerical experiments. If the two reflection measurements are truly independent it should be possible to recover the values of two complex parameters, but success of the technique ultimately depends upon how independent the measurements actually are. Ultimately this technique was found to be unreliable.

Chapter 4 provides a brief discussion of the recursive approach to analyzing multi-step rectangular waveguides. Chapter 5 provides a brief review of the concepts involved in developing evolutionary algorithms.

Next, in Chapter 6, a method is demonstrated for developing synthetic verification standards. These standards are created from combinations of vertical steps formed from a single piece of metal or metal-coated plastic. These fully insertable structures mimic some of the measurement characteristics of typical lab specimens and thus provide a useful tool for verifying the proper calibration and function of the experimental setup used for NRW characterization. These standards are designed with the use of an EA, which compares possible designs based on the quality of the match with target parameter values. Several examples have been fabricated and tested, and the design specifications and results are presented.

Finally, a second characterization technique is considered in Chapter 7. This method uses multiple vertical steps to construct an error-reducing structure within the waveguide, which allows parameters to be reliably extracted using both reflection and transmission measurements. These structures are designed with an EA, measuring fitness by the reduction of error in the extracted parameters. An additional EA is used to assist in the extraction of the material parameters by supplying better initial guesses to a secant method solver. This hybrid approach greatly increases the stability of the solver and increases the speed of parameter extractions.
Several designs have been identified and are analyzed.

1.3 Summary of Personal Contributions

Beginning with my Qualifying Exam Part B (which covered the topic discussed in Chapter 3), I worked on a series of projects involving electromagnetic characterization of materials. Using the FORTRAN code, provided by Dr. Rothwell, which computes the scattering parameters of a sequence of stepped, rectangular waveguide sections, I developed a series of evolutionary algorithm implementations used to accomplish the tasks discussed in Chapter 6 and Chapter 7. Specifically, this included an algorithm to design a surrogate, magnetic waveguide standard material, an algorithm to extract the permittivity and permeability of a conductor-backed, homogeneous, isotropic material resting within a sequence of waveguide transitions, and an algorithm to design structures that reduce the error in such extractions as much as possible. I also arranged for the fabrication and measurement of a selection of the waveguide standards proposed.

1.4 Work Remaining

Fabrication and testing of the structures to reduce error in the characterization of conductor-backed samples is underway; however, the initial attempts have met with fabrication issues. Attempts to use 3D printing and metal plating have shown significant challenges to obtaining a properly functioning structure. These include incomplete metallization, significant bowing and distortion of the structure's dimensions, and difficulties with the assembly of the structure (likely due to the distortion of the dimensions). To overcome these challenges, two approaches are indicated. First, the part could be assembled from solid pieces of metal. This would be expensive, but it should be possible to achieve much better fabrication quality. Second, the part could be assembled from smaller pieces, for which it will be easier to control the dimensions and metallization.

CHAPTER 2

Introduction to Electromagnetic Material Characterization

2.1 General Theory of Measurements for Characterization

Determination of the material properties of an unknown material is generally done through comparing its response to certain stimuli with the responses of known standards. Often physical properties can be represented by a linear model; using this approach requires that one independent source of information is needed for each parameter sought. When two or more parameters are closely related, the specific contributions of each may be difficult to discern. Additionally, many higher order effects may be due to combinations of the parameters.

Electrical permittivity, ε, and permeability, µ, are generally understood to reflect the interplay between coupled electric and magnetic fields. These are reflected most succinctly through the Maxwell equations, shown here in phasor form for a source-free medium along with the relevant constitutive relations:

\nabla \times \bar{E} = -j\omega\mu\bar{H} \qquad (2.1)
\nabla \times \bar{H} = j\omega\epsilon\bar{E} \qquad (2.2)
\nabla \cdot \bar{E} = 0 \qquad (2.3)
\nabla \cdot \bar{H} = 0 \qquad (2.4)
\bar{D} = \epsilon\bar{E} \qquad (2.5)
\bar{B} = \mu\bar{H}. \qquad (2.6)

2.2 Nicolson-Ross-Weir Methodology for Homogeneous, Isotropic, Linear Materials

2.2.1 The Modal Field Equations

Since the electric and magnetic fields are represented as complex vector fields, ε and µ are understood to be complex parameters, and it is necessary to identify two independent, complex relationships in order to determine them both. Many techniques exist, and one of them is the Nicolson-Ross-Weir (NRW) technique.
This technique is largely derived from two key papers: "Measurement of the Intrinsic Properties of Materials by Time-Domain Techniques" [1] and "Automatic Measurement of Complex Dielectric Constant and Permeability at Microwave Frequencies" [2]. Taken together, these papers outline an analytical method for extracting both ε and µ for homogeneous, source-free, isotropic, and linear materials. This is often utilized in rectangular waveguide measurements, which provide reflection and transmission measurements simultaneously by using a two-port network analyzer. To satisfy the theory of NRW, the samples must completely fill the cross-section of the waveguide with no air gaps or misalignment. Also, enough line length is required to ensure that only a single TE10 mode is propagating into the material under test.

The derivation begins with Maxwell's equations for source-free, homogeneous, linear, isotropic regions, with no free electrical charges. Taking the curl of Equation (2.1) gives

\nabla \times \nabla \times \bar{E} = -j\omega\mu\,\nabla \times \bar{H}. \qquad (2.7)

Substituting from Equation (2.2) gives

\nabla \times \nabla \times \bar{E} = \omega^2\mu\epsilon\bar{E}, \qquad (2.8)

which reduces to

\nabla \times \nabla \times \bar{E} - \omega^2\mu\epsilon\bar{E} = 0. \qquad (2.9)

Using the vector identity

\nabla \times \nabla \times \bar{E} = \nabla\left(\nabla \cdot \bar{E}\right) - \nabla^2\bar{E} \qquad (2.10)

and substituting from Equation (2.9) then gives

\nabla^2\bar{E} + \omega^2\mu\epsilon\bar{E} = 0. \qquad (2.11)

By a similar process an equivalent expression to Equation (2.11) may be found for the magnetic field:

\nabla^2\bar{H} + \omega^2\mu\epsilon\bar{H} = 0. \qquad (2.12)

Defining the wave number, k, by

k^2 = \omega^2\mu\epsilon, \qquad (2.13)

Equation (2.11) and Equation (2.12) can be rewritten as

\nabla^2\bar{E} + k^2\bar{E} = 0 \qquad (2.14)
\nabla^2\bar{H} + k^2\bar{H} = 0. \qquad (2.15)

In order to develop expressions for fields within the rectangular waveguide some knowledge of the behavior of the fields within the guide must be applied. It is expected that the fields will form a standing wave pattern in the transverse directions, while a traveling wave behavior will exist for the longitudinal direction of the guide. There will be forward and reverse versions of these waves, and they may be superposed to form a complete solution. To get these expressions, several equations must be decomposed into longitudinal and transverse components. Decompositions for two important operators are

\nabla = \nabla_t + \hat{z}\frac{\partial}{\partial z} \qquad (2.16)
\nabla^2 = \nabla_t^2 + \frac{\partial^2}{\partial z^2}, \qquad (2.17)

where \nabla_t = \hat{x}\frac{\partial}{\partial x} + \hat{y}\frac{\partial}{\partial y}, \nabla_t^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}, \bar{E}_t = \bar{E} - \hat{z}E_z, and \bar{H}_t = \bar{H} - \hat{z}H_z. Separating equations (2.1) and (2.2) by component gives

\nabla_t \times \bar{E}_t = -j\omega\mu\,\hat{z}H_z \qquad (2.18)
\hat{z}\frac{\partial}{\partial z} \times \bar{E}_t + \nabla_t \times \hat{z}E_z = -j\omega\mu\bar{H}_t \qquad (2.19)
\nabla_t \times \bar{H}_t = j\omega\epsilon\,\hat{z}E_z \qquad (2.20)
\hat{z}\frac{\partial}{\partial z} \times \bar{H}_t + \nabla_t \times \hat{z}H_z = j\omega\epsilon\bar{E}_t. \qquad (2.21)

Taking the transverse curl of Equation (2.18), multiplying Equation (2.21) by jωε, and using elimination yields an equation for the transverse component of the electric field in terms of the longitudinal components of Ē and H̄. A similar process yields the same form for the magnetic field:

\left(\frac{\partial^2}{\partial z^2} + k^2\right)\bar{E}_t = \nabla_t\frac{\partial E_z}{\partial z} - j\omega\mu\,\nabla_t \times \hat{z}H_z \qquad (2.22)
\left(\frac{\partial^2}{\partial z^2} + k^2\right)\bar{H}_t = \nabla_t\frac{\partial H_z}{\partial z} + j\omega\epsilon\,\nabla_t \times \hat{z}E_z. \qquad (2.23)

The fields, already divided into forward and backward traveling waves, may be further divided into modes that are TE and TM to the longitudinal direction of the waveguide, that is, transverse in electric field or transverse in magnetic field respectively. This means that the full field expression may be constructed as a superposition of fields with no z component of Ē or fields with no z component of H̄. To get the TE modes of the forward traveling wave assume that

E_z = 0. \qquad (2.24)
Solution by separation of variables of the ẑ component of Equation (2.15) gives

H_z = H^+_{mn}\left(a_1\cos(k_x x) + a_2\sin(k_x x)\right)\left(a_3\cos(k_y y) + a_4\sin(k_y y)\right)e^{-j\beta_{mn}z}, \qquad (2.25)

where β_{mn} = k_z for the mode m, n. Making substitutions into Equation (2.22) and Equation (2.23) gives

\left(-\beta_{mn}^2 + k^2\right)\bar{E}_t = -j\omega\mu\,\nabla_t \times \hat{z}H_z \qquad (2.26)
\left(-\beta_{mn}^2 + k^2\right)\bar{H}_t = \nabla_t\frac{\partial H_z}{\partial z}. \qquad (2.27)

These equations may then be further separated into rectangular components as

\left(-\beta_{mn}^2 + k^2\right)E_x = -j\omega\mu\frac{\partial H_z}{\partial y} \qquad (2.28)
\left(-\beta_{mn}^2 + k^2\right)E_y = -j\omega\mu\left(-\frac{\partial H_z}{\partial x}\right) \qquad (2.29)
\left(-\beta_{mn}^2 + k^2\right)H_x = \frac{\partial^2 H_z}{\partial x\partial z} \qquad (2.30)
\left(-\beta_{mn}^2 + k^2\right)H_y = \frac{\partial^2 H_z}{\partial y\partial z}. \qquad (2.31)

Letting E^+_{mn} = \frac{j\omega\mu H^+_{mn}}{-\beta_{mn}^2 + k^2}, the equations simplify to

E_z = 0 \qquad (2.32)
H_z = E^+_{mn}\frac{-\beta_{mn}^2 + k^2}{j\omega\mu}\left(a_1\cos(k_x x) + a_2\sin(k_x x)\right)\left(a_3\cos(k_y y) + a_4\sin(k_y y)\right)e^{-j\beta_{mn}z} \qquad (2.33)
E_x = -E^+_{mn}\frac{\partial}{\partial y}\left[\left(a_1\cos(k_x x) + a_2\sin(k_x x)\right)\left(a_3\cos(k_y y) + a_4\sin(k_y y)\right)\right]e^{-j\beta_{mn}z} \qquad (2.34)
E_y = E^+_{mn}\frac{\partial}{\partial x}\left[\left(a_1\cos(k_x x) + a_2\sin(k_x x)\right)\left(a_3\cos(k_y y) + a_4\sin(k_y y)\right)\right]e^{-j\beta_{mn}z} \qquad (2.35)
H_x = \frac{E^+_{mn}}{j\omega\mu}\frac{\partial^2}{\partial x\partial z}\left[\left(a_1\cos(k_x x) + a_2\sin(k_x x)\right)\left(a_3\cos(k_y y) + a_4\sin(k_y y)\right)e^{-j\beta_{mn}z}\right] \qquad (2.36)
H_y = \frac{E^+_{mn}}{j\omega\mu}\frac{\partial^2}{\partial y\partial z}\left[\left(a_1\cos(k_x x) + a_2\sin(k_x x)\right)\left(a_3\cos(k_y y) + a_4\sin(k_y y)\right)e^{-j\beta_{mn}z}\right]. \qquad (2.37)

Next apply the boundary conditions for the waveguide: E_x = 0 at y = 0, b and E_y = 0 at x = 0, a. Thus a_1 = 1, a_2 = 0, a_3 = 1, a_4 = 0, k_x = mπ/a, and k_y = nπ/b. The resulting equations are as follows:

E_z = 0 \qquad (2.38)
H_z = E^+_{mn}\frac{-\beta_{mn}^2 + k^2}{j\omega\mu}\cos\left(\frac{m\pi}{a}x\right)\cos\left(\frac{n\pi}{b}y\right)e^{-j\beta_{mn}z} \qquad (2.39)
E_x = -E^+_{mn}\frac{\partial}{\partial y}\left[\cos\left(\frac{m\pi}{a}x\right)\cos\left(\frac{n\pi}{b}y\right)\right]e^{-j\beta_{mn}z} \qquad (2.40)
E_y = E^+_{mn}\frac{\partial}{\partial x}\left[\cos\left(\frac{m\pi}{a}x\right)\cos\left(\frac{n\pi}{b}y\right)\right]e^{-j\beta_{mn}z} \qquad (2.41)
H_x = \frac{E^+_{mn}}{j\omega\mu}\frac{\partial^2}{\partial x\partial z}\left[\cos\left(\frac{m\pi}{a}x\right)\cos\left(\frac{n\pi}{b}y\right)e^{-j\beta_{mn}z}\right] \qquad (2.42)
H_y = \frac{E^+_{mn}}{j\omega\mu}\frac{\partial^2}{\partial y\partial z}\left[\cos\left(\frac{m\pi}{a}x\right)\cos\left(\frac{n\pi}{b}y\right)e^{-j\beta_{mn}z}\right]. \qquad (2.43)

A similar process is used for the reverse traveling wave. Decomposition of either Equation (2.14) or (2.15) for the z component will yield an expression showing the relationship among the wave numbers,

k_x^2 + k_y^2 + \beta_{mn}^2 = k^2. \qquad (2.44)

As an example consider the TE10 mode, with Z_{10} = \frac{\omega\mu}{\beta_{10}}. The results are

E_z = 0 \qquad (2.45)
H_z = \frac{-\beta_{10}^2 + k^2}{j\omega\mu}\cos\left(\frac{\pi}{a}x\right)\left[E^+_{10}e^{-j\beta_{10}z} + E^-_{10}e^{+j\beta_{10}z}\right] \qquad (2.46)
E_x = 0 \qquad (2.47)
E_y = -\frac{\pi}{a}\sin\left(\frac{\pi}{a}x\right)\left[E^+_{10}e^{-j\beta_{10}z} + E^-_{10}e^{+j\beta_{10}z}\right] \qquad (2.48)
H_x = \frac{\pi}{a}\sin\left(\frac{\pi}{a}x\right)\left[\frac{E^+_{10}}{Z_{10}}e^{-j\beta_{10}z} - \frac{E^-_{10}}{Z_{10}}e^{+j\beta_{10}z}\right] \qquad (2.49)
H_y = 0 \qquad (2.50)

2.2.2 Application to a Rectangular Waveguide

The test material fills only a portion of the length of the waveguide. This defines three distinct regions in the waveguide, each with different material properties. See Figure 2.1 for a basic illustration.

Figure 2.1 Side view of rectangular waveguide with sample.

Starting in order from the Port 1 side of the network analyzer, the region before the material is Region I where z < 0, the material under test is Region II where 0 < z < l, and the region after the material is Region III where z > l. Regions I and III are usually filled with air. Boundary conditions on the transverse components of electric and magnetic fields between two dielectrics require that the total fields match on both front and back sides of the material. These four conditions are

\bar{E}_{t,I} = \bar{E}_{t,II} \quad \text{at } z = 0 \qquad (2.51)
\bar{H}_{t,I} = \bar{H}_{t,II} \quad \text{at } z = 0 \qquad (2.52)
\bar{E}_{t,II} = \bar{E}_{t,III} \quad \text{at } z = l \qquad (2.53)
\bar{H}_{t,II} = \bar{H}_{t,III} \quad \text{at } z = l. \qquad (2.54)

The modes are all trigonometric forms and may be summed together to construct the complete field expressions. By multiplying these boundary equations by a specific modal form, and integrating across the entire boundary, it is possible to construct an equation in terms of a single mode.
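Before specializing the boundary conditions to the dominant mode, it can help to see the TE10 quantities computed numerically. The sketch below evaluates β10 and Z10 from Equations (2.13) and (2.44); it is an illustration added here for convenience (assuming NumPy), not part of the original derivation, and the example values reuse the WR-90 dimensions and FGM-125 properties quoted later in Table 3.13.

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)
MU0 = 4.0e-7 * np.pi      # vacuum permeability (H/m)

def te10_constants(f, a, eps_r=1.0, mu_r=1.0):
    """Propagation constant and wave impedance of the TE10 mode.

    Uses k^2 = w^2*mu*eps (Eq. 2.13), beta_10 = sqrt(k^2 - (pi/a)^2)
    (Eq. 2.44 with m = 1, n = 0), and Z_10 = w*mu/beta_10 for a guide of
    width a filled with complex relative parameters eps_r, mu_r.
    """
    w = 2.0 * np.pi * f
    k = w * np.sqrt(MU0 * mu_r * EPS0 * eps_r + 0j)
    beta10 = np.sqrt(k**2 - (np.pi / a) ** 2 + 0j)
    Z10 = w * MU0 * mu_r / beta10
    return beta10, Z10

# Example: an X-band (WR-90, a = 0.9 inch) guide at 10 GHz, empty and then
# filled with the FGM-125 values quoted later in Table 3.13
beta_air, Z_air = te10_constants(10e9, 0.02286)
beta_mat, Z_mat = te10_constants(10e9, 0.02286,
                                 7.319669 - 0.046408j, 0.575582 - 0.484231j)
```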
Since only the TE10 mode propagates in the rectangular waveguide within the specified frequency band, the system of equations becomes:

E^+_I e^{-j\beta_I z} + E^-_I e^{+j\beta_I z} = E^+_{II}e^{-j\beta_{II}z} + E^-_{II}e^{+j\beta_{II}z} \quad \text{at } z = 0 \qquad (2.55)
\frac{E^+_I}{Z_I}e^{-j\beta_I z} - \frac{E^-_I}{Z_I}e^{+j\beta_I z} = \frac{E^+_{II}}{Z_{II}}e^{-j\beta_{II}z} - \frac{E^-_{II}}{Z_{II}}e^{+j\beta_{II}z} \quad \text{at } z = 0 \qquad (2.56)
E^+_{II}e^{-j\beta_{II}z} + E^-_{II}e^{+j\beta_{II}z} = E^+_{III}e^{-j\beta_{III}z} \quad \text{at } z = l \qquad (2.57)
\frac{E^+_{II}}{Z_{II}}e^{-j\beta_{II}z} - \frac{E^-_{II}}{Z_{II}}e^{+j\beta_{II}z} = \frac{E^+_{III}}{Z_{III}}e^{-j\beta_{III}z} \quad \text{at } z = l. \qquad (2.58)

If Region I and Region III have the same properties, the same wave behavior is expected in both regions. Further, the incoming signal is presumably known, thus it is preferred to consider the ratios of the other parameters to the incoming signal. The definitions

S_{11} = \frac{E^-_I}{E^+_I}, \quad S_{21} = \frac{E^+_{III}e^{-j\beta_I l}}{E^+_I}, \quad P = e^{-j\beta_{II}l}, \quad \Gamma = \frac{Z_{II} - Z_I}{Z_{II} + Z_I}, \quad a = \frac{E^+_{II}}{E^+_I}, \quad b = \frac{E^-_{II}}{E^+_I}

leave:

1 + S_{11} = a + b \quad \text{at } z = 0 \qquad (2.59)
1 - S_{11} = \frac{1-\Gamma}{1+\Gamma}(a - b) \quad \text{at } z = 0 \qquad (2.60)
aP + \frac{b}{P} = S_{21} \quad \text{at } z = l \qquad (2.61)
\frac{1-\Gamma}{1+\Gamma}\left(aP - \frac{b}{P}\right) = S_{21} \quad \text{at } z = l. \qquad (2.62)

Using elimination on Equations (2.59) and (2.60) the following expressions for a and b may be found:

a = \frac{1 - \Gamma S_{11}}{1 - \Gamma} \qquad (2.63)
b = \frac{S_{11} - \Gamma}{1 - \Gamma}. \qquad (2.64)

This leaves a system of two equations in two unknowns. These equations are

P^2\frac{1 - \Gamma S_{11}}{1 - \Gamma} + \frac{S_{11} - \Gamma}{1 - \Gamma} = S_{21}P \qquad (2.65)
\frac{1-\Gamma}{1+\Gamma}\left(P^2\frac{1 - \Gamma S_{11}}{1 - \Gamma} - \frac{S_{11} - \Gamma}{1 - \Gamma}\right) = S_{21}P. \qquad (2.66)

These simplify to

P^2 - \Gamma + S_{11}\left(1 - P^2\Gamma\right) = S_{21}P(1 - \Gamma) \qquad (2.67)
P^2 + \Gamma - S_{11}\left(P^2\Gamma + 1\right) = S_{21}P(1 + \Gamma). \qquad (2.68)

Solving for S11 and S21 gives

S_{11} = \frac{\Gamma\left(1 - P^2\right)}{1 - P^2\Gamma^2} \qquad (2.69)
S_{21} = \frac{P\left(1 - \Gamma^2\right)}{1 - P^2\Gamma^2}. \qquad (2.70)

Returning to the system outlined in Equations (2.67) and (2.68), and this time eliminating P², leaves an equation for P in terms of Γ and the scattering parameters, which is

P = \frac{\Gamma - S_{11}}{S_{21}\Gamma}. \qquad (2.71)

By using this definition for P a solution for Γ alone can be found from a ratio of the scattering parameters as

\frac{S_{21}}{S_{11}} = \frac{P\left(1 - \Gamma^2\right)}{\Gamma\left(1 - P^2\right)}. \qquad (2.72)

Next simplify to

\frac{S_{21}}{S_{11}} = \frac{\frac{1}{\Gamma} - \Gamma}{\frac{1}{P} - P}. \qquad (2.73)

Then gather all of the terms on one side and simplify to

\Gamma^2 S_{11} + \Gamma\left(S_{21}^2 - 1 - S_{11}^2\right) + S_{11} = 0. \qquad (2.74)

The following definition is typically used in the NRW methodology to simplify:

X = \frac{1 - (S_{21} + S_{11})(S_{21} - S_{11})}{(S_{21} + S_{11}) - (S_{21} - S_{11})}. \qquad (2.75)

Using this substitution, it is possible to rewrite Equation (2.74) as

\Gamma^2 - 2\Gamma X + 1 = 0. \qquad (2.76)

Next complete the square, giving

(\Gamma - X)^2 = X^2 - 1. \qquad (2.77)

Then solve for Γ, which leaves

\Gamma = X \pm \sqrt{X^2 - 1}. \qquad (2.78)

Selection of the correct root of Γ requires some care. To ensure a valid solution, Nicolson et al. prescribe taking the root with |Γ| ≤ 1. With Γ found from the scattering parameters, it is trivial to find P using Equation (2.71). Recall also the definitions:

P = e^{-j\beta_{II}l}, \quad \Gamma = \frac{Z_{II} - Z_I}{Z_{II} + Z_I}, \quad k_x^2 + k_y^2 + \beta_{mn}^2 = k^2.

With these, a few short steps remain to solve for the material parameters. First equate the exponential form of P with the definition

|P|e^{j\phi_p}e^{-jn2\pi} = e^{-j\beta_{II}l}, \qquad (2.79)

and solve for β, leaving

\beta_{II} = \frac{j\left(\ln|P| + j\phi_p - jn2\pi\right)}{l}. \qquad (2.80)

Then solve for µ after applying the definitions of Z for the two regions, leaving

\mu_{II} = \frac{\beta_{II}\,\mu_I(1 + \Gamma)}{\beta_I(1 - \Gamma)}. \qquad (2.81)

Next solve for ε from

\beta_{II}^2 = \omega^2\mu_{II}\epsilon_{II} - \left(\frac{\pi}{a}\right)^2. \qquad (2.82)

This gives

\epsilon_{II} = \frac{\beta_{II}^2 + \left(\frac{\pi}{a}\right)^2}{\omega^2\mu_{II}}. \qquad (2.83)

Notice that in Equation (2.80) there are infinitely many solutions for the propagation constant β_II, as the value of n may be any integer. The complex surface wraps around on itself in a series of folded sheets. Typically, the proper sheet of the complex plane is dependent on the electrical length of the sample in the waveguide, which depends on the sample's material properties. The n value will shift every half wavelength. With a determination of the proper n value, it is possible to extract a fully analytical solution for ε and µ.
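The closed-form steps above translate directly into a short routine. The following is a minimal sketch of one frequency point of the extraction, not the implementation used in this work; it assumes NumPy, air-filled Regions I and III, reference planes at the sample faces, and a caller-supplied branch index n for Equation (2.80).

```python
import numpy as np

C0 = 299792458.0   # speed of light in vacuum (m/s)

def nrw_extract(f, S11, S21, a, length, n_branch=0):
    """Sketch of the NRW closed-form extraction at one frequency.

    f        : frequency in Hz
    S11, S21 : measured complex scattering parameters
    a        : waveguide width in meters
    length   : sample thickness l in meters
    n_branch : branch index n of Eq. (2.80), chosen from the sample's
               electrical length
    """
    w = 2.0 * np.pi * f
    k0 = w / C0                                       # free-space wavenumber
    beta1 = np.sqrt(k0**2 - (np.pi / a) ** 2 + 0j)    # TE10 beta, air regions

    # Eqs. (2.75)-(2.78): interface reflection coefficient, root with |Gamma| <= 1
    X = (1.0 - (S21 + S11) * (S21 - S11)) / ((S21 + S11) - (S21 - S11))
    Gammaa = X + np.sqrt(X**2 - 1.0 + 0j)
    if abs(Gamma) > 1.0:
        Gamma = X - np.sqrt(X**2 - 1.0 + 0j)

    # Eq. (2.71) for the propagation factor P, then Eq. (2.80) for beta
    P = (Gamma - S11) / (S21 * Gamma)
    beta2 = 1j * (np.log(abs(P)) + 1j * np.angle(P)
                  - 1j * 2.0 * np.pi * n_branch) / length

    # Eq. (2.81) for permeability and Eq. (2.83) for permittivity,
    # both relative to free space (mu_I = mu_0, eps_0 absorbed into k0)
    mu_r = (beta2 / beta1) * (1.0 + Gamma) / (1.0 - Gamma)
    eps_r = (beta2**2 + (np.pi / a) ** 2) / (k0**2 * mu_r)
    return eps_r, mu_r
```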
2.2.3 Measurement Error

Much of the error in the scattering parameters comes from violations of the assumptions of the NRW method. Non-planar surfaces with dents or protrusions cause higher order modes to be excited that are not expected. Flaws internal to the sample or other forms of non-homogeneity also cause error in the measurements. Error introduced to the measurements is aggregated and nearly impossible to separate out. This combined error results in error for the extracted parameter values, as substitution of the errant scattering parameter values into the equations is unlikely to result in correct solutions. For well maintained equipment and proper calibration procedures, however, this error is quite low for NRW measurements. To ensure that the procedure is valid and that it was properly performed, measurement of a standard material is carried out for which the expected results are well known. This establishes a consistent comparison among experiments to show that they fit within the bounds of the usual error tolerances.

2.3 Conductor-Backed Samples

Unfortunately, certain samples make the NRW method impractical if not impossible. In these cases, more creative approaches are required to get two independent measurements from which to extract the electromagnetic parameters. One critical example of this is a conductor-backed sample. These are typically thin coatings or layers of dielectrics which are impossible to remove from the metal sheet to which they are applied. Fully filling the cross section of the waveguide with the dielectric is only possible by setting the application surface normal to the longitudinal axis. In this case, however, the conductor backing will block the transmitted signal. With only one reflection measurement it is impossible to solve for both parameters. If the material is known to be non-magnetic, ε can still be found. It is impractical to consider filling a waveguide with the coating material, especially if it is very expensive. There are other techniques that could be used, and this dissertation will investigate several of these that utilize evolutionary algorithms. The amount of error introduced by the techniques will be critical since the standard established by the NRW method is quite low. Given the difficulty of obtaining two independent measurements, more error may be tolerable, but it is still preferable to drive the error as low as reasonably achievable.

CHAPTER 3

Material Characterization Using a Single Stepped Waveguide with a Conductor Reflection Plane

As mentioned in Chapter 2, the introduction of a conductor backing causes the NRW method to be unusable. In order to overcome this difficulty, methods which rely on two reflection-type measurements are often used. Examples include using two different polarizations or angles in free space. One such method for waveguides will be discussed in this dissertation. To develop this method, it will be important to understand the use of modal expansions for dealing with changes in waveguide dimension.

3.1 Introduction to the Mode-Matching Technique

Satisfying the boundary conditions on transverse components of electric and magnetic fields can require a summation of many modes. Using orthogonality, it is possible to take the boundary condition equations and separate them into a system of equations. Each equation in the system is the result of multiplying the entire boundary condition equation by a single modal form and then integrating across the entire surface. Numerical solution of this system of equations yields the constants that represent the relative strength of each mode to the incoming signal.
Using orthogonality, it is possible to take the boundary condition equations and separate them into a system of equations. Each equation in the system is the result of multiplying the entire boundary condition equation by a single modal form and then integrating across the entire surface. Numerical solution of this system 21 of equations yields the constants that represent the relative strength of each mode to the incoming signal. The scattering parameters are all defined from ratios of these constants with appropriate phase terms. Calculating the scattering parameters from a geometry and material with known parameters is referred to as a forward type problem. The process of finding unknown material parameters from measurements of the scattered signals is an example of an inverse type problem. To successfully extract two complex parameters, two different measurements are needed. In this chapter using two different waveguide height transitions will be considered. When the waveguide changes dimensions, higher order modes are coupled into and must be considered to properly solve for the fields in the vicinity of the sample region. The number of higher order modes required can be reduced by changing the cross section of the waveguide in one of its two dimensions. Leaving the width of the waveguide unchanged means that only higher order modes that depend on the y dimension will be needed. One difference from the NRW technique that can not be overcome by this method is the destruction of the sample. With NRW methods, both measurements are taken from the same sample in the same size and shape. However, in order to use this method, the sample must be cut to the size of the smaller waveguide transition after being measured in the larger. Both methods require that the samples are homogeneous, and it is especially critical for the stepped waveguide. The choice of which portion of the sample to use will be significant if the sample is not truly homogeneous. It is also possible that batch to batch variations in the sample creation process could result in parameter differences, and the experimenter should account for this statistically regardless of which method is used. 3.2 Modal Expansion For the Waveguide Transition The derivation of modal fields using transverse and longitudinal decomposition is shown Chapter 2; for this derivation, the potential function approach will be used instead. The 22 same modal forms are obtained from either derivation, but it useful to know both methods. The derivation shown follows the approach in [5]. The Maxwell equations for a homogeneous, isotropic, source-free region are the starting point. In this case two vector potential functions are defined, A¯ and F¯ , by their relationship to the electric and magnetic fields as 1 ∇ × ∇ × A¯ E¯ = −∇ × F¯ + j ωǫ (3.1) ¯ = ∇ × A¯ + 1 ∇ × ∇ × F¯ . H j ωµ (3.2) TM to z modes are obtained by setting F¯ = 0 and A¯ = zˆψ which gives 1 E¯ = −j ωµˆ zψ + ∇ (∇ · zˆψ) j ωǫ (3.3) ¯ = ∇ × zˆψ. H (3.4) Alternatively, TE to z modes may be obtained by setting A¯ = 0 and F¯ = zˆψ, which gives ¯ = − (j ωǫ) zˆψ + 1 ∇ (∇ · zˆψ) H j ωµ (3.5) E¯ = −∇ × zˆψ. (3.6) Next break down the equations by component. Doing so for the TM modes gives 1 ∂ 2ψ Ex = j ωǫ ∂x∂z (3.7) 1 ∂ 2ψ j ωǫ ∂y∂z (3.8) Ey = Ez = 1 j ωǫ ∂ 2ψ + k2 2 ∂z 23 (3.9) Hx = ∂ψ ∂y Hy = − (3.10) ∂ψ ∂x (3.11) Hz = 0. (3.12) Separating for the TE modes gives Ex = − ∂ψ ∂y (3.13) Ey = − ∂ψ ∂x (3.14) Ez = 0 (3.15) 1 ∂ 2ψ j ωµ ∂x∂z (3.16) 1 ∂ 2ψ Hy = j ωµ ∂y∂z (3.17) Hx = Hz = 1 j ωµ ∂2 + k2 2 ∂z ψ. 
To solve the equations, the form of the solution is found by using separation of variables on Equation (3.9) or Equation (3.18). The form for the separation of variables is

\psi_{m,n} = a_{m,n}\,e^{\pm j\beta^{e,a}_{m,n}z}f_1(x)f_2(y). \qquad (3.19)

As can be seen in Figure 3.1, there are two regions in the analysis.

Figure 3.1 Rectangular waveguide with single step and sample.

A superscript e denotes the excitation region (the air-filled section of waveguide connected to the network analyzer), while a superscript a denotes the analysis region (which contains the sample under test and has a length of d). Subscripts m and n denote the modal dependence of the x and y functions. The opening between waveguides is Ω = y ∈ [0, h] and the conductor wall is ∆Ω = y ∈ [h, b]. Note b, h are the heights of the waveguide in their respective regions.

Applying transverse component boundary conditions for the x and y components of Ē and H̄, the potential equation for TE modes is

\psi_{m,n} = a_{m,n}\,e^{\pm j\beta^{e,a}_{m,n}z}\cos\left(\frac{m\pi}{a}x\right)\cos\left(k^{e,a}_{yn}y\right). \qquad (3.20)

The total wave number is

\left(k^{e,a}\right)^2 = \omega^2\mu^{e,a}\epsilon^{e,a}. \qquad (3.21)

The constants from the separation of variables for y and z dependence depend on the region of the waveguide (due to the different heights and material properties), but since the two regions have the same width (and the propagating mode for the full sized waveguide is TE10) only m = 1, that is TE1n and TM1n, modes are needed to satisfy the boundary conditions. The modal constant for y dependence is k^{e,a}_{yn}, and the propagation constant for z dependence is β^{e,a}_n. These constants are all related by

\left(\frac{\pi}{a}\right)^2 + \left(k^{e,a}_{yn}\right)^2 + \left(\beta^{e,a}_n\right)^2 = \left(k^{e,a}\right)^2. \qquad (3.22)

To simplify the equations the definition

Z^{e,a}_{n,E} = \frac{\omega\mu^{e,a}}{\beta^{e,a}_n} \qquad (3.23)

is made for modes TE to z. Isolating the transverse dependence of the transverse components of Ē and H̄ gives

\bar{e}^{e,a}_n(\bar{\rho}) = \hat{x}\cos\left(\frac{\pi}{a}x\right)k^{e,a}_{yn}\sin\left(k^{e,a}_{yn}y\right) - \hat{y}\frac{\pi}{a}\sin\left(\frac{\pi}{a}x\right)\cos\left(k^{e,a}_{yn}y\right) \qquad (3.24)
\bar{h}^{e,a}_n(\bar{\rho}) = \hat{z}\times\frac{\bar{e}^{e,a}_n(\bar{\rho})}{Z^{e,a}_{n,E}}. \qquad (3.25)

Note that ρ̄ represents a point on the plane transverse to ẑ. Following the same approach for the TM modes, application of transverse boundary conditions gives

\psi_{1,n} = a_{1,n}\,e^{\pm j\beta^{e,a}_{1,n}z}\sin\left(\frac{\pi}{a}x\right)\sin\left(k^{e,a}_{yn}y\right). \qquad (3.26)

Again the definition

Z^{e,a}_{n,M} = \frac{\beta^{e,a}_n}{\omega\epsilon^{e,a}} \qquad (3.27)

is made. This leads to the transverse dependence of the transverse components

\bar{e}^{e,a}_n(\bar{\rho}) = -\hat{x}\frac{\pi}{a}\cos\left(\frac{\pi}{a}x\right)\sin\left(k^{e,a}_{yn}y\right) - \hat{y}k^{e,a}_{yn}\sin\left(\frac{\pi}{a}x\right)\cos\left(k^{e,a}_{yn}y\right) \qquad (3.28)
\bar{h}^{e,a}_n(\bar{\rho}) = \hat{z}\times\frac{\bar{e}^{e,a}_n(\bar{\rho})}{Z^{e,a}_{n,M}}. \qquad (3.29)

For a given mode (1, n) traveling in a single direction, the complete transverse electric field is

\bar{E}_\rho = a^\pm_n\,\bar{e}_n(\bar{\rho})\,e^{\mp j\beta^{e,a}_n z}. \qquad (3.30)

To calculate the entire electric field all of the modes, TE and TM, must be summed together for both forward and reverse traveling waves. While an infinite number of modes could be used, it is possible to truncate the summation to a finite number of N modes and still achieve an acceptable level of accuracy. For the region z ≤ 0 the total electric field is

\bar{E}_\rho(\bar{r}) = a^+_1\bar{e}^e_1(\bar{\rho})e^{-j\beta^e_1 z} + \sum_{n=1}^{N} a^-_n\bar{e}^e_n(\bar{\rho})e^{+j\beta^e_n z}. \qquad (3.31)

The transverse magnetic field is

\bar{H}_\rho(\bar{r}) = a^+_1\bar{h}^e_1(\bar{\rho})e^{-j\beta^e_1 z} - \sum_{n=1}^{N} a^-_n\bar{h}^e_n(\bar{\rho})e^{+j\beta^e_n z}. \qquad (3.32)

For the region where z ≥ 0 the transverse electric field is

\bar{E}_\rho(\bar{r}) = \sum_{n=1}^{N}\left(b^+_n e^{-j\beta^a_n z} + b^-_n e^{+j\beta^a_n z}\right)\bar{e}^a_n(\bar{\rho}). \qquad (3.33)

The transverse magnetic field is

\bar{H}_\rho(\bar{r}) = \sum_{n=1}^{N}\left(b^+_n e^{-j\beta^a_n z} - b^-_n e^{+j\beta^a_n z}\right)\bar{h}^a_n(\bar{\rho}). \qquad (3.34)
Because the second waveguide is terminated in a conductor backing, b^-_n = -b^+_n e^{-j\beta^a_n 2d}, since the transverse components of the electric field must be zero at the conductor interface. Making this substitution results in the revised summary for z ≥ 0:

\bar{E}_\rho(\bar{r}) = \sum_{n=1}^{N} b^+_n\left(e^{-j\beta^a_n z} - e^{+j\beta^a_n(z-2d)}\right)\bar{e}^a_n(\bar{\rho}) \qquad (3.35)
\bar{H}_\rho(\bar{r}) = \sum_{n=1}^{N} b^+_n\left(e^{-j\beta^a_n z} + e^{+j\beta^a_n(z-2d)}\right)\bar{h}^a_n(\bar{\rho}). \qquad (3.36)

Given that S_{11} = a^-_1/a^+_1, setting a^+_1 = 1 requires that the constant S_{11} = a^-_1. The other constants represent the strength of each mode relative to the incoming TE10 field. To construct a system of equations that will be used to solve for these relative amplitudes, the final boundary conditions are applied. The condition on tangential Ē at z = 0 gives

a^+_1\bar{e}^e_1(\bar{\rho}) + \sum_{n=1}^{N} a^-_n\bar{e}^e_n(\bar{\rho}) = \begin{cases} \sum_{n=1}^{N} b^+_n\left(1 - e^{-j\beta^a_n 2d}\right)\bar{e}^a_n(\bar{\rho}) & \bar{\rho}\in\Omega \\ 0 & \bar{\rho}\in\Delta\Omega \end{cases}. \qquad (3.37)

The condition on tangential H̄ is applied only over the opening since that is the surface over which field expressions for both regions are defined:

a^+_1\bar{h}^e_1(\bar{\rho}) - \sum_{n=1}^{N} a^-_n\bar{h}^e_n(\bar{\rho}) = \sum_{n=1}^{N} b^+_n\left(1 + e^{-j\beta^a_n 2d}\right)\bar{h}^a_n(\bar{\rho}), \quad \bar{\rho}\in\Omega. \qquad (3.38)

The Ē condition equation is repeatedly scalar multiplied by a single modal expression and integrated across the entire boundary surface Ω + ∆Ω. In addition to orthogonality properties of trigonometric functions, the properties

\int_0^\alpha \cos^2\left(\frac{l\pi}{\alpha}x\right)dx = \begin{cases} \alpha & l = 0 \\ \frac{\alpha}{2} & l = 1, 2, \ldots \end{cases} \qquad (3.39)
\int_0^\beta \sin^2\left(\frac{l\pi}{\beta}x\right)dx = \frac{\beta}{2}, \quad l = 1, 2, \ldots \qquad (3.40)

are also useful. The multiplied condition equations are

\int_{\Omega+\Delta\Omega}\left(a^+_1\bar{e}^e_1(\bar{\rho}) + \sum_{n=1}^{N} a^-_n\bar{e}^e_n(\bar{\rho})\right)\cdot\bar{e}^e_p(\bar{\rho})\,dS = \begin{cases} \sum_{n=1}^{N} b^+_n\left(1 - e^{-j\beta^a_n 2d}\right)\int_{\Omega+\Delta\Omega}\bar{e}^a_n(\bar{\rho})\cdot\bar{e}^e_p(\bar{\rho})\,dS & \bar{\rho}\in\Omega \\ 0 & \bar{\rho}\in\Delta\Omega \end{cases}, \quad p = 1, 2, \ldots, N. \qquad (3.41)

To simplify the expression, the following definitions are made:

C_{pn} = \int_{\Omega+\Delta\Omega}\bar{e}^e_p(\bar{\rho})\cdot\bar{e}^e_n(\bar{\rho})\,dS \qquad (3.42)
D_{pn} = \int_{\Omega}\bar{e}^e_p(\bar{\rho})\cdot\bar{e}^a_n(\bar{\rho})\,dS. \qquad (3.43)

The multiplied condition equations become

a^+_1 C_{p1} + \sum_{n=1}^{N} a^-_n C_{pn} = \sum_{n=1}^{N} b^+_n\left(1 - e^{-j\beta^a_n 2d}\right)D_{pn}, \quad p = 1, 2, \ldots, N. \qquad (3.44)

Following a similar process for the H̄ condition equations, using a region modes, gives

\int_{\Omega}\left(a^+_1\bar{h}^e_1(\bar{\rho}) - \sum_{n=1}^{N} a^-_n\bar{h}^e_n(\bar{\rho})\right)\cdot\bar{h}^a_p(\bar{\rho})\,dS = \sum_{n=1}^{N} b^+_n\left(1 + e^{-j\beta^a_n 2d}\right)\int_{\Omega}\bar{h}^a_n(\bar{\rho})\cdot\bar{h}^a_p(\bar{\rho})\,dS, \quad p = 1, 2, \ldots, N. \qquad (3.45)

The following definitions allow further simplification:

E_{pn} = \int_{\Omega}\bar{h}^a_p(\bar{\rho})\cdot\bar{h}^e_n(\bar{\rho})\,dS \qquad (3.46)
F_{pn} = \int_{\Omega}\bar{h}^a_p(\bar{\rho})\cdot\bar{h}^a_n(\bar{\rho})\,dS. \qquad (3.47)

The multiplied condition equations become

a^+_1 E_{p1} - \sum_{n=1}^{N} a^-_n E_{pn} = \sum_{n=1}^{N} b^+_n\left(1 + e^{-j\beta^a_n 2d}\right)F_{pn}, \quad p = 1, 2, \ldots, N. \qquad (3.48)

The system arranged in matrix form is

\begin{bmatrix} -C_{pn} & \left(1 - e^{-j\beta^a_n 2d}\right)D_{pn} \\ E_{pn} & \left(1 + e^{-j\beta^a_n 2d}\right)F_{pn} \end{bmatrix}\begin{bmatrix} a^-_n \\ b^+_n \end{bmatrix} = \begin{bmatrix} C_{p1} \\ E_{p1} \end{bmatrix}a^+_1. \qquad (3.49)

Since there are (N × N) members of each sub-matrix, the coefficient matrix is actually (2N × 2N) in dimension. Thus the cost of solving this system increases greatly with additional modes. In order to fill the entries of the equation, it is necessary to arrange the mode order to follow a set, predictable pattern. To facilitate this, the following system has been introduced:

p, n even => TM mode
p, n odd => TE mode.

Thus p, n = 1 corresponds to the TE10 mode, p, n = 2 corresponds to the TM11 mode, p, n = 3 corresponds to the TE11 mode, p, n = 4 corresponds to the TM12 mode, p, n = 5 corresponds to the TE12 mode, and so on. Recall that there can be no TM10 mode or any other zero subscript TM modes.
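A small helper makes this ordering concrete. The sketch below is an added illustration only, not code from this work; it maps a matrix index to its mode family, y index, and k_y value, with the k_y expressions collected in Table 3.1 below (height = b in the excitation region, h in the analysis region).

```python
import math

def index_to_mode(i, height):
    """Map a row/column index of the system in Eq. (3.49) to its mode.

    Odd i -> TE_{1,(i-1)/2}, even i -> TM_{1,i/2}; so i = 1 is TE10,
    i = 2 is TM11, i = 3 is TE11, i = 4 is TM12, i = 5 is TE12, and so on.
    Returns the mode family, the y index n, and k_y = n*pi/height.
    """
    if i % 2 == 1:          # odd matrix index: TE mode
        family, n = "TE", (i - 1) // 2
    else:                   # even matrix index: TM mode
        family, n = "TM", i // 2
    return family, n, n * math.pi / height

# Example: the first five entries of the ordering
modes = [index_to_mode(i, 0.01016)[:2] for i in range(1, 6)]
# -> [('TE', 0), ('TM', 1), ('TE', 1), ('TM', 2), ('TE', 2)]
```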
For each and every integral, it will be necessary to consider every possible combination of mode functions as well as the result of the integrals given the bounds of integration. Additionally, there can be special cases of constants that form a rational number when divided, which lead to special integral results as well. With the integrals evaluated, the matrix can be populated and Gaussian elimination can be used to solve for the unknown constants a^-_n and b^+_n. The entries of Table 3.1 summarize the function forms for both TE and TM modes which were derived earlier.

Table 3.1 Summary of modal forms.
TE modes (odd index p, n):
\bar{e}^{e,a}_n(\bar{\rho}) = \hat{x}\cos\left(\frac{\pi}{a}x\right)k^{e,a}_{yn}\sin\left(k^{e,a}_{yn}y\right) - \hat{y}\frac{\pi}{a}\sin\left(\frac{\pi}{a}x\right)\cos\left(k^{e,a}_{yn}y\right), \qquad k^{e,a}_{y\,p,n} = \frac{(p,n)-1}{2}\,\frac{\pi}{b,h}
TM modes (even index p, n):
\bar{e}^{e,a}_n(\bar{\rho}) = -\hat{x}\frac{\pi}{a}\cos\left(\frac{\pi}{a}x\right)\sin\left(k^{e,a}_{yn}y\right) - \hat{y}k^{e,a}_{yn}\sin\left(\frac{\pi}{a}x\right)\cos\left(k^{e,a}_{yn}y\right), \qquad k^{e,a}_{y\,p,n} = \frac{(p,n)}{2}\,\frac{\pi}{b,h}

3.2.1 Integrals Evaluated by Type

Evaluation of the various integrals in the derivation requires precise formulas along with the identification of any special cases that must be evaluated differently. Each of the four previously defined integral types will be briefly covered here in turn. The following trigonometric identities are useful for evaluating these integrals:

\sin(A)\sin(B) = \frac{1}{2}\cos(A-B) - \frac{1}{2}\cos(A+B) \qquad (3.50)
\cos(A)\cos(B) = \frac{1}{2}\cos(A-B) + \frac{1}{2}\cos(A+B) \qquad (3.51)
\sin(A)\cos(B) = \frac{1}{2}\sin(A-B) + \frac{1}{2}\sin(A+B) \qquad (3.52)

C Type Integrals

The C type integrals use two e region forms and are

C_{pn} = \int_0^a\int_0^b \bar{e}^e_p(\bar{\rho})\cdot\bar{e}^e_n(\bar{\rho})\,dy\,dx. \qquad (3.53)

As an example of a given case of the integral, consider two TE modes being multiplied, TE^e_p · TE^e_n. The result is

C_{pn} = \int_0^a\int_0^b\left[k^e_{yp}k^e_{yn}\cos^2\left(\frac{\pi}{a}x\right)\sin\left(k^e_{yp}y\right)\sin\left(k^e_{yn}y\right) + \left(\frac{\pi}{a}\right)^2\sin^2\left(\frac{\pi}{a}x\right)\cos\left(k^e_{yp}y\right)\cos\left(k^e_{yn}y\right)\right]dy\,dx. \qquad (3.54)

For each integral and combination of modes, several distinct cases were identified (only some of which occur for a given integral because of the restriction on TM modes). Table 3.2 summarizes these cases, while Table 3.3 summarizes their results.

Table 3.2 Cases for Cpn.
- p = 1 and n = 1: both TE10 modes.
- p = 1 and n ≠ 1: p corresponds to the TE10 mode; n may be TE or TM.
- p ≠ 1 and n = 1: p may be a TE or TM mode; n corresponds to the TE10 mode.
- k_{yp} = k_{yn} ≠ 0: happens when b/h is rational.
- 0 ≠ k_{yp} ≠ k_{yn} ≠ 0: the general case.

Table 3.3 Results of cases for Cpn.
p and n both odd (both TE modes):
- p = 1 and n = 1: C_{pn} = \left(\frac{\pi}{a}\right)^2\frac{ab}{2}
- k^e_{yp} = k^e_{yn} ≠ 0: C_{pn} = \left[\left(k^e_{yp}\right)^2 + \left(\frac{\pi}{a}\right)^2\right]\frac{ab}{4}
- Otherwise: C_{pn} = 0
p and n both even (both TM modes):
- k^e_{yp} = k^e_{yn} ≠ 0: C_{pn} = \left[\left(k^e_{yp}\right)^2 + \left(\frac{\pi}{a}\right)^2\right]\frac{ab}{4}
- Otherwise: C_{pn} = 0
p and n different (one TE mode and one TM mode):
- Always: C_{pn} = 0
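A brute-force numerical integration provides a convenient spot check on closed forms such as these. The sketch below is an added illustration (assuming NumPy; it is not part of the original code): it evaluates Equation (3.53) on a grid using the modal forms of Table 3.1 and compares the p = n = 1 case against the (π/a)²·ab/2 entry of Table 3.3.

```python
import numpy as np

def mode_e(family, ky, a, x, y):
    """Transverse e-bar of Table 3.1 (TE or TM to z) evaluated on a grid."""
    if family == "TE":
        ex = np.cos(np.pi * x / a) * ky * np.sin(ky * y)
        ey = -(np.pi / a) * np.sin(np.pi * x / a) * np.cos(ky * y)
    else:  # TM
        ex = -(np.pi / a) * np.cos(np.pi * x / a) * np.sin(ky * y)
        ey = -ky * np.sin(np.pi * x / a) * np.cos(ky * y)
    return ex, ey

def c_integral_numeric(p_mode, n_mode, a, b, num=2001):
    """Brute-force version of Eq. (3.53): integrate e_p^e . e_n^e over [0,a]x[0,b]."""
    x = np.linspace(0.0, a, num)
    y = np.linspace(0.0, b, num)
    X, Y = np.meshgrid(x, y, indexing="ij")
    exp_, eyp = mode_e(p_mode[0], p_mode[1], a, X, Y)
    exn, eyn = mode_e(n_mode[0], n_mode[1], a, X, Y)
    integrand = exp_ * exn + eyp * eyn
    return np.trapz(np.trapz(integrand, y, axis=1), x)

# Example: p = n = 1 (two TE10 modes, k_y = 0) should match (pi/a)^2 * a*b/2
a, b = 0.02286, 0.01016          # WR-90 cross section in meters
closed_form = (np.pi / a) ** 2 * a * b / 2.0
numeric = c_integral_numeric(("TE", 0.0), ("TE", 0.0), a, b)
```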
p even and n odd TM mode and TE mode p = 1 and n = 1 e Dpn = π 2 sin kyp h kyep = kyan = 0 Dpn = 0 sin Dpn = 0 = kyep = kyan = 0 kyep − kyan h kyep − kyan π π − kyan + kyep + 4 4 sin π a π kyn + kyep 4 4 Otherwise Dpn = 0 38 kyep + kyan h kyep + kyan Table 3.7 Continued results of cases for Dpn . p odd and n even TE mode and TM mode p = 1 and n = 1 Dpn = 0 kyep = kyan = 0 Dpn = 0 sin Dpn = 0 = kyep = kyan = 0 kyep − kyan h kyep − kyan π a π kyn − kyep + 4 4 sin π a π kyn + kyep 4 4 kyep + kyan h kyep + kyan Otherwise Dpn = 0 39 F Type Integrals F type integrals are a h Fpn = 0 0 ¯ a (¯ ¯ a ρ) dydx. h p ρ) · hn (¯ (3.56) The following relationships are essential to evaluating these integrals: TE modes e,a e¯n (¯ ρ) ¯he,a (¯ n ρ) = zˆ × e,a Zn,E e,a ωµe,a Zn,E = e,a βn TM modes e,a ρ) e¯n (¯ ¯he,a (¯ n ρ) = zˆ × e,a Zn,M e,a βn e,a Zn,M = . ωµe,a Also useful is the vector identity ¯ · C¯ × D ¯ = B ¯ ·D ¯ A¯ × B ¯ · C¯ A¯ · C¯ − B ¯ . A¯ · D (3.57) Using this vector identity allows the F type integrals to be rewritten as 1 Fpn = a a Zp Zn a h 0 0 e¯a ρ) · e¯a ρ) dydx. p (¯ n (¯ (3.58) Noting that the integral with respect to y spans (0, h) instead of (0, b) and that the a region modal forms are used, the result is nearly identical to the result for C type integrals. The other difference is the division by the modal impedance values, which depend on the type of mode pair p, n is being integrated. The results are summarized in Table 3.8. 40 Table 3.8 Results of cases for Fpn . p and n both odd Both TE modes p=n=1 kyap = kyan = 0 Fpn = Fpn = 1 π 2 ah a a 2 Zp Zn a 1 ah a 4 Zpa Zn kyap 2 Otherwise Fpn = 0 p and n both even Both TM modes kyap = kyan = 0 Fpn = 1 ah a 4 Zpa Zn kyap 2 2 + π a 2 + π a Otherwise Fpn = 0 p and n don’t match TE mode and TM mode Always Fpn = 0 41 E Type Integrals E type integrals are 1 Epn = a e Zp Zn a h 0 0 e¯a ρ) · e¯en (¯ ρ) dydx. p (¯ (3.59) These are D type integrals with the modal indices switched and divided by appropriate modal impedance terms. Results are summarized in Tables 3.9, 3.10, 3.11, and 3.12. With the coefficient matrix populated, all that remains is to solve the system of equations via Gaussian elimination or some other technique. 42 Table 3.9 Results of cases for Epn . p and n both odd Both TE modes p = 1 and n = 1 Epn = n = 1 and p = 1 n = 1 and p = 1 kyen = kyap = 0 Epn = 0 Epn = Epn = 1 sin Epn = a a Zp Zn 0 = kyen = kyap = 0 1 π2h a a 2a Zp Zn 1 π 2 1 sin k e h a 2a k e a yn Zp Zn yn 1 ah a 4 Zpa Zn kyap kyen − kyap h kyen − kyap 2 a e a a π 2 kyn kyp + + 4 4 a a π 2 a − kyen kyap + 4 4 a Otherwise Epn = 0 43 2 + π a sin kyen + kyap h kyen + kyap Table 3.10 Continued results of cases for Epn . p and n both even kyen = kyap = 0 Both TM modes Epn = 1 sin Epn = a a Zp Zn 0 = kyen = kyap = 0 1 ah a 4 Zpa Zn kyap kyen − kyap h kyen − kyap 2 a π 2 a e a kyn kyp − + 4 4 a a π 2 a + kyen kyap + 4 4 a Otherwise Epn = 0 44 2 + π a sin kyen + kyap h kyen + kyap Table 3.11 Continued results of cases for Epn . p even and n odd TM mode and TE mode 1 sin Epn = a a Zp Zn 0 = kyen = kyap = 0 kyen − kyap h kyen − kyap π π − kyap + kyen + 4 4 sin π a π kyp + kyen 4 4 Otherwise Epn = 0 45 kyen + kyap h kyen + kyap Table 3.12 Continued results of cases for Epn . 
p odd and n even TE mode and TM mode 1 sin Epn = a a Zp Zn 0 = kyen = kyap = 0 Otherwise kyen − kyap h kyen − kyap π a π kyp − kyen + 4 4 sin π a π kyp + kyen 4 4 kyen + kyap h kyen + kyap Epn = 0 46 3.2.2 Accelerating the Solution In order to speed up the calculation time when sweeping across many frequencies, it is possible to isolate the frequency dependent terms into a subsection of the coefficient matrix and fill the other entries a single time. This method does require an extra step at the end to recover S11 since the term it comes from is also modified by the same factors used to isolate sub-matrices. The modified matrix is equation is  eC  −Zn pn       e a  Zn Zp Epn a Zn a Zpa Zn  a   1 − e−j βn d −  a D n pn  a  e  1 + e−j βn d Z   n    a ad   1 + e−j βn d  −j β n  1+e  + b  n F a pn a Zn 1 + e−j βn d    Cp1  + =  a1 . a Zp Ep1 (3.60) Simplifying this gives   −Z e Cpn n     e Z aE Zn p pn 3.2.3   a   a− 1 − e−j βn d n a    e D Zn pn   Zn   Cp1  + a  = 1 + e−j βn d  a1 . a   1 + e−j βn d  aE   Z p p1 aF b+ n a Zpa Zn pn Zn (3.61) Validation of Method for Solving the Forward Type Problem After coding was completed for a program to calculate the scattering parameter S11 from a given geometry and material properties, a process of validation was undertaken. This was necessary to ensure that there were no bugs in the code and that the results of using the code were accurate enough to use for the inverse type problem. Additionally, it was necessary to determine the relationship between the number of modes used to estimate the fields and the accuracy of the result. 47 The first step in the validation was to compare against results from a well known finite element analysis code. Ansoft HFSS was chosen for the comparison. The full height waveguide used was the standard WR-90, which is also known as X-Band waveguide. This is operated in a band 8.2-12.4 GHz. The relevant geometric dimensions are summarized in Table 3.13. Note the material properties of Eccosorb FGM-125, were obtained from [6]. 48 Table 3.13 Geometry and parameters for validation with HFSS. a 0.9 inch b 0.4 inch d 0.125 inch h 0.24 inch Frequency Range 8.2 GHz - 12.4 GHz e Region Material Air ǫr µr 1.0 − j 0.0 1.0 − j 0.0 a Region Material 1 Air a Region Material 2 FGM-125 ǫr µr 7.319669 − j 0.046408 0.575582 − j 0.484231 a Region Material 3 Teflon ǫr µr 2.1 − j 0.0003 1.0 − j 0.0 49 The results of this comparison showed that the code functions correctly. Comparisons were made at various frequencies and with varying number of modes. See Table 3.14 for a summary of some of the results. Table 3.14 Validation with HFSS, results. Frequency Material N Modes Code S11 (Mag., Phase) HFSS S11 (Mag., Phase) 8.2 GHz Air 20 1.0, 156.123 deg. 1.0, 156.22 deg. 8.2 GHz Air 500 1.0, 156.117 deg. 1.0, 156.22 deg. 9.52 GHz Air 500 1.0, 144.368 deg. 1.0, 144.49 deg. 10.4 GHz Air 500 1.0, 136.121 deg. 1.0, 136.39 deg. 8.2 GHz FGM-125 20 0.6870, 173.851 deg. 0.6873, 173.814 deg. 8.2 GHz FGM-125 500 0.6869, 173.845 deg. 0.6873, 173.814 deg. 9.52 GHz FGM-125 500 0.5833, − 178.1296 deg. 0.5813, − 178.125 deg. 10.4 GHz FGM-125 500 0.5793, − 170.445 deg. 0.5752, − 170.121 deg. 50 The next step of validation was to compare results for the code as the number of modes was changed. Example results for two different step heights filled with Teflon are shown in Table 3.15. 
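Before the tabulated results, the following sketch shows how a convergence sweep of this kind can be automated. Here compute_s11 stands in for the forward mode-matching solver described above (an assumed interface, not the actual code used), and the stopping tolerance mirrors the 10^-4 criterion discussed below.

def converged_mode_count(compute_s11, n_start=10, n_max=500, step=10, tol=1e-4):
    """Increase the number of modes until |S11(n) - S11(n - step)| falls below tol.

    compute_s11(n) is a stand-in for the forward solver; it should return the
    complex S11 at a fixed frequency, geometry, and material when n modes are
    retained in the expansion.
    """
    s_prev = compute_s11(n_start)
    for n in range(n_start + step, n_max + 1, step):
        s_curr = compute_s11(n)
        if abs(s_curr - s_prev) < tol:    # change now below analyzer-level accuracy
            return n, s_curr
        s_prev = s_curr
    return n_max, s_prev                  # did not converge within n_max modes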
Table 3.15 Code results versus number of modes, Teflon.

Result                     20 Modes              200 Modes              500 Modes
Real (S11), 0.24 inch      -0.553039030125828    -0.5528588478518682    -0.552861156078394
Imag. (S11), 0.24 inch      0.833062749860242     0.833182706503343      0.833180937026858
Real (S11), 0.4 inch       -0.274771076055355    -0.274771076055356     -0.274771076055355
Imag. (S11), 0.4 inch       0.961434164866085     0.961434164866087      0.961434164866090

The variations observed in the real and imaginary parts of the scattering parameters as the number of modes was increased were very slight and fell well below the level of accuracy of a typical network analyzer. Viewing the parameter changes graphically, as in Figure 3.2 and Figure 3.3, makes it clearer where they drop below the tolerable threshold. Since the network analyzer typically introduces magnitude error on the order of 10^-3, a threshold of 10^-4 was chosen. Reviewing the results, it was determined that 30 to 50 modes were sufficient to provide the accuracy needed for solving the forward type problem.

Figure 3.2 Difference in magnitude of S11 between n and n-1 modes, Teflon.

Figure 3.3 Difference in magnitude of S11 between n and n-1 modes, FGM-125.

The final step in the validation was to compare the results for a full-height step. When the height of the step is set equal to the full height of the normal waveguide, a direct comparison to theoretical results is possible. This theoretical result does not require mode matching to calculate, and its key relationship is

S11 = [ j Z1a tan(β1e d) − Z1e ] / [ j Z1a tan(β1e d) + Z1e ].    (3.62)

The summary of these results is found in Table 3.16.

Table 3.16 Full height comparison to theory, 10.4 GHz, FGM-125.

              Code, 500 Modes         Theory
Real (S11)    -0.354526434450727      -0.354526434455986
Imag. (S11)   -0.07985012206511159    -0.079850122091870

3.3 The Inverse Type Problem

Having established the ability to accurately and quickly calculate the scattering parameter S11 from known parameters and step heights, an approach to extracting the parameters of unknown samples using two measurements is needed. To accomplish this, two different step heights are selected and S11 is measured at each, giving S11(h1)m and S11(h2)m. Guessed parameters, ǫg and µg, are then used as the initial guess in a secant method that solves the system of equations

(S11(h1))m − S11(h1, ǫ, µ) = 0    (3.63)
(S11(h2))m − S11(h2, ǫ, µ) = 0.   (3.64)

The solution of the system is a pair of values ǫ and µ that causes both equations to be satisfied simultaneously. Thus, the true parameters may be found by varying the parameters until the system is solved. Synthetic data for testing the algorithm is generated by using the true parameters to compute theoretical S11 values and then adding noise. The system thus becomes

(S11(h1))m + δ̄1 − S11(h1, ǫ, µ) = 0    (3.65)
(S11(h2))m + δ̄2 − S11(h2, ǫ, µ) = 0,   (3.66)

where the δ̄ values represent the synthetic error of the measurement system in vector form (|δ| e^{jθδ}). If there were no magnitude error (|δ| = 0), these terms would vanish; even if the phase error (θδ) is zero, the terms can still differ from theory by the magnitude error. It was determined that the most effective approach to solving the inverse problem was to make one of the step heights equal to the full height of the waveguide. This allows the use of Equation (3.62) to calculate S11 without resorting to mode matching, which can be time consuming.
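A minimal sketch of this two-measurement extraction is given below, assuming a forward solver s11_model(h, ǫ, µ) for the stepped geometry (for the full-height step this reduces to Equation (3.62)). SciPy's root finder is used here in place of the secant solver purely for brevity, and the noise helper implements the statistically sampled reading of Equations (3.65)-(3.66) rather than the worst-case full-error perturbation.

import numpy as np
from scipy.optimize import root

def extract_params(s11_meas, heights, s11_model, eps_guess, mu_guess):
    """Solve the two-equation system (3.63)-(3.64) for complex eps and mu.

    s11_meas  : measured (or synthetic) S11 at the two step heights
    heights   : the two step heights (h1, h2)
    s11_model : assumed forward-solver interface returning complex S11(h, eps, mu)
    """
    def residual(x):
        eps = x[0] + 1j * x[1]
        mu = x[2] + 1j * x[3]
        r = [s11_meas[i] - s11_model(heights[i], eps, mu) for i in range(2)]
        return [r[0].real, r[0].imag, r[1].real, r[1].imag]

    x0 = [eps_guess.real, eps_guess.imag, mu_guess.real, mu_guess.imag]
    sol = root(residual, x0, method="hybr")
    return sol.x[0] + 1j * sol.x[1], sol.x[2] + 1j * sol.x[3], sol.success

def add_noise(s11, mag_err=0.004, phase_err_deg=0.8, rng=None):
    """Perturb an ideal S11 with analyzer-level errors, sampled rather than worst-case."""
    rng = np.random.default_rng() if rng is None else rng
    mag = abs(s11) + mag_err * rng.uniform(-1, 1)
    phase = np.angle(s11) + np.deg2rad(phase_err_deg) * rng.uniform(-1, 1)
    return mag * np.exp(1j * phase)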
The critical factor in successfully recovering the parameters is the level of independence of the two measurements. If the measurements are not sufficiently different, whether from the change in step height or from the material properties, finding a solution is impossible. Even if a solution does exist, it can still be impossible to find if the measurement error is too large. The values of ǫ and µ required to generate S11 values that satisfy the system of equations could be non-physical if the error is too great; there may even be no solution at all. It is possible to simulate measurements without actually building a test structure, and this is often done before investing the resources needed to perform actual measurements. This is done by simulating measurement errors and adding them to the perfect values multiple times. Each system solved with the added error is like performing an independent experiment.

3.3.1 Investigation of the Solution Technique for the Inverse Type Problem

To evaluate the effectiveness of the solution technique, a series of numerical experiments was conducted. Initially, noiseless data was used to ensure that the routine was working correctly. Table 3.17 contains the expected S11 values when no noise has been introduced. With no added noise, the secant method was able to find the correct parameters successfully.

Table 3.17 Noiseless data, two step sizes, 10.4 GHz, 500 modes.

Material, h              Real(S11)      Imag.(S11)      |S11|      Phase S11
Air, 0.24 inch           -0.72079997     0.69314313     1.0        136.1206 deg.
Air, 0.4 inch            -0.47632417     0.87926975     1.0        118.4456 deg.
Teflon, 0.24 inch        -0.55286116     0.83318094     0.99923    123.5664 deg.
Teflon, 0.4 inch         -0.27477108     0.96143416     0.99927    105.9496 deg.
FGM-125, 0.24 inch       -0.57123829    -0.09615679     0.57927    -170.4450 deg.
FGM-125, 0.4 inch        -0.35452643    -0.079850122    0.36341    -167.3071 deg.

Several trials were conducted, varying the height of the second step at different frequencies to see whether this would have any effect on the extraction process. As an example, the results of such simulations with Teflon can be seen in Figures 3.4 and 3.5, and results with FGM-125 in Figures 3.6 and 3.7.

Figure 3.4 Result of extracting parameters while varying the second step height used: Teflon, 9 GHz, no noise.

Figure 3.5 Result of extracting parameters while varying the second step height used: Teflon, 10 GHz, no noise.

Figure 3.6 Result of extracting parameters while varying the second step height used: FGM-125, 9 GHz, no noise.

Figure 3.7 Result of extracting parameters while varying the second step height used: FGM-125, 10 GHz, no noise.

Noise was then added to the measured values to see how much the method could withstand. Unfortunately, it was discovered that very little noise is enough to cause the technique to become unstable. For a worst-case scenario, noise equal to the reported network analyzer error was added in full, in both magnitude and phase, to the scattering parameters of the theoretical result. This noise was 0.004 for the magnitude of S11 and 0.8 degrees for the phase of S11. It is not realistic to expect all of the error to push in the same direction (statistically sampling the error is truer to actual measurement conditions), but it does provide an interesting set of results. Figures 3.8 and 3.9 show results for Teflon, while Figures 3.10 and 3.11 show results for FGM-125.

Figure 3.8 Result of extracting parameters while varying the second step height used: Teflon, 9 GHz, full noise.
Figure 3.9 Result of extracting parameters while varying the second step height used: Teflon, 10 GHz, full noise.

Figure 3.10 Result of extracting parameters while varying the second step height used: FGM-125, 9 GHz, full noise.

Figure 3.11 Result of extracting parameters while varying the second step height used: FGM-125, 10 GHz, full noise.

Repeating the investigation with increasingly smaller amounts of error showed that results began to normalize for many step heights at around 1/100 of the full error amount (0.00004 e^{j 0.008π/180}, i.e., a magnitude error of 0.00004 and a phase error of 0.008 degrees). Figures 3.12 and 3.13 show results for Teflon, while Figures 3.14 and 3.15 show results for FGM-125. This amount of error is far less than would typically be encountered with real measurements, which suggests that the method is too sensitive to error to be practical.

Figure 3.12 Result of extracting parameters while varying the second step height used: Teflon, 9 GHz, 1/100 noise.

Figure 3.13 Result of extracting parameters while varying the second step height used: Teflon, 10 GHz, 1/100 noise.

Figure 3.14 Result of extracting parameters while varying the second step height used: FGM-125, 9 GHz, 1/100 noise.

Figure 3.15 Result of extracting parameters while varying the second step height used: FGM-125, 10 GHz, 1/100 noise.

3.4 Conclusions

While the method performs well with very low amounts of measurement error, it is clear that more realistic error levels cause the method to become unstable and fail. Looking closely at the noiseless data, it is not difficult to see why the technique did not succeed. Very little change occurs between air and Teflon, or between different step sizes for the various materials. Since the samples are very small and seated very near to conductors on all but one side, the tangential electric field interrogating the sample is very small. The contrast between the reflection measurements is simply insufficient to provide a useful extraction. After considering this technique, the focus shifted to finding some method for obtaining transmission measurements while still utilizing stepped waveguide transitions. To do this, it was necessary to develop a way to analyze multiple waveguide transitions which do not terminate in a conductor wall.

CHAPTER 4
Theoretical Introduction to Multi-Step Waveguides

4.1 Introduction

To analyze the effect of several successive transitions of waveguide dimensions, as seen in Figure 4.1, it is necessary to keep the real goal of the analysis in mind: developing a set of expressions which properly estimate the scattering parameters at the network analyzer ports. Figure 4.2 shows a rough sketch of a single pair of transitions within the system. For this kind of structure, several key assumptions greatly simplify the analysis.

Figure 4.1 Side view of rectangular waveguide with multiple dimension transitions.

Figure 4.2 Side view of rectangular waveguide with two dimensional transitions.

First, it is assumed that the standard rectangular waveguide sections closest to the network analyzer ports are sufficiently long that only the TE10 mode enters from the feed port, returns to the feed port, and reaches the transmission port. This is because the standard waveguide is designed to propagate only that mode, while all the other modes attenuate before reaching the ports. All of the other modes considered in the analysis matter only in the regions near the transition boundaries and are used to satisfy the local boundary conditions at each one.
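A quick numerical check of this assumption is sketched below for WR-90 dimensions (a = 0.9 inch, b = 0.4 inch): only the TE10 mode propagates anywhere in the 8.2-12.4 GHz band, and the next higher modes decay by tens of dB over a few centimeters of empty feed guide. The helper names are illustrative only.

import numpy as np

c0 = 2.99792458e8
a, b = 22.86e-3, 10.16e-3                    # WR-90 interior dimensions in meters

def cutoff_hz(m, n):
    """Cutoff frequency of the (m, n) mode of a rectangular waveguide."""
    return 0.5 * c0 * np.sqrt((m / a) ** 2 + (n / b) ** 2)

def evanescent_db(m, n, f, length):
    """Attenuation in dB of a below-cutoff mode over the given length of empty guide."""
    k = 2 * np.pi * f / c0
    kc = np.sqrt((m * np.pi / a) ** 2 + (n * np.pi / b) ** 2)
    alpha = np.sqrt(max(kc ** 2 - k ** 2, 0.0))   # Np/m for a cut-off mode
    return 8.686 * alpha * length                 # 20*log10(e) = 8.686 dB per neper

# TE10 cuts off near 6.56 GHz; TE20 and TE01 cut off near 13.1 and 14.8 GHz,
# so at 12.4 GHz they are evanescent and lose tens of dB over 5 cm of guide.
print(cutoff_hz(1, 0) / 1e9, cutoff_hz(2, 0) / 1e9, cutoff_hz(0, 1) / 1e9)
print(evanescent_db(2, 0, 12.4e9, 0.05), evanescent_db(0, 1, 12.4e9, 0.05))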
Because only the dominant mode survives at the ports, the system of equations to be developed expresses all of the modal amplitudes in terms of the incoming TE10 mode. Note that the modal expansions of the fields at each transition of the waveguide can be found using the method of moments, finite element methods, or mode matching.

Next, it is assumed that the lines to the network analyzer are perfectly matched, and thus no signal reflects back from the port interfaces into the stepped region. This means that

S11 = (b1)1 / (a1)1,

where (b1)1 is the amplitude of the dominant mode of the returning signal at the first interface and (a1)1 is the amplitude of the dominant mode of the incoming signal at the first interface. Also,

S21 = a1Q / (a1)1,

where a1Q is the amplitude of the dominant outgoing mode from the last, Qth interface.

The analysis is further simplified by assuming that each section of the waveguide is filled with a single homogeneous, isotropic material, though not all sections necessarily have the same filling material. Note that the system will not possess symmetry unless the entire set of section transitions is entirely symmetric. The system will possess reciprocity regardless of symmetry, and thus S12 = S21.

4.2 Theory for Successive Waveguide Transitions

While it is possible to extend a T-parameter cascading technique to analyze successive transitions with multiple modes, this approach requires inversion of ill-conditioned matrices. Instead, a recursive approach may be used. To do this, a sequence of matrix equations is used to solve for the relationships between the modal expansion coefficients at each section interface. For each mode a phase change,

e^{∓j βq_{m,n} ∆q},    (4.1)

occurs between each sectional transition of the waveguide, where βq_{m,n} is the propagation factor for region q and ∆q is the length of waveguide section q. It is helpful to define the matrix Pq, which has members

(Pq)_{i,j} = δij e^{−j βq_i ∆q},    (4.2)

where δij is the Kronecker delta. Since this matrix occurs many times in the analysis and the terms of this diagonal matrix are constant, it need only be calculated once for a given system of transitions and set of modes.

The interfacial transmission matrix Tq relates the modal amplitudes of waves that cross transition q to the waves that enter, in the case that both regions are semi-infinite. Note that T− will be used to denote transmission of a reverse traveling wave. The reflection matrix Γq relates the amplitudes of waves reflected back from an interface to the waves that entered, again in the case that both regions are semi-infinite. Note that Γ− will be used to indicate reflection of a reverse traveling wave. While this semi-infinite condition occurs only at the beginning and end of the system, these definitions help to simplify some equations within the system. When the regions are not both semi-infinite, the reflection relationship is defined as bq = Rq aq. This accounts for the effects of the portion of the structure beyond the interface boundary q. This derivation, which also appears in [7], gives a multi-modal expansion of the planar interface problem discussed in that text.

The relationship between each interface's amplitudes is complicated by the successive internal reflections within the regions. Taking this internal reflection into account, the relationship is

aq+1 = Pq Tq aq + Pq Γ−q Pq Rq+1 aq+1.    (4.3)

The other needed relationship is between the reverse traveling waves and the forward traveling waves. Again the internal reflections must be accounted for.
The relationship is bq = Γq aq + Tq− Pq Rq+1 aq+1 . (4.4) Making the definitions: Γ− q ′ = Pq Γ− q Tq′ = Pq Tq ′ = Pq Rq+1 Rq+1 −1 ′ ′ R Tq′ , τq = I − Γ − q q+1 where I is the identity matrix, Equation (4.3) becomes aq+1 = τq aq (4.5) Substituting Equation (4.5) into Equation (4.4) gives ′ τ a . bq = Γq aq + Tq− Rq+1 q q 76 (4.6) Next making the substitution bq = Rq an , ′ τ . Rq = Γq + Tq− Rq+1 q (4.7) This equation is the key recursion relation. The constants need to be calculated through each section of the structure using mode matching or some other technique with knowledge of each layer’s properties and boundary conditions. This is the same process that was used in Chapter 3 to analyze a single step with a reflection plane. If instead of a conductor wall, there is a perfectly matched line to the network analyzer, RQ = 0. In general terms, the process continues RQ−1 = ΓQ−1 , (4.8) − R′ RQ−2 = ΓQ−2 + TQ−2 Q−1 τQ−2 , (4.9) then and so on. This process is complete with the calculation of S11 = (R1 )11 , since the first entry is always reserved for the dominant mode T E10 . Further, the τ terms calculated at each stage of the recursion to S11 provide an easy way to calculate S21 . This comes from aQ = τQ−1 aQ−1 aQ−1 = τQ−2 aQ−2 ... a2 = τ 1 a1 thus, 77 aQ = τQ−1 τQ−2 ...τ2 τ1 a1 = τ a1 . (4.10) This means that (τ )11 = S21 . The entire process may be carried out in reverse order to obtain S12 and S22 if desired. Note it is not necessary to terminate with a matched line; a total reflection or other known termination may also be used. Also, any intermediate section can be altered without affecting the recursion of the sections beyond. This means that those sections do not need to be recalculated after such an alteration. Further specifics are not included in this dissertation, but an overview of the mode-matching approach and some of examples will be provided in the third edition of [7], which is expected to be published in 2018. 4.3 Application of Recursive Mode-Matching An established FORTRAN code already existed to perform the recursive mode-matching required for the topics discussed in the following chapters of this dissertation. Interfacing with this code was accomplished through the use of text files, which carried the important parameters required for the calculations back and forth. Modal settings, waveguide section dimensions, and material parameters were required to perform the recursion. Rather than spending time detailing this preexisting code, this dissertation focuses on the algorithms developed to apply this technique to actual electromagnetics problems. 78 CHAPTER 5 Theoretical Background for Evolutionary Optimization and Search Algorithms 5.1 Introduction This chapter provides a synthesis of various concepts learned in a sequence of classes on evolutionary algorithms and multi-dimensional optimization. Many of the concepts are well introduced in the text by Deb [8]. Others were gleaned from presentations and lectures attended over several years or are the result of personal experience from developing evolutionary algorithms used in various projects, several of which are included in this dissertation. The terms and concepts discussed here form the basis for the approaches taken in developing algorithms to solve these design problems. 5.2 Optimization Theory Functions are mappings from one vector space to another. For optimization purposes, it is often convenient to describe engineering problems in the context of two such spaces, 79 the solution space and the objective space. 
A point in the solution space represents the collection of independent variable values input to the function. A point in the objective space represents the relative fitness with respect to each separate measure used. While the unique combinations of inputs are distinct from each other, different input combinations can have the same fitness values. Inputs represent the various characteristics of the design. Describing all features in terms of mathematical variables can be challenging, but it is necessary to allow the algorithm to quickly analyze alternative designs. This process also allows incremental improvement toward the best design possible with the constraints that exist and the resources available to handle them. Some factors cannot be controlled. These still impact the fitness of the design, and because they can be correlated with other factors, they need to be included in the analysis. Different algorithms take different approaches to finding the optimal combinations of the controllable factors. Traditional techniques from mathematics use derivative, or rate of change, approaches. The greatest advantage of these algorithms is their simplicity. Unfortunately, the functions that arise in design problems are often complicated and nonlinear. For features that are not continuous or are not quantitative, the notion of a derivative may not even be valid. Design problems often possess many local optima and saddle points, which confuse the process further. In contrast, some key features allow evolutionary algorithms to avoid these pitfalls. One feature is a population based approach. Having multiple potential designs simultaneously evaluated allows for global comparisons to be made among the points. This allows the algorithms to identify the traps of local optima and saddle points. Traditional algorithms can be used in a similar fashion, but networking and communication mechanisms must be built around them to make comparisons between population members. Another feature of evolutionary algorithms that is different from more traditional optimization algorithms is the method of conveying information about the past through inheri- 80 tance. Traditional algorithms often retain no information about the past and instead make a local evaluation of the immediate surroundings at each iteration. This means that the algorithm may retry certain values many times as it backtracks over parts of the space it has already checked. This is especially common in regions where the changes are all very small, and thus, the space appears to be very flat. 5.2.1 Survey of Some Traditional Optimization Algorithms There are many well developed methods for searching mathematical spaces. Some seek to find optima of functions, while others search for roots of functions. Because these methods are well developed and have been in use for a long time, they are almost always preferable to a random search or evolutionary algorithm. Unfortunately, they are limited in their applicability to certain problems. It is important to understand the limitations of traditional approaches to search methods to see where other algorithms ought to be applied instead. 5.2.2 Gradient Method (Steepest Descent) Several texts, including [9] and [10], outline this classic approach to finding a local minimum or maximum of a function, f (¯ x) . The gradient method is an iterative approach, which repeatedly seeks the optimum of a parametrized function parallel to the direction of greatest slope from the most recently evaluated point. 
To optimize the function f(x̄), the method employs the gradient, ∇f(x̄), to find the direction of greatest slope. The direction followed, positive or negative, depends on whether a maximum or a minimum is sought. The method then constructs a parametric equation g(t) = f(x̄ ± t∇f(x̄)) and seeks the optimal point with respect to the parameter t using the derivative evaluated at an initial guess, x̄1,

∂g(t)/∂t = ∂/∂t f(x̄1 ± t∇f(x̄1)) = 0.    (5.1)

After finding the optimal t value, t1, the next guess, x̄2, is computed from

x̄2(t1) = x̄1 ± t1 ∇f(x̄1).    (5.2)

The process is then repeated with x̄2 in place of x̄1 and t2 in place of t1 in Equations (5.1) and (5.2), and so on, until the optimal value of x̄ is found. As a search routine, this method is effective but can be slow for gradually sloped functions. Functions with multiple local optima can present a problem for this algorithm since the optimum found depends strongly on the initial point evaluated. If the fitness mapping is not unimodal, the local optimum closest to the starting point is the one that will be found.

5.2.3 Golden Section

This approach to optimization was developed from the paper by Kiefer [11]. This deterministic algorithm searches through an interval, ∆, in predetermined subdivisions defined using the value φ = (1 + √5)/2 ≈ 1.618034. Define the distances a = ∆/(1 + φ) and b = ∆/(1 + 1/φ). Calculate the function value at the endpoints and at the point a distance a away from the left endpoint. Then subdivide each subinterval using the same ratio. With five points and their function values known, it is possible to find a new pair of adjacent intervals with the optimum inside. Like the gradient method, this approach can get stuck on local optima if they are encountered first due to the choice of the original interval bounds. It is greatly beneficial that the requirement of differentiability of the function is removed, since only function evaluations are required. It might be possible to extend the method to higher dimensional spaces using higher dimensional intervals.

5.2.4 Secant Methods

The Newton-Raphson method is well known and uses slopes of tangent lines of a function to find its way to roots of the function. When secant lines are used instead, the methods are referred to as secant methods. The texts [12] and [10] provide a good introduction to both methods. A straightforward implementation of the secant method for two complex functions of two complex variables was used in these projects. Like the other classical methods mentioned, this algorithm is susceptible to local issues near the initially guessed points. Also, while it often converges to an answer quickly, it sometimes does not converge at all. This is especially true if the initial guesses are far from the true roots and if the initial step sizes are not scaled well. Further, the method may produce no result if there is no root near the initial guesses.

5.3 Basics of Evolutionary Algorithms

An excellent introduction to basic genetic algorithms may be found in [13]. Generally, these grew out of attempts to recreate or mimic biological systems of reproduction and evolution. The algorithms usually possess common characteristics borrowed from the cycles observed in nature. Selection based on fitness, reproduction, and mutation provide the useful building blocks of adaptation to a challenge or environment.

5.3.1 Chromosomes

Information about an organism's shape and characteristics is contained within its genetic code.
In a genetic or evolutionary algorithm, genes encapsulate the information in discrete packets which may be easily interpreted and changed. The collection of genes is referred to as a chromosome. Genes may be subdivided into three loosely defined types. • The first type is the standard gene; as mentioned above, these carry the information 83 about the population member that define its unique qualities and characteristics. • The second type are markers or tracers. These preserve information about the experience and origins of a population member and do not necessarily find open or outward expression. Their purpose is to inform the user about key details of how the algorithm is functioning. • The third category are controls. These can be used to change the arrangement and shape of the chromosomes. They can even use the information from the markers to alter the genes themselves. The nature of the chromosomes determine the search space of the algorithm. While the earlier genetic algorithms utilized binary genes, modern methods utilize many kinds of genes. Real numbers, integers, and even character strings can be used, but great care is required to ensure that the operations such as mutation are adapted appropriately for these other types of genes. Every designer is unique, and as a result there are many correct ways of establishing the chromosomes for a particular problem. Some types of genes are easier to use for expressing certain kinds of characteristics, while others work better with certain kinds of operators. The most important principles to follow are to always introduce as little complexity as is needed to handle the problem and to utilize gene structures that can be easily switched in and out. This modular approach allows complexity to be added or removed as needed, and makes it possible to replace components of the algorithm when they are not working as expected. The algorithms utilized in the projects described in this dissertation tend to be somewhat simple, utilizing standard genes only; however, many of the alterations to the algorithms made during the initial phases of the projects could have been made automatically by more sophisticated algorithms, utilizing all three types of genes. An effort will be made to note the points where this next level of algorithm can be implemented in the future, in the discussions about the algorithms shown here. 84 5.3.2 Fitness A more fit design better satisfies the needs of the designer. The definition of fitness is somewhat subjective. In the context of survival, it can refer to the relative chance of continuing to exist in a hostile environment, or it can even represent the chance of reproducing and passing on unique characteristics or traits. The ultimate goal of an engineered design is to meet some need or provide some function as effectively as possible. Thus, from the perspective of engineering problems, the success of a design in meeting design requirements and criteria can be an effective measure of fitness. Maximization and minimization are useful notions carried over from the traditional optimization world, and can provide a framework for the structure of the fitness measure. The scale should be consistent and quantifiable. In other words, it should be possible to express every component of fitness in mathematical terms and to determine the value of each by rules and definitions. For characteristics that are qualitative in nature, some quantifiable expression must be derived in order to contribute to fitness. 
Consequently, qualitative parameters are often optimized from the standpoint of distribution or arrangement. In a sense, the fitness mapping of the population members is the most important facet of the algorithm. One approach that is particularly useful is to construct a weighted sum of independent terms, each of which comes from a different part of the chromosome. Many engineering relationships are higher-order, correlated, or even nonlinear, which can make this impossible in practice, but it is still a good idea to limit such connections across the chromosome. One of the advantages to this way of visualizing the fitness mapping is the convenient separability of the function into smaller subdivisions. This makes multidimensional algorithms easier to implement since the several fitness functions can be reorganized as desired simply by regrouping the terms. Some of the terms can even be duplicated in more than one fitness function, which has the effect of assigning that term more weight in the problem. In any 85 case, the weights of the various components become very important to the soundness of the final solution of the algorithm. Often, these weights need to be reevaluated in the middle of the process and tuned to better reflect the knowledge that the developer has obtained since starting the process. 5.3.3 Constraints Versus Penalties During the creation of the fitness function, there are portions of the search space that are identified as physically impossible or extremely undesirable. Such areas provide a temptation to prevent or constrain the algorithm from trying such solutions. While it is certainly valid to do this, there can sometimes be advantages to allowing the algorithm to assign a fitness to such a region. It is necessary to punish the member of the population falling in such a region in order to properly reflect its infeasibility. This can be accomplished by adding a properly scaled reversal of fitness. Such penalties allow the algorithm to pass through such a region to a more acceptable set of parameters on the other side without getting trapped on its edge at an apparent optimum which simply isn’t real or isn’t actually achievable. In other words, the penalty should cancel out any local optima of the fitness in a region of the space where solutions are not allowed to exist. The scaling of the penalty can be tricky and should probably be reassessed several times during the development of the algorithm to take advantage of knowledge obtained from trial runs. 5.3.4 Driving Evolution of Designs In biology, adaptations can take many generations to emerge. Randomness in the process of inheritance leads to situations where children are not actually more fit than parents. With this in mind, operations designed to encourage improvements in the population should be tailored to focus on specific parts of the chromosome and to work over many generations. If the operations are overlapping in their focus, they risk undoing improvements previously 86 made. If they are not designed with a long term plan, they can turn back over regions already explored, wasting time. One can classify methods and operators for accomplishing change three ways: • Subtractive methods focus on removing unfit members from the population or from the mating pool. • Additive methods focus on generating new members for the population. • Mutative methods focus on making changes and improvements to existing members. Most algorithms use all three of these methodologies to some extent. 
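As a rough sketch of how the three operator classes can coexist within a single generation loop, consider the following. The particular choices made here (truncation culling, uniform random spawning, Gaussian perturbation of every member) are placeholders for illustration and are not the operators used in the later chapters.

import numpy as np

rng = np.random.default_rng(0)

def one_generation(pop, fitness, pop_size, n_new=10, sigma=0.1):
    """One generation combining subtractive, additive, and mutative operators.

    pop     : (N, n_genes) array of real-valued chromosomes
    fitness : callable mapping a chromosome to a scalar score (lower is better here)
    """
    scores = np.array([fitness(x) for x in pop])
    keep = pop[np.argsort(scores)[: pop_size - n_new]]        # subtractive: cull least fit
    spawned = rng.uniform(0, 1, size=(n_new, pop.shape[1]))    # additive: inject new members
    new_pop = np.vstack([keep, spawned])
    new_pop += rng.normal(0.0, sigma, size=new_pop.shape)      # mutative: perturb everyone
    return new_pop                                             # (an elitist variant would exempt the best few)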
An algorithm distinguishes itself from others by the choices it makes for these processes. In concert with these methods, there are two approaches that can be used to implement them. • The environmental approach for an algorithm focuses on the conditions in which the population exists. Examples include tests for survival, opportunities for reproduction, and situational alterations to fitness evaluations. This approach makes choices that affect all population members simultaneously. • A more local, adversarial approach focuses on pitting individuals within the population against each other. This approach can be purposely arranged or random. Again, most algorithms mix these two approaches to some extent. Different combinations of tools can be used, and the mixture of methods can be adjusted over time and in different situations. Many tools exist to implement contests of fitness between population members. Tournaments and dart-boards are two examples. These tools can also be determined by things other than fitness such as having certain features or possessing a particular gene. Another way of comparing algorithms is by the level of randomness or determinism involved. Advantages to determinism include repeatability, predictability, and the ability to guarantee a complete search of the space will take place if desired. Since there is usually little 87 knowledge about the location of the optimum beforehand, too much determinism can waste a lot of time searching in the wrong places which results in longer run-times. In contrast, stochastic properties introduce the opportunity to achieve quicker results. The random elements can also reveal unpredictable traits or properties. They do however increase the risk of repeating solutions that were already tried. If unlucky, too much reliance on chance can cause an algorithm to become stuck or miss regions of the search space entirely. 5.3.5 Inheritance and Crossover Given that there are very many possible ways of handling inheritance, it is difficult to describe them all here. Basic principles can be outlined, however. The most important principle is to generate new population members close to the most fit members of the previous generation. Taking the genes of fit parents and recombining them produces a child which explores the space between its parents. Unlike biological reproduction, there can be algorithms that use more than one or two parents. If the child is more fit than any parent, then the global optimum might be in that region. Further exploration of this region is accomplished by generating additional children in the area. This becomes slightly reminiscent of the golden section search if a child is created between members of two different generations. If the region being explored has a local optimum instead of the global optimum, then creating additional children there is a waste of computational time. Transfer of the genes to a child can be done individually or in groups. For binary algorithms, crossover operations can involve exchanging gene groups from one part of the chromosome to another. This is a disruptive process for the algorithm, as parents had gene values that led to good fitness values. Wholesale disruptions of gene arrangements can actually result in worsening fitness. Elitist algorithms preserve the best genes, and are thus defended from this disruption. Another option is to create many children with varying degrees of disruption. 
For example, using a variable to control how many genes come from each parent allows some children to be almost identical to one parent but still have some genes 88 contributed from others. This can also involve more complicated chromosomal structures. It should be understood that the more children that are created, the more computational time will be required to evaluate their fitness. For this reason, it is often better to balance the innovation of gene exchange with time and population concerns. 5.3.6 Diversity Given the mechanisms for transferring the legacy of previous generations, it is easier to see the importance of diverse populations for exploration of the solution space. A child created from a diverse population has more clusters of different gene combinations to inherit. The overall tendency of the algorithm however is to eliminate less optimal solutions. As a result, diversity is usually eliminated over time, as the solutions tend to congregate at the currently known optima. If these are local optima, however, the algorithm risks becoming trapped. One method of preserving diversity, with the trade off of slower convergence, is to create special archives or niches where conditions are structured differently from the main algorithm. Each of these niches has, in effect, a different mapping to the objective space. Since the objectives in the niche are rearranged, a different optimum is indicated. This does increase the run-time of the algorithm, because new solutions created in the niche are not necessarily optimal in the overall sense of the term. It is important to communicate what each niche has learned with the others through interbreeding, otherwise the benefits of the diversity are lost. Elitist archives are a more focused way to preserve diversity. They preserve a few solutions which are optimal under the conditions that could be used to form a niche; however, they do not breed locally. Instead, the archive’s members are bred with each other. Thus all new children explore the space between the previous generations’ best members from every perspective. As long as the proximity of children to a given parent is allowed to vary, a full exploration of the solution space between the elite members is possible. 89 5.3.7 Mutation Ultimately, the process of finding the optimum requires that all of the genes of the optimal solution be identified. A fit member of the population has some of the genes at their optimal values, but must change the others in order to find the true optimum. Recombination of the known genes only explores a portion of the solution space, if the initial randomly selected population does not contain a complete basis set for it. The process of mutation provides a hedge against this problem. Random changes to one or more genes of a chromosome is mutation. The changes may improve fitness, but they might also make it worse. There isn’t really a way to predict the effect of a change ahead of time, especially when many of the genes may be correlated with each other. In a traditional search algorithm, local numerical derivatives are applied to find the direction of optimal change. This is analogous to creating a child with a single gene changed from the parent in question for every gene in the chromosome. If the fitness function is expensive to evaluate, this can massively increase the run-time of the algorithm. If instead a few directions are selected at random and tested, it is possible that the optimal direction can be found much more cheaply. 
It is also possible that the true optimal direction will be missed entirely. The advantage is realized when the direction of best change is at least partially found; the next generation may find another portion of the genes needing to be changed. The results mark out a back and forth path across the search space, but they hopefully inch closer to the optima over time. Care is required when using mutation. If the level of mutation is too great, it can eliminate any advantage from inheritance and the results of previous generations. Effectively, this turns the algorithm back into a random search. A balance of the various pressures on the chromosomes tends to produce the best results. Only experience with a particular solution space can give insight into the correct balance for a given problem. Well designed algorithms will allow for readjustment, automatic or manual, as more is learned about the problem. 90 5.4 5.4.1 More Advanced Concepts Real Number Genetics The binary gene structure is fairly easy to construct, but using a real number gene structure is much more difficult. Using two possible values means that a single bit-flip can radically alter the design. With countable numbers of gene values, there is a limit to the number of possible combinations of gene values. However, real number genes possess an infinite number of combinations. Additionally, changes to the gene can be very small or very large. Algorithms designed around real numbers have to apply the various components very carefully to ensure that the intended results still occur. 5.4.2 Modifications to Inheritance An option for inheritance becomes possible with real numbers or integers that is not possible with binary numbers. A child can inherit a weighted average of the parents rather than merely taking the value from one in particular. This is a useful feature, as it combines the steps of crossover and mutation. Another option for doing crossover is a simulated binary crossover. This attempts to create a bimodal distribution with centers near the parents’ chromosomes. This replicates useful features of binary crossover applied to binary coded real numbers without the scale issues typically encountered. Changing one bit in a binary coded real number can produce a massive shift in the value of the number. The bimodal distribution has the greatest probability mass near the parents and thus avoids this issue. 5.4.3 Modifications to Mutation For an integer based algorithm, a simple uniform distribution of mutated gene values can achieve useful searching. For real number based algorithms, the distribution of mutations gains more importance. Massively disproportionate changes in gene values may not be 91 productive. A pseudo-Gaussian distribution for mutations can be useful for real numbers, since it does not completely abandon large changes to a gene but instead makes them much less likely. This allows the algorithm to focus on small changes without making escape from local optima impossible. Another useful variation is to use real number operations but to round before doing the fitness mapping. This allows the simplified mapping to a countable number of configurations without losing the benefits of real number approaches to mutation. 5.4.4 Multiple Dimensions of Fitness While a single fitness function can accomplish multiple objectives, there is always the question of the relative weights among fitness components. 
Consider running the single dimensional algorithm with different weights several times and then comparing the results. Some might agree on the area of optimal fitness, but others might find different areas of the solution space to be better. Separating the notions of fitness into multiple distinct perspectives allows the algorithm to alternate among different directions of the overall fitness landscape as it moves toward a global optimum. The true benefit of multidimensional approaches comes from running these perspectives simultaneously. Children from subsequent generations come from choices based on alternating perspectives and they may eventually be mated. This creates flexibility in which aspects of fitness are focused on with different reproduction events, and this flexibility allows the algorithm to escape from local optima and traps over several generations. One way of visualizing this process is by looking at unit vectors in the objective space. Different weights establish vectors pointing in different directions in the hyperplane. Extending the vectors to infinite lines and following the population from one generation to another would reveal a path crossing from one line to another as the mating decisions over several generations were made based on different weight vectors. Benefits to the run time of the algorithm can be realized when these approaches can be 92 pursued in parallel. This must be done with care since without the sharing of information among the parallel perspectives the benefits of multiple dimensions are largely lost. 5.4.5 Domination Concept The notion of domination is another useful tool for evaluating fitness. To strongly dominate another point in the objective space, the dominating point must be truly more fit with respect to every fitness function. A point can weakly dominate another point by being equally fit for some fitness functions and better in the others. Points that are better with respect to one dimension of the space but worse with respect to the others, are not dominated. The curve connecting the non-dominated points is known as an optimal front. When all of the points on an optimal front represent the truly best points achievable on their directional vector, the front is known as a Pareto optimal front. Typically, dominated points are discouraged in the algorithm. It can be useful to keep weakly dominated points in the population, since this is the only way to find ’flat’ sections of the optimal front. It has been found to be helpful to strongly punish exact duplicates, however, as these points tend to take over the population and destroy the solution space diversity needed to innovate out of local optima. 5.5 Closing Remarks In deciding how to approach design problems, it is important to weigh certain factors carefully. First, check to see if an already well established algorithm can get to a satisfactory answer. Most such algorithms have benefited from years of fine-tuning and improvements and tend to get answers faster to general problems. While an algorithm perfectly applicable to a specific problem might not exist, a variation or modification might be possible that allows the use of an established algorithm. If no satisfactory algorithm is available, start from one or two concepts that are easy to 93 implement and add complexity as required to achieve the desired results. Modular algorithm design is very helpful as it allows different components of algorithms to be switched in and out. 
Recognizing where certain approaches have useful properties allows one to put together these tools in combinations that best accomplish the basic tasks of a search algorithm. What evolutionary algorithms all have in common is a dependence on the quality of the fitness mapping. With a properly defined fitness mapping, it is only a question of time to find an acceptable design. 94 CHAPTER 6 Designing Surrogate Magnetic Materials for Nicolson-Ross-Weir Waveguide Measurement Validation 6.1 Introduction As mentioned in Chapter 2, validating the performance of a material measurement system is undertaken with a sample possessing well known material properties. When the material under test exhibits only electric properties, a block of dielectric such as Rexolite or Teflon may be used as the verification standard. Unfortunately, materials with significant magnetic properties at microwave frequencies have characteristics that are difficult to reproduce in a reliable manner. Since these materials usually consist of magnetic particles dispersed within a polymer matrix, there is significant variability between samples, and materials with highly predictable properties cannot be reproduced reliably. Therefore, there is a need for a surrogate material sample that can be constructed with high accuracy, having the equivalent magnetic properties of typical microwave materials when characterized using the NRW method. 95 Recently, an all-metal material surrogate was proposed by Crowgey et al. [14] that can be duplicated with high precision using normal machining techniques. When characterized using the NRW method, the surrogate produces properties similar to those of typical magnetic materials and thus can be used to verify the proper performance of the measurement system. However, since the structure is assembled from several stacked waveguide irises, the procedure used to validate the measurement system is not the same as that used to measure the properties of sample materials. In this paper, a new waveguide surrogate is proposed that can be inserted into the waveguide in the same manner as the sample under test, thus producing a verification procedure identical to the normal measurement procedure. The proposed material surrogate is a solid metallic block that fills the waveguide horizontally, but not vertically. A specified number of segments are machined into the block producing a structure with vertical steps that can be easily analyzed using mode matching. By using a genetic algorithm, the heights and widths of the segments are chosen so that the equivalent material parameters, when extracted using the NRW method, are smooth and slowly changing across the measurement band, and best match those of specified target values. Assuming that the metallic structure is perfectly conducting, the imaginary parts of µ and ǫ either are zero, or have opposite signs [15]. Since positive imaginary parts are unphysical, the imaginary parts of the target parameters are set to zero. An additional benefit of the proposed material surrogate is that it is smaller and can be made lighter than the solid blocks of dielectric that are currently used as verification standards. Examples of such dielectrics include Rexolite, Teflon, and High Density Polyethylene. Many organizations use large waveguides, such as WR-1800 or WR-975, to measure material properties in the low microwave and UHF bands. A possible sample for the WR-1800 waveguide, an 18 in. x 9 in. x 2 in. block of Rexolite 1422 weighs approximately 12.312 lbs. (5.585 kg.). 
Such large samples are cumbersome to maneuver and difficult to insert into large waveguides. The solid metal surrogate may be hollowed out to reduce its weight, and the 3-D printed standard is inherently much lighter than a full-size sample. Ease of insertion 96 significantly reduces the labor required for verification and also provides a safer alternative to pounding large blocks of material into the waveguide with a mallet, as is done in practice. The procedure for designing the proposed verification standard is outlined, and designs for two waveguide bands are described. Measurements of the standards, fabricated both through a conventional machining process and by 3-D printing and electroplating, are provided. Excellent agreement between the measured material parameters and those predicted by analysis validates using these standards for verification purposes. 6.2 Design Methodology and Evolutionary Algorithm The final forms of the evolutionary algorithms presented in this dissertation were developed over several years. The development was carried out through several projects with adaptations made for each project as needed. Every problem is slightly different, and thus the decisions made in the structuring of the algorithm depended on the needs of this specific problem. The specific EA used in this work was synthesized using components from techniques outlined in [8]. One key feature of the algorithm used was the use of a separate mode-matching code. The mode matching was conducted with a compiled FORTRAN executable, which provided a relatively quick calculation of the scattering parameters. Passing of the parameters and variables was handled through ASCII files and read/write input and output between the executable and the Matlab scripts being used to prototype the algorithm. Now that the algorithm is developed, future use would be greatly accelerated by writing the entire algorithm in a single FORTRAN code, which could be run on a supercomputer. Even with the slower prototype version developed, only a few generations were required to converge to an acceptable design with a population size of 100. The actual time required to determine a typical design was approximately 1-2 hours of computer time using a 3.7 GHz Intel Core i7-4820K CPU with 64 GB of RAM. 97 6.2.1 Chromosome Structure The proposed verification standards fully span the width of the waveguide but only partially fill the height. Because their composition is metallic, the surrogates possesses little loss and can safely be modeled as being made from a perfect electric conductor. The length of the surrogate is specified, and divided into NL sections of equal length ∆L . The heights of these sections are determined by an EA such that the material parameters extracted using the NRW method best match those of chosen target values. The resulting structure consists of a number of vertical steps that can be easily machined, and can also be easily analyzed using standard mode-matching approaches [14]. To reduce the complexity of the problem, and to produce a structure that mimics the symmetry of actual samples, the steps are specified as mirrored pairs, resulting in S11 = S22 . In addition to the symmetry requirement, the following constraints are applied to the designs: 1. The height of the waveguide is discretized into NH identical values of height ∆H , and the vertical dimension of each section is chosen as a multiple of ∆H . The maximum allowed height is (NH − 1)∆H while the minimum allowed height is ∆H . 
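A sketch of how such a chromosome can be encoded is shown below, assuming integer height indices between 1 and NH − 1 for each independent segment, mirrored to enforce S11 = S22; the helper names and the random initialization are illustrative only.

import numpy as np

rng = np.random.default_rng()

def random_chromosome(n_half=3, n_h=20):
    """Integer height indices for the independent half of the surrogate.

    n_half : number of independent segments (the 6-segment designs have 3)
    n_h    : number of height divisions; allowed indices are 1 ... n_h - 1, so the
             guide is never completely blocked and the part remains one piece
    """
    return rng.integers(1, n_h, size=n_half)        # upper bound is exclusive

def segment_heights(chrom, b, n_h=20):
    """Mirror the half-chromosome and convert indices to physical segment heights."""
    delta_h = b / n_h                                # height of one division
    full = np.concatenate([chrom, chrom[::-1]])      # mirrored pairs give S11 = S22
    return full * delta_h

# Example: a 6-segment design in WR-90 (b = 0.4 inch)
chrom = random_chromosome()
print(chrom, segment_heights(chrom, b=0.4))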
This ensures that the structure does not block the waveguide completely and that it may be machined as one continuous piece.

2. The length of the surrogate is chosen such that the logarithm function used in the NRW method to extract the material parameters occupies a single Riemann surface for the entire frequency band. This prevents the discontinuous jumps in permittivity and permeability that occur when the surface boundaries are crossed as frequency changes.

It is important to carefully consider the number of sections and discretized heights to use. Allowing larger values of NL and NH produces greater design flexibility, but also creates a much larger search space for the EA to explore. A well-designed EA will navigate the solution space toward optimal points, but it can take significantly more time to find better designs if the problem is very large. Real-number components were used for many of the algorithm's operators to allow any number of divisions to be used, and divisions into 10 or 20 portions were both examined. Ultimately, 20 divisions of the height proved sufficiently flexible to allow good designs to be found.

Designs with 14 length segments were considered during the initial phases of the project. Several good candidate solutions were found, but some were fairly intricate, with tall, thin fins projecting from a solid base. The overall length of the surrogate is restricted by the Riemann surface requirement, and for higher-frequency waveguide bands the sections become very narrow. Deformation from stress relaxation can occur when machining, or gaps in metal coverage can occur when printing and adding a metal layer. A smaller EA problem was created by setting the heights of the outside three sections to be the same, and likewise the heights of the next two sections and again the two innermost sections. This had the effect of reducing the number of length segments and giving them greater lengths. The most successful designs ultimately came from considering these six-segment designs.

6.2.2 Fitness Measures

The fitness function is constructed from a summation of weighted terms. First, the theoretical S-parameters, as determined from mode matching, are used to compute values of ǫ and µ through use of the NRW method. Then the core fitness terms of the sum are calculated from the difference between these extracted values of permittivity and permeability and the target values specified by the designer. To provide a greater contrast at values close to the target, the dB scale, 20 log10(x), was used for the fitness factors. Any factor truly matching the target would thus have a fitness of −∞. To avoid any single match drowning out other factors, this was instead capped at −200 dB. The best core fitness would thus be −800 dB times the number of frequencies used.

An additional set of terms is based on the variation of the extracted parameters across the frequency band, with the goal of producing parameters that are as frequency-independent as possible. Specifically, if the magnitudes of the differences between the largest and smallest values of the real parts of ǫ and µ are each less than 20 dB, a small fitness bonus is applied. Finally, additional fitness factors were added for designs with averages of core fitness terms that are negative, and thus closer than 1 to the target value. The weights of these bonus terms were adjusted carefully depending on the number of frequencies used in order to avoid outweighing the primary fitness factors.
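To make the structure of these core fitness terms concrete, a minimal sketch follows. It is written in Python purely for illustration (the project itself used Matlab scripts driving a FORTRAN mode-matching executable); forward_solver and nrw_extract are hypothetical stand-ins for the mode-matching code and the NRW inversion, and the flatness and bonus terms described above are omitted.

```python
import numpy as np

def core_fitness(heights, freqs, eps_target, mu_target, forward_solver, nrw_extract,
                 cap_db=-200.0):
    """Sum of dB-scaled, capped error terms between extracted and target parameters.

    `forward_solver(heights, f)` and `nrw_extract(s11, s21, f)` are hypothetical
    stand-ins for the mode-matching code and the NRW extraction used in this work.
    """
    total = 0.0
    for f in freqs:
        s11, s21 = forward_solver(heights, f)       # theoretical S-parameters
        eps, mu = nrw_extract(s11, s21, f)          # equivalent material parameters
        errors = (eps.real - eps_target.real, eps.imag - eps_target.imag,
                  mu.real - mu_target.real, mu.imag - mu_target.imag)
        for err in errors:
            term = 20.0 * np.log10(abs(err)) if err != 0.0 else cap_db
            total += max(term, cap_db)              # cap any single match at -200 dB
    return total  # more negative is better; best possible is cap_db * 4 * len(freqs)
```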
6.2.3 Selection Process and Inheritance

Two pools of prospective parents were chosen. The first consisted of the 9 most fit unique members of the previous generation. These 9 were then copied repeatedly to fill a pool equal in size to the population. The second pool consisted of the top 50% of the population, copied once to again create a pool the size of the total population. Since this did not control for uniqueness, more fit designs could have multiple copies within this second pool. Using two different pools of parents allowed the preservation of some diversity while still promoting overall better fitness.

The first set of unique parents was directly copied to the next generation. This elite preservation ensured that the 9 best designs would always survive. The remainder of the population pool was then filled with children of the two mating pools. The two pools were randomly shuffled, and then, after skipping the first 9 positions, the i-th and (i+1)-th members of each pool were taken. The prospective parent which had the better fitness value was chosen from each of these pool member pairs. With two parents thus selected, a crossover operation was performed. The crossover followed this algorithm:

• a = random gene position, b = random gene position, r = random number between 0 and 10%

• If a ≤ b:
  child genes 1:a = r·(p1 genes 1:a) + (1 − r)·(p2 genes 1:a)
  child genes a:b = (1 − r)·(p1 genes a:b) + r·(p2 genes a:b)
  child genes b:n = r·(p1 genes b:n) + (1 − r)·(p2 genes b:n)

• If a > b:
  child genes 1:b = (1 − r)·(p1 genes 1:b) + r·(p2 genes 1:b)
  child genes b:a = r·(p1 genes b:a) + (1 − r)·(p2 genes b:a)
  child genes a:n = (1 − r)·(p1 genes a:n) + r·(p2 genes a:n)

A rotation operator was added to shift all of a child's genes randomly left or right one position with a small probability, or two positions with a very small probability. This increased the exploration of the space by considering cases where small groups of segments might be shifted off of their optimal location in the design. This was applied to only 1/8 of the created children.

6.2.4 Mutation

Multiple levels of mutation were applied to different portions of the children generated each generation. This layering of mutation rates was a form of diversity preservation intended to have parts of the new generation explore different regions of the design space. Some children would vary little from their parents, while others would be very different and provide the opportunity to discover features that were not present in previous generations. These rates had to be carefully tuned over time, since too much mutation can cause an algorithm to have great difficulty in converging. Note that all mutations were capped by the same upper and lower bounds allowed for the segment height divisions.

All children had a 100% chance of a very small pseudo-Gaussian mutation to each gene. Since the genes were integers, a standard deviation of 1 was used for the mutation. Effectively, this meant that each gene had a very small chance of changing by more than 1, either positive or negative. All children also had a 1/7 chance of a much larger pseudo-Gaussian mutation with a standard deviation of 10 applied to each gene. Finally, 1/4 of the children (including the 1/8 undergoing rotation) had a 50% chance of pseudo-Gaussian mutation with a standard deviation of 5 applied to each gene.
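The crossover, rotation, and layered mutation operators just described can be summarized in the brief sketch below. It is in Python for illustration only (the prototype was written in Matlab), the random draws are made independently here for simplicity rather than coupling the rotation and medium-mutation subsets, and the gene bounds assume the 20 height divisions used in this chapter.

```python
import numpy as np

rng = np.random.default_rng()

def crossover(p1, p2):
    """Weighted blend of two parent chromosomes over three gene segments."""
    n = len(p1)
    a, b = rng.integers(n), rng.integers(n)   # two random gene positions
    r = rng.uniform(0.0, 0.1)                 # random blend weight, 0-10%
    lo, hi = min(a, b), max(a, b)
    w = np.full(n, r)                         # weight applied to parent 1
    w[lo:hi] = 1.0 - r                        # middle segment leans on the other parent
    if a > b:
        w = 1.0 - w                           # a > b reverses the segment weighting
    return w * np.asarray(p1, float) + (1.0 - w) * np.asarray(p2, float)

def mutate(child, lo=1, hi=19):
    """Rotation plus layered pseudo-Gaussian mutations; genes stay integers in [lo, hi]."""
    child = np.asarray(child, float)
    if rng.random() < 1 / 8:                  # occasional shift of the whole design
        child = np.roll(child, rng.choice([-2, -1, 1, 2], p=[0.1, 0.4, 0.4, 0.1]))
    child = child + rng.normal(0.0, 1.0, child.shape)       # small mutation, every child
    if rng.random() < 1 / 7:
        child = child + rng.normal(0.0, 10.0, child.shape)  # rare large mutation
    if rng.random() < 1 / 4 and rng.random() < 0.5:
        child = child + rng.normal(0.0, 5.0, child.shape)   # medium mutation for a subset
    return np.clip(np.rint(child), lo, hi).astype(int)
```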
In summary, 50% of the population had very small differences introduced from the genes of the parents (this includes the fact that the child's genes came from a weighted average of the parents' genes), while the other 50% underwent varying levels of extreme mutation. With these operations built into the algorithm, it is highly unlikely that the quick convergence experienced was the result of the algorithm becoming stuck on a local optimum.

6.3 Fabrication and Testing of a Brass WR-284 Standard

A verification standard was designed for an S-Band waveguide of width 2.84 inch (72.14 mm) and height 1.34 inch (34.04 mm), covering an operating band of 2.6-3.95 GHz. The structure is 0.7 inch (17.78 mm) long and is divided into six sections, alternating between 0.1 inch (2.54 mm) and 0.15 inch (3.81 mm) in length, as shown in Table 6.1. The waveguide height was discretized into NH = 20 increments of ∆H = 0.067 inch (1.7018 mm). A target permittivity of 6 − j0 and permeability of 6 − j0 were chosen because of their interest to the end user, and the EA was run to determine the heights of the sections. Several adequate results were obtained, with the dimensions of the surrogate most amenable to fabrication described in Table 6.1.

The verification standard described in Table 6.1 was machined from brass stock; the fabricated structure is shown in Figure 6.1. Note that this particular standard has a very low profile and thus is lightweight and easy to machine. Figure 6.2 shows the sample holder with the standard in place. Use of a NanoMap-500LS Contact Surface Profilometer over 12.7 mm of the top surface of the standard, with a data resolution of 1.3 µm, revealed the standard deviation of the surface elevation to be 1.3139 µm.

The S-parameters of the sample were measured using an Agilent E5071C network analyzer with two coaxial-to-waveguide transitions attached to two 5 inch (127 mm) waveguide sections. Calibration was carried out using the TRL procedure with a 1.1875 inch (30.1625 mm) sample holder placed between the 5 inch waveguide sections. The verification standard was then placed into the sample holder with its front face flush with the calibration plane of port 1. De-embedding was used to obtain the scattering parameters referenced to the front and rear faces of the verification standard.

Table 6.1 Dimensions of the Tested WR-284 Standard.

Section   Width (mm)   Height (mm)
1         2.54         0.34036
2         3.81         0.8509
3         2.54         0.34036
4         2.54         0.34036
5         3.81         0.8509
6         2.54         0.34036

Figure 6.1 Brass WR-284 verification standard.

Figure 6.2 WR-284 waveguide sample holder with verification standard in place.

The verification standard was measured 7 times, calibrating at the beginning and again between the fifth and sixth measurements. The standard was removed and reseated in the sample holder for each measurement. The averages of the extracted parameters are shown in Figures 6.3 and 6.5, along with the theoretical values found using mode matching. Excellent agreement is seen for both ǫ and µ across the band. The absolute differences between the parameters obtained from measurement and theory are shown in Figures 6.4 and 6.6, along with the sample standard deviations for the measurements. The sample standard deviations are very small, showing that measurement of the waveguide standard can be repeated with high reliability. The deviation from theory is actually slightly larger than the standard deviation of the measurements.
The small deviation from theory is attributed to systematic errors, which include any deviation from the dimensions of the design, contamination of the sample, scratches on the surface, and so on.

Figure 6.3 Relative permittivity of the brass WR-284 verification standard.

Figure 6.4 Measurement error and standard deviation of the measured permittivity for the brass WR-284 verification standard.

Figure 6.5 Relative permeability of the brass WR-284 verification standard.

Figure 6.6 Measurement error and standard deviation of the measured permeability for the brass WR-284 verification standard.

6.4 Fabrication and Testing of a 3-D-printed WR-284 Standard

Three-dimensional printing is proving to be an affordable technology for fabricating waveguide components. The part is printed as a polymer object and then sputtered or electroplated with a metal coating. Care must be taken to ensure that the thickness of the metal coating is significantly larger than a skin depth for the frequencies of operation. In the case of S-band, one skin depth of copper is approximately 1.3 microns.
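That figure follows from the standard skin-depth expression δ = sqrt(2/(ωµ0σ)); the short Python sketch below, using the usual bulk conductivity of copper (about 5.8 × 10⁷ S/m), is included only to make the number concrete.

```python
import numpy as np

def skin_depth(freq_hz, sigma=5.8e7, mu_r=1.0):
    """Skin depth, delta = sqrt(2 / (omega * mu0 * mu_r * sigma)), in meters."""
    mu0 = 4.0e-7 * np.pi
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    return np.sqrt(2.0 / (omega * mu0 * mu_r * sigma))

# Copper across S-band: roughly 1.3 um at 2.6 GHz, about 1.05 um at 3.95 GHz
print(skin_depth([2.6e9, 3.95e9]) * 1e6)
```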
A duplicate of the brass WR-284 verification standard described in the previous section was printed using an Objet Connex350 printer and VeroWhite material. Two small holes were printed into the part to allow electroplating wires to be inserted. After printing, the part was sputter coated with a 300 nm layer of titanium followed by a 1 micron layer of copper. The part was then electroplated with approximately 5 microns of copper to achieve a coating much thicker than a skin depth. Use of a NanoMap-500LS Contact Surface Profilometer over 12.7 mm of the top surface of the standard, with a data resolution of 1.3 µm, revealed the standard deviation of the surface elevation to be 2.8396 µm. The fabricated part is shown in Figure 6.7.

Figure 6.7 3-D printed WR-284 verification standard with copper electroplating.

The 3-D-printed verification standard was measured 7 times, calibrating at the beginning and again between the fifth and sixth measurements. As with the brass standard, the 3-D-printed standard was removed and reseated in the sample holder for each measurement. The averages of the extracted parameters are shown in Figures 6.8 and 6.10, along with the theoretical values found using mode matching. Excellent agreement is seen for both ǫ and µ across the band. The absolute differences between measurement and theory are shown in Figures 6.9 and 6.11, along with the sample standard deviations for the measurements. Note that both the errors and the standard deviations are somewhat higher than for the brass verification standard. Non-uniform deposition and chafing of copper, a slight bowing of the printed part, and surface roughness contribute to the discrepancy. These produce a greater difference between measurement and theory than is present with the machined brass. Also, since the orientation of the verification standard was changed between measurements, non-uniformity will increase the measurement standard deviation. Regardless, the agreement between experiment and theory is quite good, providing a second fabrication option besides machining.

Figure 6.8 Relative permittivity of the 3-D-printed WR-284 verification standard.

Figure 6.9 Measurement error and standard deviation of the measured permittivity for the 3-D-printed WR-284 verification standard.

Figure 6.10 Relative permeability of the 3-D-printed WR-284 verification standard.

Figure 6.11 Measurement error and standard deviation of the measured permeability for the 3-D-printed WR-284 verification standard.

6.5 Conclusions

A process has been presented to design and evaluate waveguide standards for verifying the electric and magnetic properties of materials that are extracted using the Nicolson-Ross-Weir method. Each standard consists of a slotted metallic block that is inserted into the waveguide in the same manner as actual sample materials. Thus, the steps used in the verification process mimic the steps taken to measure a sample under test. Because the verification standards are made of metal, they can be machined to high accuracy, and are thus highly repeatable. Two examples are presented for S-Band waveguides, with two different methods of fabrication. In each case the designs are determined by using a genetic algorithm to match the extracted parameters found from theoretical data to chosen target values. The material parameters extracted from measurements of the fabricated standards match well with those determined from theoretical data, and the relatively small standard deviations from repeated measurements demonstrate that consistent and repeatable results may be obtained using simple construction methods, such as machining or 3-D printing.

While 3-D printing offers great weight and cost savings, the manufacturing tolerances are not yet as tight as for the machining process. Care is needed to ensure that the metallic coating is suitable and that the desired shape of the part is properly produced. The durability of the 3-D-printed standards is also not known. It is suspected that over time the metallic coating may need to be reapplied due to wear. Although the copper used in the present process is subject to oxidation, a thin coating of a non-reactive metal such as gold might be used to protect the standard. As 3-D printing processes are improved, it is expected that the manufacturing tolerances will become comparable to more mature manufacturing techniques, and that direct 3-D printing of metallic parts may become affordable.

CHAPTER 7

Designing Optimized Structures to Benefit the Parameter Extraction of Conductor-Backed Materials

7.1 Introduction

As was shown in Chapter 3, it is not possible to carry out parameter extraction for conductor-backed samples using a single-step waveguide with two reflection-type measurements. Instead, a sample holder composed of multiple steps onto which a conductor-backed sample can be placed is considered.
Unlike the single-step design, this approach utilizes both reflection and transmission measurements. Designs are identified using an evolutionary algorithm (EA) programmed to assign fitness based on the accuracy of the extractions carried out using synthetic measured data.

In addition to the specially designed sample holders, this project required the development of a quicker and more reliable way to extract the parameters of the samples being measured, independent of which holder design is being used. The calculation of scattering parameters is accomplished using mode matching, while the inverse problem is approached with a hybrid of an evolutionary algorithm and the secant method. The use of a very simple EA to generate initial guesses for the secant method significantly improves the stability of the solution process, even with measurement error. Since the EA is only run for a few generations, it does not add a large amount of time to the extraction process.

Analysis of the effectiveness of this technique was carried out by comparing with the results for the simplest possible structure. This baseline design is a simple conducting pedestal on which the sample rests. The conductor backing is used as a part of the waveguide and should leave no gaps with the rest of the waveguide wall. The ultimate purpose of the project is to evolve a structure which minimizes the effect of measurement error on the extracted parameters for these conductor-backed samples.

7.2 The Extraction Algorithm

The process of solving the inverse problem is carried out in two stages. First, a very simple EA is utilized to identify an appropriate initial guess for the values of the parameters ǫg and µg. Next, a secant method implementation is used to identify the values of ǫ and µ that solve the system

(S11)m − S11(ǫ, µ) = 0,   (7.1)
(S21)m − S21(ǫ, µ) = 0,   (7.2)

starting with the initial guesses ǫg and µg. Here (S11)m is the measured value of S11 and (S21)m is the measured value of S21, while S11(ǫ, µ) and S21(ǫ, µ) are the theoretical values found using mode matching.

For use in the design EA, all of the applications of the algorithm are carried out using synthetic measured data. Pseudo-Gaussian noise is added to the scattering parameters calculated using the true parameter values, just as was done in Chapter 6. These noisy scattering parameters are then used in place of measured data. The characteristics of the noise were based on the error reported by the network analyzer. The standard deviation of |S11| is 0.004, the standard deviation of the phase of S11 is 0.8 deg., the standard deviation of |S21| is 0.004, and the standard deviation of the phase of S21 is 2.0 deg.

7.2.1 The Guess Generating EA

To determine a reasonable guess of ǫ and µ for the material under test, a very simple EA is applied for a limited number of generations. Once the guess is determined, a secant method is employed to actually solve for the parameters. A population of 20 guesses is used, with parameter values bounded by values provided by the user. When used with the design EA, 5 generations were carried out before switching to the secant method, unless an acceptable guess was generated earlier.

7.2.2 Guess Generating Algorithm, Chromosome Structure

Since the object of the algorithm is to identify the parameters ǫ and µ, the genes of the chromosome are the four real-number components that compose the two complex parameters. Since there are an infinite number of possible parameter values, this is a difficult problem for the EA to solve.
Upper and lower bounds are provided to prevent wildly inaccurate guesses from being considered and wasting computational time.

7.2.3 Guess Generating Algorithm, Fitness Measures

The fitness of the guesses is defined as the maximum of two expressions. The first expression is the dB magnitude of the difference between the guess-generated S11 and the measured S11. The second expression is a weighted sum of two terms. The first term is the dB magnitude of the difference between the guess-generated S21 and the measured S21. The second term is 1/100 of the larger of two quantities: the dB magnitude of the difference between the real parts of S11 and (S11)m, or the dB magnitude of the difference between the imaginary parts of S11 and (S11)m. This simple fitness function was chosen to moderate the number of S-parameter calculations required. Ultimately, the quality of the guess is determined by its worst scattering parameter match. Note that the algorithm is set to terminate before the generation limit if the fitness of the best member of the previous generation falls below −40 dB (computed using 20 log10(x)).

7.2.4 Guess Generating Algorithm, Selection and Reproduction

There are three different mechanisms used to produce children for the EA. One child is an exact copy of the most fit member of the previous generation. The second child is the gene-by-gene average of the five most fit members of the previous generation. The third mechanism uses the second child as a template. The children generated from this third method are randomly distributed about the template, where each gene is taken from a pseudo-Gaussian distribution centered on the gene from the template with a standard deviation of

(Up − Lp)/(2 Gn)   for p = ǫR, µR,
(Up − Lp)/(200 Gn)   for p = ǫI, µI,   (7.3)

where Gn is the generation number, Up is the parameter-specific upper bound, and Lp is the parameter-specific lower bound. This gradual reduction of the standard deviation with increasing generation is used to help the algorithm converge quickly. Since the imaginary parts are smaller for many materials, the variation in these genes needs to shrink much faster.

7.2.5 Guess Generating Algorithm, Mutation

For use within the framework of the EA implemented to design the measurement structures, computational speed is essential. For this reason, no additional mutation was applied. This can lead to the guess generator not finding the true region for the best guess of ǫ and µ for extractions with larger amounts of noise. Supplying a bad guess to the secant method then results in a lack of convergence to the correct parameter values. A safety mechanism was built into the design EA to compensate for this, which will be described later.

For use as an extraction routine with actual measured data, a simple pseudo-Gaussian mutation operator is applied to one fourth of the children created each generation. This wider-scale mutation applied to a subset allows more confidence in the guesses created, at the cost of additional computational time. A few additional generations are needed to compensate for the added convergence difficulty (caused by more children potentially being created far from the optimal region and wasting computational time).

7.2.6 The Secant Method as Implemented

The general explanation of the secant method is given in Chapter 5. The specific application used for this project is described here. First, two functions are defined.
These are

f(ǫ, µ) = S11(ǫ, µ) − (S11)m,   (7.4)
g(ǫ, µ) = S21(ǫ, µ) − (S21)m.   (7.5)

Next, a scale factor and two step sizes are defined:

dZ = 2 max(|f|, |g|),   (7.6)
dǫ = (1 + j) dZ,   (7.7)
dµ = (1 − j) dZ.   (7.8)

Then four numerical partial derivatives are calculated. These are as follows:

fǫ = [f(ǫ + dǫ, µ) − f(ǫ − dǫ, µ)] / (2 dǫ),   (7.9)
fµ = [f(ǫ, µ + dµ) − f(ǫ, µ − dµ)] / (2 dµ),   (7.10)
gǫ = [g(ǫ + dǫ, µ) − g(ǫ − dǫ, µ)] / (2 dǫ),   (7.11)
gµ = [g(ǫ, µ + dµ) − g(ǫ, µ − dµ)] / (2 dµ).   (7.12)

The step sizes of the first cycle are

dǫ = dZ [−f(ǫ, µ) gµ + g(ǫ, µ) fµ] / [fǫ gµ − gǫ fµ],   (7.13)
dµ = dZ [−g(ǫ, µ) fǫ + f(ǫ, µ) gǫ] / [fǫ gµ − gǫ fµ].   (7.14)

Then the next approximation of ǫ and µ is calculated as

ǫ = ǫ + dǫ,   (7.15)
µ = µ + dµ.   (7.16)

After checking to see if the convergence criteria have been met, the cycle repeats with Equations (7.9)-(7.12),

dǫ = [−f(ǫ, µ) gµ + g(ǫ, µ) fµ] / [fǫ gµ − gǫ fµ],   (7.17)
dµ = [−g(ǫ, µ) fǫ + f(ǫ, µ) gǫ] / [fǫ gµ − gǫ fµ]   (7.18)

(provided the denominator is non-zero; if it is zero, then the numerators are multiplied by 1 × 10−15 instead), and Equations (7.15) and (7.16). This cycle is repeated until convergence occurs or a maximum allowed number of iterations is reached.
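Before moving to the design algorithm, a minimal code sketch of this iteration is given below. It is written in Python for illustration only; sparams(eps, mu) is a hypothetical stand-in for the mode-matching forward solver, the convergence test is reduced to a fixed tolerance on |f| and |g|, and the finite-difference steps of later cycles reuse the previous update steps, which is how the equations above are read here.

```python
def extract_eps_mu(s11_m, s21_m, sparams, eps, mu, tol=1e-8, max_iter=50):
    """Two-variable secant iteration of Equations (7.4)-(7.18) (illustrative sketch).

    `sparams(eps, mu)` is a hypothetical forward solver returning (S11, S21) from
    mode matching; `eps` and `mu` are the initial guesses supplied by the simple EA.
    """
    def f_g(e, m):
        s11, s21 = sparams(e, m)
        return s11 - s11_m, s21 - s21_m              # Equations (7.4) and (7.5)

    f, g = f_g(eps, mu)
    dZ = 2.0 * max(abs(f), abs(g))                   # Equation (7.6)
    de, dm = (1 + 1j) * dZ, (1 - 1j) * dZ            # Equations (7.7) and (7.8)
    for it in range(max_iter):
        if max(abs(f), abs(g)) < tol:
            break
        # Numerical partial derivatives, Equations (7.9)-(7.12)
        fe = (f_g(eps + de, mu)[0] - f_g(eps - de, mu)[0]) / (2 * de)
        fm = (f_g(eps, mu + dm)[0] - f_g(eps, mu - dm)[0]) / (2 * dm)
        ge = (f_g(eps + de, mu)[1] - f_g(eps - de, mu)[1]) / (2 * de)
        gm = (f_g(eps, mu + dm)[1] - f_g(eps, mu - dm)[1]) / (2 * dm)
        det = fe * gm - ge * fm
        scale = dZ if it == 0 else 1.0               # dZ scales only the first step
        num_e = scale * (-f * gm + g * fm)           # numerator of (7.13)/(7.17)
        num_m = scale * (-g * fe + f * ge)           # numerator of (7.14)/(7.18)
        if det != 0:
            de, dm = num_e / det, num_m / det
        else:
            de, dm = num_e * 1e-15, num_m * 1e-15    # zero-denominator safeguard
        eps, mu = eps + de, mu + dm                  # Equations (7.15) and (7.16)
        f, g = f_g(eps, mu)
    return eps, mu
```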
7.3 The Design Algorithm

The core function of the design EA is to identify potential structures that minimize the error of the parameters extracted from measured data. Once these prospective designs are identified, the EA must look for incremental improvements. For this dissertation, an S-Band waveguide (1.34 inches by 2.84 inches, 2.6-3.95 GHz) is used to demonstrate the process, but any waveguide band could potentially be used. The dimensions of the designs would need to be determined for each band separately, but a proportional scaling should provide a reasonable initial starting point.

7.3.1 Design Algorithm, Chromosome Structure

As with the verification standards presented in Chapter 6, the optimization structures fully span the width of the waveguide but only partially fill the height. They are also modeled as a perfect electric conductor. The length of the structures was arbitrarily chosen to be similar to the verification standards because of the experience gained from working with those dimensions. Longer or shorter lengths could be chosen, but the reliability of manufacture for such dimensions would need to be investigated. The heights of the waveguide sections are determined by an EA such that the material parameters estimated for the test samples have the lowest possible error for a typical experiment. Asymmetric designs may be investigated in the future, but for this dissertation, mirrored pairs with S11 = S22 are again used for these structures. The following additional constraints are applied to the designs:

1. The height of the waveguide is discretized into NH identical values of height ∆H, and the vertical dimension of each section is chosen as a multiple of ∆H. The maximum allowed height is (NH − 2)∆H while the minimum allowed height is ∆H. This ensures that the structure does not block the wave from interrogating the sample and that it may be machined as one continuous piece. As with the verification standards, 20 divisions of the height provided sufficient flexibility to allow good designs to be found.

2. The sample holder sits in the center of the design and has a 1.215 inch height and a 0.5 inch length. This is the same as the baseline design with which all results will be compared. The sample thickness of 0.125 inch fills out the rest of the full height of the S-Band waveguide, which is 1.34 inches.

Eight segments (four on each side of the sample holder) were considered to be a reasonable starting point for the designs. The outermost segments were chosen to be 0.15 inch in length down the waveguide, while the inner six were each chosen to be 0.1 inch long. Thus the total length of the structure is always 1.4 inches.

7.3.2 Design Algorithm, Fitness Measures

A two-dimensional fitness measure was selected for design of the structures. Using two fitness functions allows the algorithm to simultaneously optimize the structures for the reduction of error in the extraction of ǫ and µ. Both fitness functions are constructed from weighted sums. Three frequencies, which span the frequency band, were used to calculate the fitness values for this project. These frequencies are 2.6, 3.35, and 3.95 GHz. In doing actual characterization, it is common to average as many as 64 scattering parameter data sets, and such measurements are repeated at least 5 times to establish ǫ and µ. Since the synthetic measured data is generated randomly, 5 such measurements were averaged, and the extracted parameters were calculated from this average. This was felt to adequately reflect the randomness present in real measurements for the purpose of designing the error-reducing structures. More measurements would potentially produce better results, but the computation time of the fitness function goes up with each additional synthetic measurement used. Ultimately, this was considered to be a reasonable compromise.

The dB magnitude (computed from 20 log10(x)) of the difference between the extracted and true values is calculated for the real and imaginary parts of ǫ and µ. The cap for each frequency and parameter component is chosen as −200 dB to avoid a single good match from overwhelming the other terms. The terms for ǫ are gathered for the first fitness function and the terms for µ form the second. In the event that the extraction algorithm is unable to converge to an answer for a given frequency, the parameters are both specified as 0. For all but lossless materials this results in poor fitness terms, and the design is thus punished for failing to allow a valid extraction.

7.3.3 Design Algorithm, Selection and Reproduction

Two distinct methods of selection were applied simultaneously. This was done to take advantage of both diversity preservation and elitism at the same time. The algorithm used here was a simplified implementation of the NSGA-II algorithm [16] developed for this project.

The first selection method is referred to in this dissertation as vector selection. The basic idea is to create unit vectors representing different weighting factors for linear combinations of the two fitness functions. These are then used to find representatives among the population according to the expression

[(v1, v2)/|(v1, v2)|] · [(f1, f2)/|(f1, f2)|] √|(f1, f2)| = cos(θ) √|(f1, f2)|,   (7.19)

where θ is the angle between the fitness vector (f1, f2) for a given population member and the unit weight vector (v1, v2) against which it is being considered. Taking the inner product of the two vectors and dividing by their magnitudes gives the cosine of the angle between them. Multiplying by the square root of the magnitude of the fitness vector scales this by a portion of the fitness vector's magnitude. This quantity is maximized when the population member is very fit and the angle with the unit weighting vector is minimized.
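As a small illustration of this scoring, the expression in Equation (7.19) can be computed as in the Python sketch below; the seven weight vectors are assumed here to be evenly spaced in angle between (1, 0) and (0, 1), which the text does not specify.

```python
import numpy as np

def vector_score(fitness, weight):
    """Equation (7.19): cos(theta) scaled by the square root of the fitness magnitude."""
    f = np.asarray(fitness, dtype=float)   # (f1, f2) for one population member
    v = np.asarray(weight, dtype=float)    # one of the unit weight vectors
    cos_theta = np.dot(v, f) / (np.linalg.norm(v) * np.linalg.norm(f))
    return cos_theta * np.sqrt(np.linalg.norm(f))

# Seven weight vectors from (1, 0) to (0, 1); even angular spacing assumed here.
angles = np.linspace(0.0, np.pi / 2.0, 7)
weights = np.column_stack((np.cos(angles), np.sin(angles)))
```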
This quantity is calculated for each member of the population using each of seven weight vectors from (1, 0) to (0, 1). The population is sorted according to this scale for each vector, and the member with the largest value is stored in an archive. Note that this could produce duplication, since one member may have the largest value for more than one weight vector, especially if it is very fit and thus has a large value for |(f1, f2)|.

The second selection is a simplified version of the non-dominated sorting concept. This concept was explained in Chapter 5. Ordinarily, the members of the population would be separated into front lines of non-domination, and members would be selected by front order up to a set number of members. To streamline this, the algorithm instead uses a scale based on the number of members that dominate the member in question and archives the top seven members after sorting. The list is sorted according to descending order of fitness with respect to f1. Then, going through the list in order, all members after the i-th member are compared to member i. If they are dominated, they are assigned a domination count. If they are weakly dominated, they are assigned a half count. To avoid duplicate members, a member found to be exactly equal to another point is assigned 14 domination counts instead of 1, thus ensuring it will never be taken in the archive or to the next generation.

The 7 members of the vector archive are preserved to the next generation directly. The top 14 members according to the domination sorting are preserved directly as well. The remaining 39 population slots are filled by children generated using the 7 vector leaders and the top 7 by domination sorting. First, 49 prospective children are generated by taking all possible pairings of the parents from each selection method. There is a 50% chance of inheriting a gene from either parent. Each gene has a pseudo-Gaussian mutation applied with a standard deviation of 10 (this is still bounded by the constraints as before). The prospective children are evaluated for the fitness parameters and sorted by the domination method. The top 39 members are then added to the population.

7.3.4 Additional Comments About the Design Algorithm

For this project, a population of 60 was used. Typical runs of the EA took 6-8 hours on a laptop computer using an Intel Core i7-3612QM CPU @ 2.1 GHz with 8 GB of RAM and Windows 7 (64-bit). The results were acceptable for the purposes of this project, but further development of the algorithm should improve run time.

7.4 Evaluating Prospective Designs

7.4.1 Baseline Design

With a reliable way to determine ǫ and µ from the scattering parameters and a design algorithm able to generate possible error-reducing structures, the next step in the project was to establish a baseline of comparison for the designs. The baseline chosen was the simplest structure available, a single pedestal on which the sample sits. This also serves as the sample holder for all of the EA-selected designs; thus, if the designs are to be useful, they ought to provide more accurate extractions than the sample holder alone.

Investigation of the baseline was carried out using three different sets of material parameters and synthetic measurement data. The three materials were Teflon (ǫ = (2.1, −0.004), µ = (1.0, 0.0)); MagMat1 (ǫ = (7.258, −0.056), µ = (0.6, −0.0761)), a hypothetical material with a low magnetic loss; and VeroWhite (ǫ = (2.882, −0.06751), µ = (1.0, 0.0)).
VeroWhite is a material used in an available 3D printer, which has been characterized for S-Band. While in reality the parameters do change with frequency, for this project they were treated as constant. Since the changes are usually slight over the band, this is an acceptable approximation. A more comprehensive set of frequencies was used to establish the error than the three used for the EA. The frequencies used are 2.6, 2.85, 3.1, 3.35, 3.6, 3.85, and 3.95 GHz.

7.4.2 Test Methodology

At each frequency, the test algorithm attempts to extract ǫ and µ 30 times, as was done for the design EA. The average of the successful attempts is computed for each parameter. The error is then computed as the standard deviation of the successful extractions, averaged over the 7 frequencies. Results for the baseline are presented in Table 7.1; note that the experiment was repeated once.

Table 7.1 Results for the baseline design: average of the standard deviation.

Material    Run 1, ǫR   Run 2, ǫR   Run 1, ǫI   Run 2, ǫI
Teflon      0.0387      0.0379      0.0169      0.0174
MagMat1     0.3191      0.3604      0.1475      0.1627
VeroWhite   0.0673      0.0635      0.0325      0.0280

Material    Run 1, µR   Run 2, µR   Run 1, µI   Run 2, µI
Teflon      0.0256      0.0272      0.0118      0.0137
MagMat1     0.0224      0.0221      0.121       0.0117
VeroWhite   0.0234      0.0229      0.0111      0.0107

7.4.3 EA Results

When using an EA it is essential to remember that the results obtained are to some extent random. For this reason, it was decided that several materials would be used as the material under test during multiple runs of the EA. Different sets of material parameters provide different conditions for the algorithm's operators to function within. This has a tendency to emphasize different features and characteristics for an optimal solution. It is also important to realize that it is highly unlikely that one design will truly be the best for all possible samples. The purpose of this project was simply to find some acceptable designs which perform better than the baseline design.

7.4.4 EA Results When Optimizing With Teflon

The design EA was run for 20 generations, using Teflon as the test specimen. Run time was approximately 24 hours. Figure 7.1 shows the optimal front of the population in generation 0 as compared to the optimal front of the last generation (recall that the algorithm is seeking a minimum for these fitness functions). Figure 7.2 shows the last non-dominated front versus the general population for the last generation, along with some of the vector sorting lines for reference.

Figure 7.1 Teflon optimal front progression, after 20 generations.

Figure 7.2 Teflon optimal front with general population, after 20 generations.

After reviewing the results of the EA run, a single design was selected for further testing and comparison to other designs. Figure 7.3 shows a rough drawing of the design, which is not to scale. Results from statistical analysis, as was done for the baseline design, are given in Table 7.2 along with actual design dimensions. Recall that segments fully fill the cross-sectional width of the waveguide.

Figure 7.3 Teflon optimal design, rough sketch.

Table 7.2 Dimensions and results for the Teflon design: average of the standard deviation.
Step Segment     Guide Length   Height
1                0.15 inch      1.139 inches
2                0.10 inch      1.206 inches
3                0.10 inch      0.84 inch
4                0.10 inch      1.005 inches
Sample Holder    0.5 inch       1.215 inches
5                0.10 inch      1.005 inches
6                0.10 inch      0.84 inch
7                0.10 inch      1.206 inches
8                0.15 inch      1.139 inches

Run   ǫR       ǫI       µR       µI
1     0.0035   0.0167   0.0038   0.0078
2     0.0033   0.0158   0.0037   0.0075

7.4.5 EA Results When Optimizing With MagMat1

Next the EA was run using MagMat1 as the test specimen. This material does not actually exist, but it was of interest for use as a theoretical sample. Run time was again approximately 24 hours. Figure 7.4 shows the optimal front of the population in generation 0 as compared to the optimal front of the last generation. Figure 7.5 shows the last non-dominated front versus the general population for the last generation, along with some of the vector sorting lines for reference.

Figure 7.4 MagMat1 optimal front with general population, after 20 generations.

Figure 7.5 MagMat1 optimal front progression, after 20 generations.

After reviewing the results of the EA run, a single design was selected for further testing and comparison to other designs. Figure 7.6 shows a rough drawing of the design, which is not to scale. Results from statistical analysis, as was done for the baseline design, are given in Table 7.3 along with actual design dimensions.

Figure 7.6 MagMat1 optimal design, rough sketch.

Table 7.3 Dimensions and results for the MagMat1 design: average of the standard deviation.

Step Segment     Guide Length   Height
1                0.15 inch      1.206 inches
2                0.10 inch      1.206 inches
3                0.10 inch      1.206 inches
4                0.10 inch      0.603 inch
Sample Holder    0.5 inch       1.215 inches
5                0.10 inch      0.603 inch
6                0.10 inch      1.206 inches
7                0.10 inch      1.206 inches
8                0.15 inch      1.206 inches

Run   ǫR       ǫI       µR       µI
1     0.0661   0.1341   0.0054   0.0098
2     0.0647   0.1403   0.0050   0.0100

7.4.6 EA Results When Optimizing With VeroWhite

Finally, the EA was run using VeroWhite as the test specimen. Run time was again approximately 24 hours. Figure 7.7 shows the optimal front of the population in generation 0 as compared to the optimal front of the last generation (recall that the algorithm seeks a minimum for these fitness functions). Figure 7.8 shows the last non-dominated front versus the general population for the last generation, along with some of the vector sorting lines for reference.

Figure 7.7 VeroWhite optimal front progression, after 20 generations.

Figure 7.8 VeroWhite optimal front with general population, after 20 generations.

After reviewing the results of the EA run, a single design was selected for further testing and comparison to other designs. Figure 7.9 shows a rough drawing of the design, which is not to scale. Results from statistical analysis, as was done for the baseline design, are given in Table 7.4 along with actual design dimensions.

Figure 7.9 VeroWhite optimal design, rough sketch.

Table 7.4 Dimensions and results for the VeroWhite design: average of the standard deviation.

Step Segment     Guide Length   Height
1                0.15 inch      1.206 inches
2                0.10 inch      1.206 inches
3                0.10 inch      1.206 inches
4                0.10 inch      1.206 inches
Sample Holder    0.5 inch       1.215 inches
5                0.10 inch      1.206 inches
6                0.10 inch      1.206 inches
7                0.10 inch      1.206 inches
8                0.15 inch      1.206 inches

Run   ǫR       ǫI       µR       µI
1     0.0039   0.0555   0.0022   0.0107
2     0.0038   0.0453   0.0020   0.0098

7.5 Comparison of Designs

To select a design for fabrication, results for all three materials in each design were compared. The results of this analysis are provided in Table 7.5.
All three designs compare favorably with the baseline design, in some cases being an order of magnitude lower in standard deviation. As an example, the standard deviation in the real part of ǫ for the baseline with MagMat1 was approximately 0.33, while using the VeroWhite optimized design for the same material produces a standard deviation of approximately 0.03. After comparing the results and taking into account ease of manufacture, the VeroWhite optimized design was chosen for eventual fabrication. The results of attempting to fabricate this assembly using a 3D printer are discussed in the appendix to this chapter.

Table 7.5 Comparison of Designs: average of the standard deviation.

Material, Teflon
Design      ǫR       ǫI       µR       µI
Teflon      0.0035   0.0167   0.0038   0.0078
MagMat1     0.0297   0.0945   0.0036   0.0068
VeroWhite   0.0056   0.0222   0.0036   0.0071

Material, MagMat1
Design      ǫR       ǫI       µR       µI
Teflon      0.0046   0.0188   0.0065   0.0163
MagMat1     0.0647   0.1403   0.0050   0.0100
VeroWhite   0.0080   0.0285   0.0057   0.0149

Material, VeroWhite
Design      ǫR       ǫI       µR       µI
Teflon      0.0026   0.0368   0.0023   0.0109
MagMat1     0.0306   0.2463   0.0032   0.0126
VeroWhite   0.0039   0.0555   0.0022   0.0107

7.6 Conclusions

The extraction algorithm is relatively fast, and can be used with actual measurement data to determine the material parameters of unknown samples using the structures identified by the design algorithm. While computationally intense, the algorithm for identifying the structures is easy to implement and can easily be used in combination with other materials as the material under test.

Utilizing the EA to design error-reducing structures for conductor-backed measurements has produced excellent results. The structures identified theoretically provide nearly an order of magnitude lower error in extractions for the real parts of the parameters than the baseline design. This is done without significantly altering the error in estimation of the imaginary parts from the baseline design. Further, these designs provide a way to extract the parameters of conductor-backed materials without removing the conductor backing, an extraction that is reasonably free from error provided the material is homogeneous and isotropic.

APPENDIX

Attempt to Fabricate the VeroWhite Optimized Design Using 3D Printing

An attempt was made to fabricate the VeroWhite optimized sample holder design from Chapter 7; see Table 7.4 for the part dimensions. This particular design was chosen for the quality of the extracted parameters when used with synthetic measurement data. As with the waveguide standards, the part was printed as a polymer object, sputtered with a 300 nm layer of titanium, and electroplated with 5-7 µm of copper. As before, this was done to apply a thin layer of copper with thickness much larger than the S-Band skin depth of approximately 1.3 microns.

The part was printed in two sections with several interlocking structures; see Figure A.10 and Figure A.11. With these structures, the separate sections could be snapped together into a single piece, as can be seen in Figure A.12. Flanges and bolt holes were added to the exterior to allow connection to the standard rectangular waveguides.

Figure A.10 Base portion of the 3D printed structure.

Figure A.11 Top portion of the 3D printed structure.

Figure A.12 Combined portions of the 3D printed structure.

A hole 0.5 inch by 2.84 inches was left in the top section. This is exactly equal in size to the samples to be measured; see Figure A.13.
When the sample is placed into the sample holder with the conductor backing facing upward, the top surface of the waveguide should align with the backing to form a continuous conducting wall.

Figure A.13 Hole for insertion of sample into the 3D printed structure.

Attempts to extract permittivity and permeability from the measured scattering parameters were unsuccessful. A careful examination of the fabricated structure revealed clear gaps in the metallization. The parts did not match up completely, which left gaps in the sides of the waveguide section; see Figure A.14. Also, there was a noticeable distortion in the structure's shape. This tilting of the part caused several of the bolt holes to become misaligned. This misalignment made the gaps on the sides even worse; see Figure A.15. Direct comparison of the scattering parameters to theoretical values showed large differences.

Figure A.14 Gaps on the sides of the 3D printed structure.

Figure A.15 Gaps on the sides of the 3D printed structure after being bolted to the standard waveguides.

In an attempt to mitigate these flaws, copper tape was used to hold the sides of the structure near the waveguide portion, while painter's tape was used to hold the flange sides together. Figure A.16 shows the result. Unfortunately, the scattering parameters remain very different from theory.

Figure A.16 Taping the sides of the 3D printed structure.

The next attempt at fabricating the design will involve printing an insertable version of the sample holder, which will fit inside an existing section of S-Band waveguide. Given the experience with the waveguide standards, these should prove far easier to fabricate and to metallize properly.

BIBLIOGRAPHY

[1] A. M. Nicolson and G. F. Ross. Measurement of the intrinsic properties of materials by time-domain techniques. IEEE Transactions on Instrumentation and Measurement, IM-19(4):377–382, November 1970.

[2] W. B. Weir. Automatic measurement of complex dielectric constant and permeability at microwave frequencies. Proceedings of the IEEE, 62(1):33–36, January 1974.

[3] R. A. Fenner, E. J. Rothwell, and L. L. Frasch. A comprehensive analysis of free-space and guided-wave techniques for extracting the permeability and permittivity of materials using reflection-only measurements. Radio Science, 47, February 2012.

[4] Benjamin Reid Crowgey. Rectangular Waveguide Material Characterization: Anisotropic Property Extraction and Measurement Validation. PhD dissertation, Michigan State University, 2013.

[5] R. F. Harrington. Time-Harmonic Electromagnetic Fields. IEEE Press, Piscataway, NJ, 2001.

[6] M. W. Hyde IV and M. J. Havrilla. A nondestructive technique for determining complex permittivity and permeability of magnetic sheet materials using two flanged rectangular waveguides. Progress in Electromagnetics Research, 79:367–386, 2008.

[7] Edward J. Rothwell and Michael J. Cloud. Electromagnetics. Taylor & Francis Group, LLC, Boca Raton, FL, 2nd edition, 2009.

[8] Kalyanmoy Deb. Multi-Objective Optimization using Evolutionary Algorithms. John Wiley & Sons, Inc., Chichester, United Kingdom, 2nd edition, 2001.

[9] Erwin Kreyszig. Advanced Engineering Mathematics. John Wiley & Sons, Inc., Hoboken, NJ, 9th edition, 2006.

[10] J. E. Dennis Jr. and Robert B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice Hall, Inc., Englewood Cliffs, NJ, 2006.

[11] J. Kiefer. Sequential minimax search for a maximum.
Proceedings of the American Mathematical Society, 4(3):502–506, 1953.

[12] Francis B. Hildebrand. Introduction to Numerical Analysis. Dover Publications Inc., Mineola, NY, 2nd edition, 1974.

[13] Randy L. Haupt and Sue Ellen Haupt. Practical Genetic Algorithms. John Wiley & Sons, Inc., New York, NY, 1998.

[14] B. R. Crowgey, J. Tang, E. J. Rothwell, B. Shanker, and L. C. Kempel. A waveguide verification standard design procedure for the microwave characterization of magnetic materials. Progress in Electromagnetics Research, 150:29–40, 2015.

[15] E. J. Rothwell, J. L. Frasch, S. M. Ellison, P. Chahal, and R. O. Ouedraogo. Analysis of the Nicolson-Ross-Weir method for characterizing the electromagnetic properties of engineered materials. Progress in Electromagnetics Research, 157:31–47, 2016.

[16] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182–197, April 2002.