Search results
(1 - 20 of 87)
- Title
- Theoretical analysis of electronic, thermal, and mechanical properties in gallium oxide
- Creator
- Domenico Santia, Marco
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
In recent years, Ga2O3 has proven to be a promising semiconductor candidate for a wide array of power electronics and optoelectronics devices due to its wide bandgap, high breakdown voltage, and growth potential. However, the material suffers from a very low thermal conductivity and subsequent self-heating issues. Additionally, the complexity of the crystal structure, coupled with the lack of empirical data, has restricted the predictive power of modelling material properties using traditional methods. The objective of this dissertation is to provide a detailed theoretical characterization of material properties in the wide bandgap semiconductor Ga2O3 using first-principles methods requiring no empirical inputs. Lattice thermal conductivity of bulk β-Ga2O3 is predicted using a combination of first-principles determined harmonic and anharmonic force constants within a Boltzmann transport formalism, revealing a distinct anisotropy and a strong contribution to thermal conduction from optical phonon modes. Additionally, the quasiharmonic approximation is utilized to estimate volumetric effects such as the anisotropic thermal expansion.

To evaluate the efficacy of heat removal from β-Ga2O3, the thermal boundary conductance is computed within a variance-reduced Monte Carlo framework utilizing first-principles determined phonon-phonon scattering rates for layered structures containing chromium or titanium as an adhesive layer between a β-Ga2O3 substrate and an Au contact. The adhesive layer improves the overall thermal boundary conductance significantly, with the maximum value found using a 5 nm layer of chromium, exceeding the more traditional titanium adhesive layers by a factor of 2. This indicates the potential of heatsink-based thermal management as an effective solution to the self-heating issue.

Additionally, this dissertation provides a detailed characterization of the effect of strain on fundamental material properties of β-Ga2O3. Due to the highly anisotropic nature of the crystal, the effect strain can have on electronic, mechanical, and optical properties is largely unknown. Using the quasi-static formalism within a DFT framework and the stress-strain approach, the effect of strain is evaluated and combined with the anisotropic thermal expansion to incorporate an accurate temperature dependence. It is found that the elastic stiffness constants do not vary significantly with temperature. The computed anisotropy is unique and differs significantly from similar monoclinic crystal structures, indicating the important role of the polyhedral linkage in the reported anisotropy in material properties. Lastly, the dependence of the dielectric function with respect to strain is evaluated using a modified stress-strain approach. This elasto-optic, or photoelastic, effect is found to be significant for sheared crystal configurations. This opens up a potentially unexplored application space for Ga2O3 as an acousto-optic modulation device.
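In the relaxation-time approximation, the Boltzmann-transport prediction of lattice thermal conductivity reduces to a sum over phonon modes, κ_αβ = (1/V) Σ_λ C_λ v_λα v_λβ τ_λ. A minimal sketch with invented mode data (the heat capacities, group velocities, lifetimes, and volume below are illustrative placeholders, not values from the dissertation) shows how direction-dependent group velocities produce the kind of anisotropy the abstract describes:

```python
import numpy as np

# Toy phonon-mode data: per-mode heat capacity C (J/K), group velocity v (m/s),
# and lifetime tau (s). All numbers here are illustrative placeholders.
rng = np.random.default_rng(1)
n_modes = 200
C = np.full(n_modes, 1.0e-23)       # heat capacity per mode
v = rng.normal(0.0, 1.0, (n_modes, 3))
v[:, 0] *= 5000.0                   # faster group velocities along x ...
v[:, 1] *= 2000.0                   # ... slower along y -> anisotropic kappa
v[:, 2] *= 3000.0
tau = np.full(n_modes, 1.0e-11)     # single relaxation time for simplicity
V = 1.0e-27                         # crystal volume (m^3)

def kappa(alpha: int, beta: int) -> float:
    """RTA lattice thermal conductivity component (W/m/K)."""
    return float(np.sum(C * v[:, alpha] * v[:, beta] * tau) / V)

kxx, kyy = kappa(0, 0), kappa(1, 1)
print(f"kappa_xx = {kxx:.1f}, kappa_yy = {kyy:.1f}, ratio = {kxx / kyy:.1f}")
```

The anisotropy ratio here simply tracks the square of the velocity scaling; the dissertation's actual calculation obtains C, v, and τ for every mode from first-principles force constants rather than toy values.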
- Title
- AUTOMATED PET/CT REGISTRATION FOR ACCURATE RECONSTRUCTION OF PET IMAGES
- Creator
- Khurshid, Khawar
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
-
The use of a CT attenuation correction (CTAC) map for the reconstruction of PET images can introduce attenuation artifacts due to potential misregistration between the PET and CT data. This misregistration is mainly caused by patient motion and physiological movement of organs during the acquisition of the PET and CT scans. In cardiac exams, the motion of the patient may not be significant, but diaphragm movement during the respiratory cycle can displace the heart by up to 2 cm along the long axis of the body. This shift can project the PET heart onto the lungs in the CT image, thereby producing an underestimated value for the attenuation. In brain studies, patients undergoing a PET scan are often not able to follow instructions to keep their head in a still position, resulting in misregistered PET and CT image datasets. The head movement is quite significant in many cases despite the use of head restraints. This misaligns the PET and CT data, thus creating an erroneous CT attenuation correction map. In such cases, bone or air attenuation coefficients may be projected onto the brain, which causes an overestimation or an underestimation of the resulting CTAC values. To avoid misregistration artifacts and potential diagnostic misinterpretation, automated software for PET/CT registration has been developed that works for both cardiac and brain datasets. This software segments the PET and CT data, extracts the brain or heart surface information from both datasets, and compensates for the translational and rotational misalignment between the two scans. The PET data are reconstructed using the aligned CTAC, and the results are analyzed and compared with the original dataset. This procedure has been evaluated on 100 cardiac and brain PET/CT datasets, and the results show that the artifacts due to misregistration between the two modalities are eliminated after the PET and CT images are aligned.
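Compensating for translational and rotational misalignment between two extracted surfaces is a rigid registration problem. The dissertation's own pipeline is not reproduced here; as a hedged illustration, one standard way to recover a rigid transform between two matched point clouds is the Kabsch/SVD method:

```python
import numpy as np

def rigid_align(P: np.ndarray, Q: np.ndarray):
    """Recover rotation R and translation t such that Q ~= R @ P + t pointwise.

    P, Q: (n, 3) arrays of corresponding surface points.
    Classic Kabsch algorithm: center both clouds, take the SVD of their
    covariance, then fix a possible reflection so that det(R) = +1.
    """
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = qc - R @ pc
    return R, t

# Synthetic check: rotate and shift a point cloud, then recover the transform.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_align(P, Q)
print("rotation recovered:", np.allclose(R_est, R_true))
```

A real PET/CT pipeline must first establish correspondences (e.g. via surface features or iterative closest point), since PET and CT surfaces are not given as matched point pairs.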
- Title
- DOWNLINK RESOURCE BLOCKS POSITIONING AND SCHEDULING IN LTE SYSTEMS EMPLOYING ADAPTIVE FRAMEWORKS
- Creator
- Abusaid, Osama M.
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
-
The present expansion in size and complexity of LTE networks is hindering their performance and reliability. This hindrance is manifested in deteriorating User Equipment throughput and latency as a consequence of deteriorating eNodeB downlink throughput, leading to the need for smart eNodeBs with capabilities for adapting to the changing communication environment. The proposed work aims at developing Self-Organization (SO) techniques and frameworks for LTE networks at the Resource Block (RB) scheduling management level. After reviewing the existing literature on self-organization techniques and scheduling strategies that have recently been implemented in other wireless networks, we identify several contrasting needs that can jointly be addressed. The deployment of the introduced algorithms in the communication network is expected to lead to improved overall network performance. A key feature of the LTE family of networks is the feedback that the cell receives from the users. This feedback includes the downlink channel assessment based on the Channel Quality Indicator (CQI) measured by the User Equipment (UE) in the last Transmission Time Interval (TTI), and it should be the main decision factor in allocating Resource Blocks (RBs) among users. The challenge is how to map the users' data onto the RBs based on the CQI. The thesis advances two approaches toward that end. First, the allocation among the current users for the next TTI should be mapped consistent with the historical CQI feedback received from users over prior transmission durations. This approach also aims at offering a solution to the bottleneck capacity issue in the scheduling of LTE networks. To that end, we present an implementation of a modified Self-Organizing Map (SOM) algorithm for mapping incoming data onto RBs. Such an implementation can handle the collective cell, enabling the cell to become smarter. The criteria for measuring eNodeB performance include throughput, fairness, and the trade-off between these attributes. Second, a promising and complementary approach is to tailor Recurrent Neural Networks (RNNs) to implement optimal dynamic mappings of the RBs in response to the history of CQI feedback. An RNN can build its own internal state over the entire training CQI sequence and consequently make the prediction more viable. With this dynamic mapping technique, the prediction will be more accurate under time-varying channel environments. Overall, the collective cell management would become more intelligent and adaptable to changing environments. Consequently, a significant performance improvement can be achieved at lower cost. Moreover, a general tunability of the scheduling system becomes possible, incorporating a trade-off between system complexity and QoS.
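The SOM idea above can be sketched in miniature: CQI feature vectors are competitively mapped onto a line of units that stand in for resource-block groups. This is a generic textbook SOM on synthetic data, not the modified algorithm the thesis develops; the cluster positions, unit count, and decay schedules are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "channel condition" clusters of 2-D CQI feature vectors.
good = rng.normal([2.0, 2.0], 0.2, size=(100, 2))
bad = rng.normal([-2.0, -2.0], 0.2, size=(100, 2))
data = np.vstack([good, bad])

n_units, n_iter = 10, 500
W = rng.uniform(-1.0, 1.0, (n_units, 2))       # unit weight vectors
units = np.arange(n_units)

for it in range(n_iter):
    x = data[rng.integers(len(data))]
    frac = it / n_iter
    lr = 0.5 * (1.0 - frac)                    # decaying learning rate
    sigma = 3.0 * (1.0 - frac) + 0.5           # decaying neighborhood width
    bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))   # best-matching unit
    h = np.exp(-((units - bmu) ** 2) / (2.0 * sigma**2))  # neighborhood kernel
    W += lr * h[:, None] * (x - W)

bmu_good = int(np.argmin(np.linalg.norm(W - good.mean(axis=0), axis=1)))
bmu_bad = int(np.argmin(np.linalg.norm(W - bad.mean(axis=0), axis=1)))
print("good-channel unit:", bmu_good, " bad-channel unit:", bmu_bad)
```

After training, distinct channel conditions land on distinct units, which is the property a scheduler would exploit when assigning RBs from CQI history.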
- Title
- Adaptive independent component analysis : theoretical formulations and application to CDMA communication system with electronics implementation
- Creator
- Albataineh, Zaid
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
Blind Source Separation (BSS) is a vital unsupervised stochastic area that seeks to estimate the underlying source signals from their mixtures with minimal assumptions about the source signals and/or the mixing environment. BSS has been an active area of research and in recent years has been applied to numerous domains including biomedical engineering, image processing, wireless communications, speech enhancement, and remote sensing. Most recently, Independent Component Analysis (ICA) has become a vital analytical approach in BSS. In spite of active research in BSS, however, many foundational issues remain with regard to convergence speed, performance quality, and robustness in realistic or adverse environments. Furthermore, some of the developed BSS methods are computationally expensive, sensitive to additive and background noise, and not suitable for real-time or real-world implementation.

In this thesis, we first formulate new effective ICA-based measures and their corresponding robust adaptive algorithms for BSS in dynamic "convolutive mixture" environments, and we demonstrate their superior performance to present competing algorithms. We then tailor their application within wireless (CDMA) communication systems and acoustic separation systems. We finally explore a system realization of one of the developed algorithms on ASIC or FPGA platforms in terms of real-time speed, effectiveness, cost, and economies of scale.

We firstly investigate several measures which are more suitable for extracting different source types from different mixing environments in the learning system. ICA for instantaneous mixtures is studied here as an introduction to the more realistic convolutive mixture environments. Convolutive mixtures are investigated in the time/frequency domains, and we demonstrate that our approaches succeed in resolving the standing problem of scaling and permutation ambiguities in present research. We propose a new class of divergence measures for ICA for estimating sources from mixtures. The Convex Cauchy-Schwarz Divergence (CCS-DIV) is formed by integrating convex functions into the Cauchy-Schwarz inequality. The new measure is symmetric and convex with respect to the joint probability, where the degree of convexity can be tuned by a (convexity) parameter. A non-parametric ICA algorithm generated from the proposed divergence is developed, exploiting convexity parameters and employing Parzen window-based distribution estimates. The new contrast function results in effective parametric and non-parametric ICA-based computational algorithms. Moreover, two pairwise iterative schemes are proposed to tackle the high dimensionality of sources. These two pairwise non-parametric ICA algorithms are based on the new high-performance CCS-DIV and enable fast and efficient de-mixing of sources in real-world applications where the dimensionality of the sources is higher than two.

Secondly, the more challenging problem in communication signal processing is to estimate the source signals and their channels in the presence of other co-channel signals and noise without the use of a training set. Blind techniques promise to integrate and optimize wireless communication designs, i.e., equalizers, filters, and combiners, through their potential in suppressing inter-symbol interference (ISI), adjacent-channel interference, co-channel interference, and multiple-access interference (MAI). Therefore, a new blind detection algorithm, based on fourth-order cumulant matrices, is presented and applied to the multi-user symbol estimation problem in Direct Sequence Code Division Multiple Access (DS-CDMA) systems. The blind detection estimates multiple symbol sequences in the downlink of a DS-CDMA communication system using only the received wireless data, without any knowledge of the user spreading codes. The proposed algorithm takes advantage of higher-order cumulant matrix properties to reduce the computational load and enhance performance. In addition, we address the problem of blind multiuser equalization in the wideband CDMA system in a noisy multipath propagation environment. Herein, we propose three new blind receiver schemes based on state-space structures. These so-called blind state-space receivers (BSSR) do not require knowledge of the propagation parameters or spreading code sequences of the users but rely on the statistical independence assumption among the source signals. We then develop and derive three update laws to enhance the performance of the blind detector. Also, we upgrade three semi-blind adaptive detectors based on the incorporation of the RAKE receiver and the stochastic gradient algorithms used in several blind adaptive signal processing algorithms, namely FastICA, RobustICA, and principal component analysis (PCA). Through simulation evidence, we verify the significant bit error rate (BER) and computational speed improvements achieved by these algorithms in comparison to other leading algorithms.

Lastly, a system realization of one of the developed algorithms is explored on ASIC and FPGA platforms in terms of cost, effectiveness, and economies of scale. Based on our findings on current state-of-the-art electronics, programmable FPGA designs are deemed the most effective technology for ICA hardware implementation at this time.
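The non-parametric algorithm above relies on Parzen window (kernel) density estimates of the source distributions. As a hedged illustration, and not the thesis's exact estimator or bandwidth choice, a Gaussian Parzen window estimate of a 1-D density looks like this:

```python
import numpy as np

def parzen_density(x_eval: np.ndarray, samples: np.ndarray, h: float) -> np.ndarray:
    """Gaussian Parzen-window density estimate at points x_eval.

    p_hat(x) = (1 / (n*h)) * sum_i K((x - x_i) / h), K a standard normal kernel.
    """
    diffs = (x_eval[:, None] - samples[None, :]) / h
    K = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)
    return K.sum(axis=1) / (len(samples) * h)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 2000)           # draws from a standard normal
# Silverman's rule-of-thumb bandwidth (an illustrative choice, not the
# bandwidth used in the dissertation).
h = 1.06 * samples.std() * len(samples) ** (-1 / 5)
x = np.linspace(-3.0, 3.0, 7)
p_hat = parzen_density(x, samples, h)
p_true = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
print(np.round(p_hat, 3))
```

In an ICA contrast such as the CCS-DIV, estimates of this kind replace the unknown marginal and joint densities, making the divergence computable directly from the demixed samples.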
- Title
- EFFECT OF GATE-OXIDE DEGRADATION ON ELECTRICAL PARAMETERS OF SILICON AND SILICON CARBIDE POWER MOSFETS
- Creator
- KARKI, UJJWAL
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
The power MOSFET (Metal Oxide Semiconductor Field Effect Transistor) is recognized as a crucial component of many power-electronic systems. The physical structure of both Silicon and Silicon Carbide power MOSFETs requires an oxide layer as a dielectric material between the gate terminal and the semiconductor surface. The gate-oxide material, which is predominantly silicon dioxide, slowly degrades in the presence of an electric field. Over time, the degradation process significantly alters the electrical parameters of power MOSFETs, negatively impacting the performance, reliability, and efficiency of the power converters they are used in. To monitor this, the electrical parameters are utilized as precursors (or failure indicators) of gate-oxide degradation.

Despite extensive investigation of gate-oxide degradation in Silicon (Si) power MOSFETs, the research literature has not attributed a consistent variation pattern to its gate-oxide degradation precursors. This dissertation investigates the variation pattern of existing precursors: a) threshold voltage, b) gate-plateau voltage, and c) on-resistance. While confirming the previously reported dip-and-rebound variation pattern of the threshold voltage and the gate-plateau voltage, a similar dip-and-rebound variation pattern is also identified in the on-resistance of Si power MOSFETs. Furthermore, a new online precursor of gate-oxide degradation, the gate-plateau time, is proposed and demonstrated to exhibit a similar dip-and-rebound variation pattern. The gate-plateau time is also shown to be the most sensitive online precursor for observing the rebound phenomenon. In addition, analytical expressions are derived to correlate the effect of gate-oxide degradation with the simultaneous dip-and-rebound variation pattern in all four precursors. The dip-and-rebound variation pattern is experimentally confirmed by inducing accelerated gate-oxide degradation in two different commercial Si power MOSFETs.

While multiple electrical parameters have been identified as precursors for monitoring gate-oxide degradation in Si MOSFETs, very few precursors have been proposed for Silicon Carbide (SiC) power MOSFETs. This dissertation proposes that, in addition to the threshold voltage, the other online precursors identified for Si power MOSFETs, the gate-plateau voltage and the gate-plateau time, are also effective for monitoring the gate-oxide degradation process in SiC power MOSFETs. Though the gate-oxide material is the same in both Si and SiC power MOSFETs, the effect of gate-oxide degradation on the variation pattern of electrical parameters is different. In contrast to the dip-and-rebound variation pattern of precursors in Si MOSFETs, the research literature has attributed a consistent linear-with-log-stress-time variation pattern to the threshold-voltage shift in SiC power MOSFETs. It is shown that both the gate-plateau voltage and the gate-plateau time increase in a linear-with-log-stress-time manner similar to the threshold voltage. Analytical expressions are derived to correlate the effect of gate-oxide degradation with the simultaneous linear-with-log-stress-time variation pattern in all three online precursors. The increasing trend of precursors is experimentally confirmed by inducing accelerated gate-oxide degradation in both planar and trench-gate commercial SiC power MOSFETs under high-voltage, high-temperature, and hard-switching conditions.
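The gate-plateau (Miller plateau) time can be read off a turn-on gate-voltage waveform as the interval where Vgs stalls between its threshold and final drive levels. A hedged sketch on a synthetic piecewise waveform follows; the voltage levels, time scale, and flatness threshold are illustrative assumptions, not measurements or methods from the dissertation:

```python
import numpy as np

# Synthetic turn-on waveform: ramp to the plateau level, hold (the Miller
# plateau), then ramp to the final gate drive. Illustrative values only.
t = np.linspace(0.0, 10e-6, 10001)             # 10 us window, 1 ns step
v_gs = np.interp(t, [0.0, 2e-6, 5e-6, 8e-6, 10e-6],
                    [0.0, 5.0,  5.0,  10.0, 10.0])

def plateau_time(t: np.ndarray, v: np.ndarray, v_final: float,
                 slope_eps: float = 1e5) -> float:
    """Duration where dV/dt is ~0 while V sits between 20% and 90% of final.

    The 20%/90% window excludes the pre-turn-on and fully-on flat regions;
    slope_eps (V/s) is an illustrative flatness threshold.
    """
    dvdt = np.gradient(v, t)
    mask = (np.abs(dvdt) < slope_eps) & (v > 0.2 * v_final) & (v < 0.9 * v_final)
    return float(mask.sum() * (t[1] - t[0]))

tp = plateau_time(t, v_gs, v_final=10.0)
print(f"plateau time ~ {tp * 1e6:.2f} us")
```

On measured waveforms, noise would call for filtering before the derivative test; the point is only that the plateau duration is directly extractable online from the gate waveform, which is what makes it attractive as a precursor.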
- Title
- Microwave Imaging Using a Tunable Reflectarray Antenna and Superradiance in Open Quantum Systems
- Creator
- Tayebi, Amin
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
Theory, experiment, and computation are the three paradigms for scientific discovery. This dissertation includes work in all three areas. The first part is dedicated to the practical design and development of a microwave imaging system, a problem mostly experimental and computational in nature. The second part discusses theoretical foundations of possible future advances in quantum signal transmission.

In part one, a new active microwave imaging system is proposed. At the heart of this novel system lies an electronically reconfigurable beam-scanning reflectarray antenna. The high tuning capability of the reflectarray provides a broad steering range of ±60 degrees in two distinct frequency bands: S and F bands. The array, combined with an external source, dynamically steers the incoming beam across this range in order to generate multi-angle projection data for target detection. The collected data are then used for image reconstruction by means of the time-reversal signal processing technique. Our design significantly reduces cost and operational complexity compared to traditional imaging systems. In conventional systems, the region of interest is enclosed by a costly array of transceiver antennas, which additionally requires complicated switching circuitry. The inclusion of the beam-scanning array and the use of a single source eliminate the need for multiple antennas and the involved circuitry. In addition, unlike conventional setups, this system is not constrained by the dimensions of the object under test. Therefore, the inspection of large objects, such as extended laminate structures, composite airplane wings, and wind turbine blades, becomes possible. Experimental results are presented for the detection of various dielectric targets, as well as anomalies within them such as defects and metallic impurities, using the imaging prototype.

The second part includes the theoretical consideration of three different problems: quantum transport through two different nanostructures, a solid-state device suitable for quantum computing, and spherical plasmonic nanoantennas and waveguides. These three physically different systems are all investigated within a single quantum theory: the effective non-Hermitian Hamiltonian framework. The non-Hermitian Hamiltonian approach is a convenient mathematical formalism for the description of open quantum systems. This method, based on the Feshbach projection formalism, provides an alternative to popular methods such as the Feynman diagrammatic techniques and the master equation approach that are commonly used for studying open quantum systems. It is formally exact but very flexible and can be adjusted to many specific situations. One striking phenomenon, emerging when the continuum coupling is sufficiently strong and the number of open channels is relatively small compared to the number of involved intrinsic states, is the so-called superradiance. Being an analog of superradiance in quantum optics, this term stands for the formation in the system of a collective superposition of the intrinsic states coherently coupled to the same decay channel. The footprint of superradiance in each system is investigated in detail. In the quantum transport problem, signal transmission is greatly enhanced at the transition to superradiance. In the proposed solid-state charge qubit, the superradiant states effectively protect the remaining internal states from decaying into the continuum and hence increase the lifetime of the device. Finally, the superradiance phenomenon provides a tool to manipulate light at the nanoscale. It is responsible for the existence of modes with distinct radiation properties in a system of coupled plasmonic nanoantennas: superradiant states with enhanced radiation and dark modes with extremely damped radiation. Furthermore, similar to the quantum case, energy transport through a plasmonic waveguide is greatly enhanced.
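The width segregation behind superradiance can be seen in a toy effective non-Hermitian Hamiltonian, H_eff = H_0 - (i/2) γ |a⟩⟨a|, with one open decay channel shared by several intrinsic states. The level spacings and coupling strengths below are arbitrary illustrative numbers, not parameters from any of the systems studied in the dissertation:

```python
import numpy as np

# Toy model: N intrinsic states, one common decay channel with coupling
# vector a, so H_eff = H0 - (i/2) * gamma * |a><a|.
N = 5
H0 = np.diag(np.linspace(0.0, 0.4, N))   # closely spaced real energy levels
a = np.ones(N) / np.sqrt(N)              # equal coupling to the one channel

def decay_widths(gamma: float) -> np.ndarray:
    """Widths Gamma_k = -2 Im(E_k) of the eigenvalues of H_eff, sorted ascending."""
    H_eff = H0 - 0.5j * gamma * np.outer(a, a)
    E = np.linalg.eigvals(H_eff)
    return np.sort(-2.0 * E.imag)

weak = decay_widths(0.05)    # weak coupling: widths shared roughly evenly
strong = decay_widths(10.0)  # strong coupling: one superradiant state
print("weak  :", np.round(weak, 4))
print("strong:", np.round(strong, 4))
```

The total width (the trace, γ|a|²) is conserved, so in the strong-coupling case one eigenstate absorbs nearly the entire width while the remaining states become long-lived "trapped" states, which is the protection mechanism mentioned for the charge qubit.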
- Title
- Teaching electricity to freshman physical science students through constructivism
- Creator
- Van Horn, Jerry
- Date
- 2005
- Collection
- Electronic Theses & Dissertations
- Title
- MULTIPACTOR DISCHARGE WITH TWO-FREQUENCY RF FIELDS
- Creator
- Iqbal, Asif
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Multipactor is a nonlinear ac discharge in which a high-frequency rf field creates an electron avalanche sustained through secondary electron emission from metallic or dielectric surfaces. Multipactor discharge can adversely affect various rf systems, such as telecommunications systems, high-power electromagnetic sources, and accelerator structures. The restricted frequency spectrum and the cluttered satellite orbits require a single spacecraft to perform the same or enhanced functions which previously required several satellites. This necessitates complex multi-frequency operation for a much-enlarged orbital capacity and mission, where the requirement of a high-power rf payload significantly increases the threat of multipactor. This work provides a comprehensive understanding of multipactor discharge driven by two-frequency rf fields. The study provides important results on single- and two-surface multipactor, including multipactor mitigation, migration of the electron trajectory, and frequency-domain analysis.

We use Monte Carlo simulations and analytical calculations to obtain single-surface multipactor susceptibility diagrams with two-frequency rf fields. We present a novel multiparticle Monte Carlo simulation model with adaptive time steps to investigate the time-dependent physics of single-surface multipactor. The effects of the relative strength and phase of the second carrier mode, as well as the frequency separation between the two carrier modes, are studied. It is found that two-frequency operation can reduce the multipactor strength compared to single-frequency operation with the same total rf power. Migration of the multipactor trajectory is demonstrated for different configurations of the two-frequency rf fields. Formation of beat waves is observed in the temporal profiles of the surface charging electric field for small frequency separation between the two carrier modes. We study the amplitude spectrum of the surface charging field due to multipactor in the frequency domain. It is found that for single-frequency rf operation, the normal electric field consists of pronounced even harmonics of the driving rf frequency. For two-frequency rf operation, spectral peaks are observed at various intermodulation products of the rf carrier frequencies: pronounced peaks appear at the sum and difference frequencies of the carriers, at multiples of those frequencies, and at multiples of the carrier frequencies themselves. We also study two-surface multipactor with single- and two-frequency rf fields using Monte Carlo simulations and CST. The effects of the relative strength and phase of the second carrier mode, and of the frequency separation between the two carrier modes, on multipactor susceptibility are studied. Regions of single and mixed multipactor modes are observed in the susceptibility chart. The effect of space charge on multipactor susceptibility and on the time-dependent physics is also studied.
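The intermodulation peaks described above are a generic signature of a nonlinear response to a two-tone drive. As a hedged illustration, with a simple quadratic nonlinearity standing in for the actual multipactor surface-charging dynamics and arbitrary tone frequencies, squaring a two-tone signal produces exactly the sum, difference, and second-harmonic lines:

```python
import numpy as np

# Two-tone drive at f1 and f2; a quadratic nonlinearity generates
# intermodulation products. All frequencies are illustrative placeholders.
fs, T = 1000.0, 1.0                    # 1 kHz sampling, 1 s record
t = np.arange(0.0, T, 1.0 / fs)
f1, f2 = 50.0, 60.0
drive = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
response = drive**2                    # toy nonlinear "surface" response

# One-sided amplitude spectrum; bin k corresponds to k Hz for this record.
spec = np.abs(np.fft.rfft(response)) * 2.0 / len(t)
peaks = [int(k) for k in np.nonzero(spec > 0.1)[0]]
print("spectral lines (Hz):", peaks)   # DC, f2-f1, 2*f1, f1+f2, 2*f2
```

The lines at f2-f1 = 10 Hz and f1+f2 = 110 Hz are the lowest-order intermodulation products; the real multipactor spectrum contains higher-order products as well because the electron dynamics are more strongly nonlinear than a simple square law.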
- Title
- Beam-Wave Interaction for a Terahertz Solid-state Amplifier
- Creator
- Hodek, Matthew Steven
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
The push of conventional electronic amplifier technologies into the deep submillimeter wavelength and THz frequency ranges of the electromagnetic (EM) spectrum has been limited by constraints on their fundamental physics of operation and by fabrication limitations. At the same time, optical amplifier technologies can only access this spectral region using inefficient frequency down-conversion. This struggle for practical power amplifiers in the THz band will likely require a new type of amplifier and has led to a desire for a solid-state beam-wave style amplifier using semiconductor fabrication techniques. While there has been considerable progress in creating transistors in the THz region, the small size required to achieve the needed transit times and gate capacitances generally precludes them from producing power above 1 mW. Vacuum electronic devices (VEDs), such as traveling wave amplifiers (TWAs), have also shown great progress toward this band. A TWA is an example of a beam-wave style device where gain is achieved by transferring energy from an electron beam to an EM wave at electrically large length scales. However, as traditional TWAs are scaled to higher frequencies, the shrinking wavelength makes fabrication of the corresponding interaction circuit structures and minuscule beam tunnels increasingly difficult through micro-machining or other subtractive metal shaping. Thus, combining the strengths of both these systems into a single device has some merit. Solid-state TWAs have been attempted over many years without success, largely due to slow electron drift velocities resulting in beam equivalents that are unsuitable for synchronization with EM slow-wave structures. 
One possible path towards a beam-wave style THz solid-state amplifier is to couple to a plasma wave characterized by phase propagation much faster than the electron velocity limited by scattering in a material, but this requires a substantial redevelopment of the fundamental beam-wave interaction analysis. Presented here is a novel analysis built upon the prior work on solid-state and VED TWAs, with a primary difference in the nature of the charge carrier behavior. In this work the electron beam, which was previously described as bulk carriers in a semiconductor, is now formed with an ungated 2D electron gas (2DEG). A freely propagating plasma wave is present in the dense 2DEG and takes the place of the typical space charge wave present in VED devices. Example calculations are compared to generic VED TWA behavior, and the basic performance of a realizable device is analyzed through the use of a gallium nitride heterostructure material system and achievable fabrication strategies. It is shown that the concept of a TWA using a 2DEG plasma wave is impractical at best and fundamentally flawed at worst. However, the understanding gained lays some of the groundwork for other possible beam-wave interaction style amplifiers using a fast 2DEG plasma wave.
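The "much faster than the electron velocity" claim can be illustrated with the standard ungated-2DEG plasmon dispersion, ω(q) = √(n_s e² q / 2m*ε), which gives a phase velocity v_p = ω/q. All material parameters below are illustrative GaN-like values, not taken from the dissertation:

```python
import math

# Physical constants
e    = 1.602e-19   # elementary charge, C
m0   = 9.109e-31   # electron rest mass, kg
eps0 = 8.854e-12   # vacuum permittivity, F/m

# Illustrative GaN-like 2DEG parameters (assumptions, not the dissertation's values)
n_s   = 1e17               # sheet density, m^-2  (1e13 cm^-2)
m_eff = 0.2 * m0           # effective mass
eps   = 9.0 * eps0         # effective permittivity
q     = 2 * math.pi / 1e-6 # wavenumber for a 1 um plasmon wavelength

# Phase velocity of the ungated 2DEG plasmon: v_p = sqrt(n_s e^2 / (2 m* eps q))
v_p = math.sqrt(n_s * e**2 / (2 * m_eff * eps * q))
v_drift = 2e5  # rough saturation drift velocity in GaN, m/s (assumption)
print(f"v_p = {v_p:.2e} m/s, ratio to drift velocity ~ {v_p / v_drift:.0f}x")
```

With these numbers the plasma wave travels more than an order of magnitude faster than the scattering-limited drift velocity, which is exactly why it, rather than the carrier stream itself, is the candidate for synchronization with a slow-wave circuit.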
- Title
- Safe Control Design for Uncertain Systems
- Creator
- Marvi, Zahra
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
This dissertation investigates the problem of safe control design for systems under model and environmental uncertainty. Reinforcement learning (RL) provides an interactive learning framework in which the optimal controller is sequentially derived based on instantaneous reward. Although powerful, safety considerations are a barrier to the wide deployment of RL algorithms in practice. To overcome this problem, we propose an iterative safe off-policy RL algorithm. The cost function that encodes the designer's objectives is augmented with a control barrier function (CBF) to ensure safety and optimality. The proposed formulation provides look-ahead and proactive safety planning, in which safety is planned and optimized along with performance to minimize intervention with the optimal controller. Extensive safety and stability analysis is provided, and the proposed method is implemented using an off-policy algorithm without requiring complete knowledge of the system dynamics. This line of research is then extended to guarantee safety and stability even during the data-collection and exploration phases, in which random noisy inputs are applied to the system. However, satisfying the safety of actions when little is known about the system dynamics is a daunting challenge. We present a novel RL scheme that ensures the safety and stability of linear systems during the exploration and exploitation phases. This is obtained through concurrent model learning and control, in which an efficient learning scheme is employed to prescribe the learning behavior. This characteristic is then employed to apply only safe and stabilizing controllers to the system. First, the prescribed errors are employed in a novel adaptive robustified control barrier function (AR-CBF), which guarantees that the states of the system remain in the safe set even when learning is incomplete. 
Therefore, the noisy input in the exploratory data-collection phase and the optimal controller in the exploitation phase are minimally altered such that the AR-CBF criterion is satisfied and, therefore, safety is guaranteed in both phases. It is shown that under the proposed prescribed RL framework, the model learning error is a vanishing perturbation to the original system; a stability guarantee is therefore provided even during exploration, when noisy random inputs are applied to the system. Learning-enabled barrier-certified safe controllers for systems that operate in a shared and uncertain environment are then presented. A safety-aware loss function is defined and minimized to learn the uncertain and unknown behavior of external agents that affect the safety of the system. The loss function is defined based on the safe-set error, instead of the system-model error, and is minimized for both current samples and past samples stored in memory to assure a fast and generalizable learning algorithm for approximating the safe set. The proposed model learning and CBF are then integrated to form a learning-enabled zeroing CBF (L-ZCBF), which employs the approximated trajectory information of the external agents provided by the learned model but shrinks the safety boundary in case of an imminent safety violation using instantaneous sensory observations. It is shown that the proposed L-ZCBF assures safety guarantees during learning, even in the face of inaccurate or simplified approximation of external agents, which is crucial in highly interactive environments. Finally, the cooperative capability of agents in a multi-agent environment is investigated for the sake of safety guarantees. CBFs and information-gap theory are integrated to obtain robust safe controllers for multi-agent systems with different levels of measurement accuracy. 
A cooperative framework for the construction of CBFs for every pair of agents is employed to maximize the horizon of uncertainty under which the safety of the overall system is maintained. Information-gap theory is leveraged to determine the contribution and share of each agent in the construction of the CBFs. This results in the highest possible robustness against measurement uncertainty. By employing the proposed approach in constructing CBFs, a larger horizon of uncertainty can be safely tolerated, and even the failure of one agent to gather accurate local data can be compensated for by cooperation between agents. The effectiveness of the proposed methods is extensively examined in simulation results.
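The minimal-intervention idea behind the CBF-based safety filters described above can be sketched for a scalar toy system (this example is ours, not the dissertation's formulation):

```python
def cbf_filter(u_nom, x, x_max=1.0, alpha=2.0):
    """Zeroing-CBF safety filter for the scalar integrator x' = u with
    safe set h(x) = x_max - x >= 0.  The CBF condition h' + alpha*h >= 0
    reduces to u <= alpha*(x_max - x), so the nominal input is altered
    only when it would violate safety (minimal intervention)."""
    u_limit = alpha * (x_max - x)
    return min(u_nom, u_limit)

print(cbf_filter(0.5, 0.0))   # nominal input already safe, passed through
print(cbf_filter(5.0, 0.9))   # near the boundary, clipped to ~0.2
```

The same structure generalizes to the quadratic-program form used in CBF literature: minimize ||u - u_nom||² subject to the barrier inequality, which for this scalar case has exactly the clipping solution above.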
- Title
- EXTENDED REALITY (XR) & GAMIFICATION IN THE CONTEXT OF THE INTERNET OF THINGS (IOT) AND ARTIFICIAL INTELLIGENCE (AI)
- Creator
- Pappas, Georgios
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
The present research develops a holistic framework for and way of thinking about Deep Technologies related to Gamification, eXtended Reality (XR), the Internet of Things (IoT), and Artificial Intelligence (AI). Starting with the concept of gamification and the immersive technology of XR, we create interconnections with IoT and AI implementations. While each constituent technology has its own unique impact, our approach uniquely addresses the combinational potential of these technologies, which may have greater impact than any technology on its own. To approach the research problem more efficiently, the methodology divides it into smaller parts. For each part of the research problem, novel applications were designed and developed, including gamified tools, serious games, and AR/VR implementations. We apply the proposed framework in two different domains: autonomous vehicles (AVs) and distance learning. Specifically, in Chapter 2, an innovative hybrid tool for distance learning is showcased where, among other features, the fusion with IoT provides a novel pseudo-multiplayer mode. This mode may transform advanced asynchronous gamified tools into synchronous ones by enabling or disabling virtual events and phenomena, enhancing the student experience. Next, in Chapter 3, along with gamification, the combination of XR with IoT data streams is presented, this time in an automotive context. We showcase how this fusion of technologies provides low-latency monitoring of vehicle characteristics, and how this can be visualized in augmented and virtual reality using low-cost hardware and services. 
This part of our proposed framework provides a methodology for creating any type of Digital Twin with near-real-time data visualization. Following that, in Chapter 4 we establish the second part of the suggested holistic framework, where Virtual Environments (VEs) in general can work as synthetic data generators and thus be a great source of artificial data suitable for training AI models. This part of the research includes two novel implementations: the Gamified Digital Simulator (GDS) and the Virtual LiDAR Simulator. Having established the holistic framework, in Chapter 5 we “zoom in” to gamification, exploring deeper aspects of virtual environments, and discuss how serious games can be combined with other facets of virtual layers (cyber ranges, virtual learning environments) to provide enhanced training and advanced learning experiences. Lastly, in Chapter 6, “zooming out” from gamification, an additional enhancement layer is presented. We showcase the importance of human-centered design via an implementation that simulates AV-pedestrian interactions in a virtual and safe environment.
- Title
- 3d-printed lightweight wearable microsystems with highly conductive interconnects
- Creator
- Alforidi, Ahmad Fudy
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
There is great demand for mass production of electronics in a wide range of applications including, but not limited to, ubiquitous and lightweight wearable devices for the development of smart homes and health monitoring systems. The advancement of additive manufacturing in the electronics industry and academia shows a potential replacement of conventional electronics fabrication methods. However, conductivity is the most difficult issue in the implementation of high-performance 3D-printed microsystems. As most 3D-printed electronics utilize ink-based conductive materials for electrical connections, they require high curing temperatures to achieve low resistivity (150 °C for obtaining nearly 2.069 × 10⁻⁶ Ω·m in copper connects), which is not suitable for most 3D-printing filaments. This seriously limits the availability of many lightweight 3D-printable materials in microsystem applications because these materials usually have relatively low glass-transition temperatures (< 120 °C). Considering that pristine copper films thicker than 49 nm can offer a very low bulk resistivity of 1.67 × 10⁻⁸ Ω·m, a new 3D-printing-compatible connection fabrication approach capable of depositing pristine copper structures with no need for curing processes is highly desirable. Therefore, a new technology with the ability to manufacture 3D-printed structures with high-performance electronics is necessary. In this dissertation, novel 3D-printed metallization processes for multilayer microsystems made of lightweight material on planar and non-planar surfaces are presented. The incorporation of metal interconnects in the process is accomplished through evaporating, sputtering, and electroplating techniques. 
This approach involves the following critical processes with unique features: a) patterning of metal interconnects using self-aligned 3D-printed shadow masks, b) fabrication of temporary connections between isolated metal segments by 3D printing followed by metallization, which host the subsequent electroplating process, and c) fabrication of vertical interconnect access (VIA) features by 3D printing followed by metallization, which enable electrical connections between multiple layers of the microsystem for miniaturization. The presented technique offers resistivity approaching the bulk value, with no curing temperature needed after deposition. Since the ultimate goal is developing lightweight wearable microsystems, this approach is demonstrated for two layers and can easily be extended to multilayer microsystems, enabling the realization and miniaturization of complex systems. In addition, the variety of filaments used in 3D printers provides opportunities to study the implementation of these processes in many electronics fields, including flexible electronics. Therefore, the integration of physical vapor deposition systems with 3D printing machines is very promising for the future industry of 3D-printed microsystems.
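The resistivity gap quoted in the abstract translates directly into interconnect resistance via R = ρL/(Wt). A quick sketch (the trace geometry here is hypothetical; the two resistivities are the values quoted above):

```python
def trace_resistance(rho, length, width, thickness):
    """DC resistance of a rectangular interconnect: R = rho * L / (W * t)."""
    return rho * length / (width * thickness)

# Resistivities quoted in the abstract (ohm-m); geometry is illustrative.
rho_ink = 2.069e-6   # cured conductive ink
rho_cu  = 1.67e-8    # pristine copper (bulk)
L, W, t = 1e-2, 100e-6, 1e-6   # 1 cm long, 100 um wide, 1 um thick trace

r_ink = trace_resistance(rho_ink, L, W, t)
r_cu  = trace_resistance(rho_cu,  L, W, t)
print(f"ink: {r_ink:.0f} ohm, copper: {r_cu:.1f} ohm, ratio ~ {r_ink / r_cu:.0f}x")
```

For identical geometry the cured-ink trace is roughly 124× more resistive than pristine copper, which is the quantitative motivation for curing-free copper deposition.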
- Title
- Millimeter-wave microsystems using additive manufacturing process
- Creator
- Qayyum, Jubaid Abdul
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
-
In recent years, researchers have been working to explore the millimeter-wave frequency domain for wireless technology to cope with the immense demand for high bandwidth for faster wireless applications such as communication and remote sensing. In wireless communication technology, the high frequency of mm-wave systems offers high-bandwidth transmission for faster data transfer. The mm-wave band has also been approved by the FCC for commercial applications like 5G communications, which will deliver more reliable and scalable cellular technology with high rates and low latency for network users. It also promises to facilitate high-rate communication among devices and humans as well as other devices, a phenomenon that gave rise to the emerging field known as the "Internet-of-Things". For remote sensing, the higher frequencies of the mm-wave band offer higher spatial and range resolution that can enable more intelligent sensor technologies. The fabrication and manufacturing of mm-wave systems become increasingly difficult and expensive due to size reduction at smaller wavelengths. To overcome these problems, system-on-package (SoP) technology has gained a lot of attention. The SoP approach combines multiple integrated circuits and passive components using different packaging and interconnect approaches into a miniaturized microsystem module. Additive manufacturing (AM), also colloquially known as 3-D printing, is considered a promising method for packaging in SoP solutions because it enables rapid prototyping and large-scale production at an affordable cost and minimal environmental impact. This work primarily focuses on the development of mm-wave microsystems by integrating chips with an AM process using aerosol jet printing (AJP). 
Several mm-wave transceiver components that range from Ka-band to W-band are designed and realized in a state-of-the-art silicon-germanium IC foundry process, and are characterized for use in a complete transceiver system with 3-D printed packaging. These include a 28-60 GHz single-pole double-throw (SPDT) switch, a 28-60 GHz low-noise amplifier (LNA), a 15-100 GHz downconverting mixer, a K-band upconverting mixer, a V-band upconverting mixer, and a 90 GHz MMIC frequency tripler. The feasibility of using AJP in the mm-wave regime and the ink characteristics were also studied. For any AM process to be an all-in-one packaging solution, it should have the capability of realizing conducting as well as dielectric materials. Silver and polyimide inks were used in this work to demonstrate a chip-to-chip interconnection, and a comparison with traditional packaging techniques is also discussed. An ultra-wideband interconnect from 0.1-110 GHz was implemented using AJP. The conductivity of the silver ink and its viability for use in flexible electronics were also considered.
- Title
- Novel simulation and data processing algorithms for eddy current inspection
- Creator
- Efremov, Anton
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
-
Eddy Current Testing (ECT) is a widely used technique in the area of Nondestructive Evaluation. It offers a cheap, fast, non-contact way of finding surface and subsurface defects in a conductive material. Due to the development of new designs of eddy current probe coils and the advance of model-based solutions to inverse problems in ECT, there is an emerging need for fast and accurate numerical methods for efficient modeling and processing of the data. This work contributes to the two directions of computational ECT: eddy current inspection simulation (the "forward problem") and analysis of the measured data for automated defect detection (the "inverse problem"). A new approach to simulating low-frequency electromagnetics in 3D is presented, based on a combination of a frequency-domain reduced vector potential formulation with a boundary condition based on a Dirichlet-to-Neumann operator. The equations are solved via the Finite Element Method (FEM), and a novel technique for the fast solution of the related linear system is proposed. The performance of the method is analyzed for a few representative ECT problems. The obtained numerical results are validated against analytic solutions, other simulation codes, and experimental data. The inverse problem of interpreting measured ECT data is also a significant challenge in many practical applications. Very often, the defect indication in a measurement is very subtle due to the large contribution from the geometry of the test sample, making defect detection difficult. This thesis presents a novel approach to address this problem. The developed algorithm is applied to real problems of detecting defects under steel fasteners in aircraft geometries, using 2D data obtained from a raster scan of a multilayer structure with low-frequency eddy current excitation and GMR (Giant Magnetoresistive) sensors. The algorithm is also applied to data obtained from EC inspection of heat-exchange tubes in a nuclear power plant.
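The use of low-frequency excitation for defects buried in multilayer structures follows from the skin effect: penetration depth shrinks with frequency. A quick illustration using the standard formula (the material values here are illustrative, not from the thesis):

```python
import math

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Electromagnetic skin depth: delta = sqrt(2 / (omega * mu * sigma))."""
    mu = mu_r * 4e-7 * math.pi  # permeability, H/m
    return math.sqrt(2.0 / (2 * math.pi * freq_hz * mu * sigma))

sigma_al = 3.5e7  # illustrative aluminum conductivity, S/m
for f in (100.0, 1e3, 10e3):
    print(f"{f:>8.0f} Hz: delta = {skin_depth(f, sigma_al) * 1e3:.2f} mm")
# lower excitation frequency -> deeper eddy current penetration,
# hence low-frequency excitation (with GMR sensors) for subsurface defects
```

At 1 kHz the skin depth in aluminum is a few millimeters, deep enough to reach defects under fasteners, whereas at typical high-frequency ECT the fields barely penetrate the first layer.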
- Title
- Thermal design studies in niobium and helium for superconducting radio frequency cavities
- Creator
- Aizaz, Ahmad
- Date
- 2006
- Collection
- Electronic Theses & Dissertations
- Title
- REACTIVE ION ENHANCED MAGNETRON SPUTTERING OF NITRIDE THIN FILMS
- Creator
- Talukder, Al-Ahsan
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Magnetron sputtering is a popular vacuum plasma coating technique used for depositing metals, dielectrics, semiconductors, alloys, and compounds onto a wide range of substrates. In this work, we present two popular types of magnetron sputtering, pulsed DC and RF magnetron sputtering, for depositing piezoelectric aluminum nitride (AlN) thin films with high Young’s modulus. The effects of important process parameters on the plasma I-V characteristics, deposition rate, and properties of the deposited AlN films are studied comprehensively. The effects of these process parameters on the Young’s modulus of the deposited films are also presented. Scanning electron microscope imaging revealed a c-axis-oriented columnar growth of AlN. The performance of surface acoustic devices utilizing the AlN films deposited by magnetron sputtering is also presented, confirming the differences in quality and microstructure of the pulsed DC and RF sputtered films. The RF-sputtered AlN films showed a denser microstructure with smaller grains and a smoother surface than the pulsed DC sputtered films. However, the deposition rate of RF sputtering is about half that of the pulsed DC sputtering process. We also present a novel ion-source-enhanced pulsed DC magnetron sputtering process for depositing high-quality nitrogen-doped zinc telluride (ZnTe:N) thin films. This ion-source-enhanced magnetron sputtering provides an increased deposition rate, efficient N-doping, and improved electrical, structural, and optical properties compared with traditional magnetron sputtering. Ion-source-enhanced deposition leads to ZnTe:N films with smaller lattice spacing and broader X-ray diffraction peaks, which indicate denser films with smaller crystallites embedded in an amorphous matrix.
- Title
- Design and Analysis of Sculpted Rotor Interior Permanent Magnet Machines
- Creator
- Hayslett, Steven Lee
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Design of interior permanent magnet electrical machines is complex. Interior permanent magnet machines offer a good balance of cost, efficiency, and torque/power density. Maximum torque and power production of an interior permanent magnet machine is achieved by balancing design choices related to the permanent magnet and salient features. The embedded magnet within the salient structure of the rotor lamination results in an increase in harmonic content. In addition, the interaction of the armature, control angle, and rotor reluctance structure creates additional harmonic content. These harmonics result in increased torque ripple, radial forces, losses, and other unwanted phenomena. Further improvements in torque and power density, and techniques to minimize harmonics, are necessary. A typical interior permanent magnet machine designed at the maximum torque-per-amp condition operates at neither the maximum magnet torque nor the maximum saliency torque, but at the best combination of the two. The use of rotor surface features to align the magnet and reluctance axes allows for improvement of torque and power density. Reduction of flux and torque harmonics is also possible through careful design of rotor sculpt features included at or near the surface of the rotor. Finite element models provide high-fidelity, accurate results for machine performance but do not give insight into the relationship between design parameters and performance. Winding factor models describe the machine with a set of Fourier series equations, providing access to the harmonic content of both parameters and performance. Direct knowledge of this information provides better insight, a clear understanding of interactions, and the ability to develop a more efficient design process. 
A new analytical winding function model of the single-V IPM machine is introduced, which accounts for the sculpted rotor, and its use in the machine design approach is demonstrated. Rotor feature trends are established and utilized to increase design intuition and reduce dependency upon lengthy design-of-experiment optimization processes. The shape and placement of the rotor features, derived from the optimization process, show the improvement in average torque and torque ripple of the IPM machine.
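The harmonic bookkeeping that winding-factor models provide can be made concrete with the standard distribution and pitch factor formulas, k_w(n) = k_d(n)·k_p(n). The slot count, slot angle, and coil pitch below are illustrative textbook values, not the dissertation's machine:

```python
import math

def winding_factor(n, q=2, slot_angle=math.pi / 6, pitch=5 / 6):
    """n-th harmonic winding factor k_w = k_d * k_p for a distributed,
    short-pitched winding: q coils/pole/phase, slot_angle in electrical rad,
    pitch as a fraction of a full pole pitch.  Standard textbook formulas;
    the default parameters are illustrative, not the single-V IPM studied."""
    kd = math.sin(n * q * slot_angle / 2) / (q * math.sin(n * slot_angle / 2))
    kp = math.sin(n * pitch * math.pi / 2)
    return kd * kp

for n in (1, 5, 7):
    print(f"k_w({n}) = {winding_factor(n):+.3f}")
# a 5/6 pitch with q = 2 keeps the fundamental near 0.93 while
# suppressing the 5th and 7th harmonics to below 0.07
```

This is the kind of closed-form harmonic visibility that a finite element model hides: each design parameter enters the Fourier coefficients explicitly.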
- Title
- TENSOR LEARNING WITH STRUCTURE, GEOMETRY AND MULTI-MODALITY
- Creator
- Sofuoglu, Seyyid Emre
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
With the advances in sensing and data acquisition technology, it is now possible to collect data from different modalities and sources simultaneously. Most of these data are multi-dimensional in nature and can be represented by multiway arrays known as tensors. For instance, a color image is a third-order tensor defined by two indices for spatial variables and one index for color mode. Some other examples include color video, medical imaging such as EEG and fMRI, spatiotemporal data encountered in urban traffic monitoring, etc. In the past two decades, tensors have become ubiquitous in signal processing, statistics, and computer science. Traditional unsupervised and supervised learning methods developed for one-dimensional signals do not translate well to higher-order data structures, as they become computationally prohibitive with increasing dimensionality. Vectorizing high-dimensional inputs creates problems in nearly all machine learning tasks due to exponentially increasing dimensionality, distortion of data structure, and the difficulty of obtaining a sufficiently large training sample size. In this thesis, we develop tensor-based approaches to various machine learning tasks. Existing tensor-based unsupervised and supervised learning algorithms extend many well-known algorithms, e.g. 2-D component analysis, support vector machines, and linear discriminant analysis, with better performance and lower computational and memory costs. Most of these methods rely on Tucker decomposition, which has exponential storage complexity requirements; CANDECOMP-PARAFAC (CP) based methods, which might not have a solution; or Tensor Train (TT) based solutions, which suffer from exponentially increasing ranks. Many tensor-based methods have quadratic (w.r.t. the size of data) or higher computational complexity, and similarly high memory complexity. Moreover, existing tensor-based methods are not always designed with the particular structure of the data in mind. 
Many of the existing methods use purely algebraic measures as their objective, which might not capture the local relations within data. Thus, there is a necessity to develop new models with better computational and memory efficiency, with the particular structure of the data and problem in mind. Finally, as tensors represent the data with more faithfulness to the original structure compared to vectorization, they also allow coupling of heterogeneous data sources where the underlying physical relationship is known. Still, most of the current work on coupled tensor decompositions does not explore supervised problems. In order to address the issues around computational and storage complexity of tensor-based machine learning, in Chapter 2 we propose a new tensor train decomposition structure, which is a hybrid between Tucker and Tensor Train decompositions. The proposed structure is used to implement Tensor Train based supervised and unsupervised learning frameworks: linear discriminant analysis (LDA) and graph-regularized subspace learning. The algorithm is designed to solve extremal eigenvalue-eigenvector pair computation problems, which can be generalized to many other methods. The supervised framework, Tensor Train Discriminant Analysis (TTDA), is evaluated in a classification task at varying storage complexities with respect to classification accuracy and training time on four different datasets. The unsupervised approach, Graph Regularized TT, is evaluated on a clustering task with respect to clustering quality and training time at various storage complexities. Both frameworks are compared to discriminant analysis algorithms with similar objectives based on Tucker and TT decompositions. In Chapter 3, we present an unsupervised anomaly detection algorithm for spatiotemporal tensor data. 
The algorithm models the anomaly detection problem as a low-rank plus sparse tensor decomposition problem, where the normal activity is assumed to be low-rank and the anomalies are assumed to be sparse and temporally continuous. We present an extension of this algorithm, where we utilize a graph regularization term in our objective function to preserve the underlying geometry of the original data. Finally, we propose a computationally efficient implementation of this framework by approximating the nuclear norm using graph total variation minimization. The proposed approach is evaluated on simulated data with varying levels of anomaly strength, anomaly length, and number of missing entries in the observed tensor, as well as on urban traffic data. In Chapter 4, we propose a geometric tensor learning framework using product graph structures for the tensor completion problem. Instead of purely algebraic measures such as rank, we use graph smoothness constraints that utilize geometric or topological relations within data. We prove the equivalence of a Cartesian graph structure to a TT-based graph structure under some conditions, and show empirically that introducing the relaxations required by these conditions does not deteriorate the recovery performance. We also outline a fully geometric learning method on product graphs for data completion. In Chapter 5, we introduce a supervised learning method for heterogeneous data sources such as simultaneous EEG and fMRI. The proposed two-stage method first extracts features taking the coupling across modalities into account and then introduces kernelized support tensor machines for classification. We illustrate the advantages of the proposed method on simulated and real classification tasks with a small number of high-dimensional training samples.
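The low-rank-plus-sparse model behind the anomaly detector can be sketched in its simplest matrix form (the dissertation's tensor formulation is more elaborate); this alternating truncated-SVD / soft-thresholding loop and its toy data are illustrative only:

```python
import numpy as np

def lowrank_plus_sparse(Y, rank=1, lam=0.5, n_iter=50):
    """Alternating-projection sketch of Y ~ L + S: L is rank-truncated via
    SVD (normal activity), S is soft-thresholded (sparse anomalies)."""
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        # low-rank update: truncated SVD of the de-sparsified data
        U, sig, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U[:, :rank] * sig[:rank]) @ Vt[:rank]
        # sparse update: soft-threshold the residual
        R = Y - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

# Toy data: rank-1 "normal activity" plus one strong anomalous entry
rng = np.random.default_rng(0)
L_true = np.outer(rng.normal(size=20), rng.normal(size=20))
S_true = np.zeros((20, 20))
S_true[3, 7] = 10.0
L, S = lowrank_plus_sparse(L_true + S_true)
print(np.unravel_index(np.abs(S).argmax(), S.shape))  # anomaly located at (3, 7)
```

The graph-regularized and total-variation variants in the text add terms that additionally couple neighboring entries of S, encoding the temporal continuity assumption.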
- Title
- Wireless Phase and Frequency Synchronization for Distributed Phased Arrays
- Creator
- Mghabghab, Serge R.
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Distributed microwave wireless systems have the potential to dramatically reshape wireless technologies because of the improvements in robustness, transmit power, antenna gain, spatial and temporal resolution, size, scalability, secrecy, flexibility, and cost they offer over single-platform wireless systems. Traditional wireless systems use a platform-centric model, where improving capabilities generally necessitates hardware retrofitting, which in many cases results in a bulky, expensive, and inefficient system. Distributed microwave wireless systems, however, require precise coordination to enable cooperative operation. The most highly synchronized systems coordinate at the wavelength level, supporting coherent distributed operations such as beamforming. The electrical states that need to be synchronized in coherent distributed arrays are mainly phase, frequency, and time, and the synchronization can be accomplished using multiple architectures. All coordination architectures fall into two categories: open loop and closed loop. While closed-loop systems use feedback from the destination, open-loop coherent distributed arrays must synchronize their electrical states by relying only on synchronization signals originating within the array rather than on feedback signals from the target. Although harder to implement, open-loop coherent arrays enable sensing and other sensitive communications applications where feedback from the target is not possible.

In this thesis, I focus on phase alignment and frequency synchronization for open-loop coherent distributed antenna arrays. Once the phase and frequency of all the nodes in the array are synchronized, it is possible to coherently beamform continuous-wave signals. When information is modulated on the transmitted continuous waves, time alignment between the nodes is also needed.
However, time alignment is generally less stringent to implement, since its requirements depend on the information rate rather than on the beamforming frequency, as is the case for phase and frequency synchronization. Beamforming at 1.5 GHz is demonstrated in this thesis using a two-node open-loop distributed array. In the presented architecture, the phases of the transmitting nodes are aligned using synchronization signals originating within the array, without any feedback from the destination. A centralized phase alignment approach is demonstrated, in which the secondary node(s) minimize their phase offsets relative to the primary node by locating the primary node and estimating the phase shift imparted by the relative motion of the nodes. A high-accuracy two-tone waveform is used to track the primary node using a cooperative approach. This waveform is tested with an adaptive architecture to overcome performance degradation due to weather conditions and to allow high ranging accuracy with a minimal spectral footprint. Wireless frequency synchronization is implemented using a centralized approach that allows phase tracking, such that the frequencies of the secondary nodes are locked to that of the primary node. Once the phase and frequency of all the nodes are synchronized, it is possible to coherently beamform in the far field, provided the synchronization is achieved with the desired accuracy. I evaluate the required localization accuracies and frequency synchronization intervals. More importantly, I experimentally demonstrate the first two-node open-loop distributed beamforming at 1.5 GHz in multiple scenarios where the nodes are in relative motion, showing the ability to coherently beamform in a dynamic array with no feedback from the destination.
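The sensitivity of open-loop beamforming to residual phase error can be sketched numerically. The snippet below is an illustrative calculation, not the dissertation's model: it converts a node-localization (ranging) error at the 1.5 GHz carrier into a carrier-phase error and computes the fraction of ideal coherent power an array of phasors still achieves:

```python
import numpy as np

C = 3e8            # speed of light, m/s
F_CARRIER = 1.5e9  # beamforming frequency used in the thesis, Hz

def range_error_to_phase(range_error_m, freq_hz=F_CARRIER):
    """Carrier-phase error (rad) caused by a node-localization error."""
    wavelength = C / freq_hz
    return 2.0 * np.pi * range_error_m / wavelength

def coherent_power_fraction(phase_errors_rad):
    """Fraction of the ideal N-node coherent power achieved when each
    node transmits with the given residual carrier-phase error."""
    phasors = np.exp(1j * np.asarray(phase_errors_rad, dtype=float))
    n = phasors.size
    return np.abs(phasors.sum()) ** 2 / n ** 2
```

At 1.5 GHz the wavelength is 20 cm, so for a two-node array a 5 mm ranging error maps to a 9 degree phase error and still preserves roughly 99% of the ideal coherent power; it is this wavelength-level (millimeter-scale) localization that makes phase synchronization the demanding part relative to time alignment.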
- Title
- ASSESSMENT OF CROSS-FREQUENCY PHASE-AMPLITUDE COUPLING IN NEURONAL OSCILLATIONS
- Creator
- Munia, Tamanna Tabassum Khan
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Oscillatory activity in the brain has been associated with a wide variety of cognitive processes, including decision making, feedback processing, and working memory control. The high temporal resolution provided by electroencephalography (EEG) enables the study of the variation of oscillatory power and coupling across time. Various forms of neural synchrony across frequency bands have been suggested as the mechanism underlying neural binding. Recently, a considerable amount of work has focused on phase-amplitude coupling (PAC), a form of cross-frequency coupling in which the amplitude of a high-frequency signal is modulated by the phase of low-frequency oscillations.

The existing methods for assessing PAC have certain limitations that can influence the final PAC estimates and the subsequent neuroscientific findings. These limitations include low frequency resolution, a narrowband assumption, and an inherent requirement of bandpass filtering. These methods are also limited to quantifying univariate PAC and cannot capture inter-areal cross-frequency coupling between different brain regions. Given the availability of multi-channel recordings, a multivariate analysis of phase-amplitude coupling is needed to accurately quantify the coupling across multiple frequencies and brain regions. Moreover, the existing PAC measures are usually stationary in nature, focusing on phase-amplitude modulations within a particular time window or over arbitrary sliding short time windows.
Therefore, there is a need for computationally efficient measures that can quantify PAC with high frequency resolution, track the variation of PAC with time in both bivariate and multivariate settings, and provide better insight into the spatially distributed dynamic brain networks across different frequency bands.

In this thesis, we introduce a PAC computation technique that aims to overcome some of these drawbacks, and we extend it to multi-channel settings for quantifying dynamic cross-frequency coupling in the brain. The main contributions of the thesis are threefold. First, we present a novel time-frequency-based PAC (t-f PAC) measure built on a high-resolution complex time-frequency distribution known as the Reduced Interference Distribution (RID)-Rihaczek. This t-f PAC measure overcomes the drawbacks associated with filtering by extracting instantaneous phase and amplitude components directly from the t-f distribution, and thus provides high-resolution PAC estimates. Second, we extend this measure to multi-channel settings to quantify inter-areal PAC across multiple frequency bands and brain regions, proposing a tensor-based representation of multi-channel PAC built on Higher Order Robust PCA (HoRPCA). The proposed method can identify the significantly coupled brain regions, along with the frequency bands involved in the observed couplings, while accurately discarding non-significant or spurious couplings. Finally, we introduce a matching pursuit based dynamic PAC (MP-dPAC) measure that computes PAC from the time- and frequency-localized atoms that best describe the signal, and thus captures the temporal variation of PAC in a data-driven manner. We evaluate the performance of the proposed methods on both synthesized and real EEG data collected during a cognitive control-related error processing study.
Based on our results, we posit that the proposed multivariate and dynamic PAC measures provide better insight into the spatial, spectral, and temporal dynamics of cross-frequency phase-amplitude coupling in the brain.
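As a point of reference for what these measures estimate, the classical single-channel PAC statistic, the mean vector length of Canolty et al., can be computed in a few lines. This is a baseline sketch only: the dissertation's RID-Rihaczek, HoRPCA, and matching pursuit machinery replaces the direct phase/amplitude extraction assumed here, and the 6 Hz / 60 Hz synthetic signal is an illustrative choice:

```python
import numpy as np

def pac_mean_vector_length(low_phase, high_amplitude):
    """Mean-vector-length PAC estimate: the magnitude of the average
    amplitude-weighted phasor. Near zero when the high-frequency
    amplitude is independent of the low-frequency phase."""
    return np.abs(np.mean(high_amplitude * np.exp(1j * low_phase)))

# Synthetic example: a 6 Hz phase modulating a 60 Hz envelope.
fs = 500.0
t = np.arange(0.0, 2.0, 1.0 / fs)
phase = 2.0 * np.pi * 6.0 * t                # low-frequency phase
coupled_amp = 1.0 + 0.8 * np.cos(phase)      # envelope locked to the phase
uncoupled_amp = np.ones_like(t)              # envelope ignores the phase

mvl_coupled = pac_mean_vector_length(phase, coupled_amp)
mvl_uncoupled = pac_mean_vector_length(phase, uncoupled_amp)
```

For the coupled envelope above the statistic evaluates to 0.4 (half the modulation depth), while the phase-independent envelope yields essentially zero, which is the contrast any PAC measure, univariate or multivariate, must resolve.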