Search results (61 - 80 of 87)
- Title
- Nonlinear Control of Robotic Fish
- Creator
- Castaño, Maria L.
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
In the past few decades, robots that propel and maneuver themselves like fish, known as robotic fish, have received substantial attention due to their efficiency, maneuverability, and lifelike features. Their agile locomotion can be partially attributed to their bio-inspired propulsion methods, which range from tail (caudal) and dorsal fins to paired pectoral fins. While these characteristics make robotic fish an attractive choice for a myriad of aquatic applications, their highly nonlinear, often under-actuated dynamics and actuator constraints present significant challenges in control design. The goal of this dissertation is to develop systematic model-based control approaches that guarantee closed-loop system stability, accommodate input constraints, and are computationally viable for robotic fish.

We first propose a nonlinear model predictive control (NMPC) approach for path-following of a tail-actuated robotic fish, where the control design is based on an averaged dynamic model. The bias and the amplitude of the tail oscillation are treated as physical variables to be manipulated and are related to the control inputs via a nonlinear map. A control projection method is introduced to accommodate the input constraints while minimizing the optimization complexity in solving the NMPC problem. Both simulation and experimental results on a tail-actuated robotic fish support the efficacy of the proposed approach and its advantages over alternative approaches. Although NMPC is a promising candidate for tracking control, its computational complexity poses significant challenges to its implementation on resource-constrained robotic fish. We thus propose a backstepping-based trajectory tracking control scheme that is computationally inexpensive and guarantees closed-loop stability. We demonstrate how the control scheme can be synthesized to handle input constraints and establish, via singular perturbation analysis, the ultimate boundedness of three tracking errors (2D position and orientation) despite the under-actuated nature of the robot. The effectiveness of this approach is supported by both simulation and experimental results on a tail-actuated robotic fish.

We then turn our attention to pectoral fin-actuated robotic fish. Despite their benefits in achieving agile maneuvering at low swimming speeds, the range constraints of pectoral fin movement present challenges in control. To overcome these challenges, we propose two different backstepping-based control approaches to achieve trajectory tracking and quick-maneuvering control, respectively. We first propose a scaling-based approach to develop a control-affine nonlinear average dynamic model for a pectoral fin-actuated robotic fish, which is validated via both simulation and experiments. The utility of the developed average dynamic model is then demonstrated via the synthesis of a dual-loop backstepping-based trajectory tracking controller. Cyclic actuation can often limit precise manipulation of the fin movements and the full exploitation of the maneuverability of pectoral fin-actuated robotic fish. To achieve quick velocity-maneuvering control, we propose a dual-loop control approach composed of a backstepping-based controller in the outer loop and a fin movement-planning algorithm in the inner loop. Simulation results are presented to demonstrate the performance of the proposed scheme via comparison with a nonlinear model predictive controller.
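The control projection idea above can be illustrated with a minimal sketch. The bounds below are hypothetical (the dissertation's actual tail-actuation limits are not given in this abstract); the point is simply that a desired bias/amplitude pair outside the feasible box is replaced by the closest feasible pair, so the optimizer never has to enforce the constraint itself.

```python
import numpy as np

# Hypothetical tail-oscillation limits (radians); illustrative only.
BIAS_MAX = 0.5                # maximum tail-bias magnitude
AMP_MIN, AMP_MAX = 0.05, 0.6  # feasible amplitude range

def project_inputs(bias, amp):
    """Project a desired (bias, amplitude) pair onto the feasible box.

    For box constraints, clipping each coordinate independently is exactly
    the Euclidean projection, so no iterative optimization is needed.
    """
    return (float(np.clip(bias, -BIAS_MAX, BIAS_MAX)),
            float(np.clip(amp, AMP_MIN, AMP_MAX)))
```

A feasible pair passes through unchanged; an infeasible one is clipped, e.g. `project_inputs(0.9, 1.0)` returns `(0.5, 0.6)`.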
- Title
- Nonlinear Extensions to New Causality and a NARMAX Model Selection Algorithm for Causality Analysis
- Creator
- da Cunha Nariyoshi, Pedro
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Although the concept of causality is intuitive, a universally accepted objective measure to quantify causal relationships does not exist. In complex systems where the internal mechanism is not well understood, it is helpful to estimate how different parts of the system are related. In the context of time-series data, Granger Causality (GC) has long been used as a way to quantify such relationships, having been successfully applied in fields as diverse as econometrics and neurology. Multiple Granger-like measures and extensions to GC have also been proposed. A recent measure developed to address limitations of GC, New Causality (NC), offers several advantages over GC, such as normalization and better proportionality with respect to internal mechanisms. However, NC is limited in scope by its seminal definition being based on parametric linear models. In this work, a critical analysis of NC is presented, NC is extended to a wide range of nonlinear models, and enhancements to a method of estimating nonlinear models for use with NC are reported.

A critical analysis is conducted to study the relationship between NC values and model estimation errors. It is shown that NC is much more sensitive to overfitting than GC. Although the variance of NC estimates is reduced by applying regularization techniques, NC estimates are also prone to bias. In this work, diverse case studies are presented showing the behavior of NC estimation in the presence of regularization, and a mathematical study of the sources of bias in the estimates is given.

For systems that cannot be modeled well by linear models, the seminal definition of NC performs poorly. This work gives examples in which nonlinear observation models cause NC values obtained with the seminal definition to behave contrary to intuitive expectations. A nonlinear extension of NC to all linear-in-parameters models is then developed and shown to address these limitations. The extension reduces to the seminal definition of NC for linear models and offers a flexible weighting mechanism to distribute contributions among nonlinear terms. The nonlinear extension is applied to a range of synthetic data and real EEG data with promising results.

The sensitivity of NC to parameter estimation errors demands that special care be taken when using NC with nonlinear models. As a complement to nonlinear NC, enhancements to an algorithm for nonlinear parametric model estimation are presented. The algorithm combines a genetic search element for regressor selection with a set-theoretic optimal bounded ellipsoid algorithm for parameter estimation. The enhancements to the genetic search make use of sparsity and information-theoretic measures to reduce the computational cost of the algorithm. Significant reductions are shown, and directions for further improvement of the algorithm are given. The main contributions of this work are a method for estimating causal relationships between signals using estimated nonlinear models, and a framework for estimating those relationships using an enhanced algorithm for model structure search and parameter estimation.
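For readers unfamiliar with Granger Causality, a minimal least-squares sketch of the bivariate, lag-p case follows. This is the generic textbook formulation (compare the residual variance of a model using only x's own past against one that also uses y's past), not the dissertation's implementation, and it omits intercepts and statistical testing.

```python
import numpy as np

def granger_causality(x, y, p=2):
    """Toy Granger causality from y to x with lag order p.

    Fits two least-squares predictors of x[t]: one from x's own past only
    (restricted) and one from the past of both x and y (full), and returns
    GC = ln(restricted residual variance / full residual variance).
    GC is near zero when y adds no predictive information.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    target = x[p:]
    # Column k (k = 1..p) holds the k-step-lagged series aligned with target.
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])

    def resid_var(design):
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        return float(np.var(target - design @ coef))

    v_restricted = resid_var(lags_x)                 # x's own past only
    v_full = resid_var(np.hstack([lags_x, lags_y]))  # past of x and y
    return float(np.log(v_restricted / v_full))
```

On synthetic data where x is driven by the past of y, GC(y→x) comes out large while GC(x→y) stays near zero, matching the intuition the abstract appeals to.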
- Title
- Novel Depth Representations for Depth Completion with Application in 3D Object Detection
- Creator
- Imran, Saif Muhammad
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Depth completion refers to interpolating a dense, regular depth grid from sparse and irregularly sampled depth values, often guided by high-resolution color imagery. The primary goal of depth completion is to estimate depth. In practice, methods are trained by minimizing an error between predicted dense depth and ground-truth depth, and are evaluated by how well they minimize this error. Here we identify a second goal, which is to avoid smearing depth across depth discontinuities. This second goal is important because it can improve downstream applications of depth completion such as object detection and pose estimation. However, we also show that the goal of minimizing error can conflict with the goal of eliminating depth smearing.

In this thesis, we propose two novel representations of depth that can encode depth discontinuities across object surfaces by allowing multiple depth estimates in the spatial domain. To learn these new representations, we propose carefully designed loss functions and show their effectiveness in deep neural network learning. We show how our representations can avoid inter-object depth mixing and also surpass state-of-the-art metrics for depth completion. The quality of ground-truth depth in real-world depth completion problems is another key challenge, both for learning and for accurate evaluation of methods. Ground-truth depth created from semi-automatic methods suffers from sparse sampling and errors at object boundaries. We show that the combination of these errors and the commonly used evaluation measure has promoted solutions that mix depths across boundaries in current methods. The thesis proposes alternative depth completion performance measures that reduce the preference for mixed depths and promote sharp boundaries.

The thesis also investigates whether additional points from depth completion methods can help in a challenging, high-level perception problem: 3D object detection. It shows the effect of different kinds of depth noise originating from depth estimates on detection performance, and proposes effective ways to reduce noise in the estimates and overcome architecture limitations. The method is demonstrated on both real-world and synthetic datasets.
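The stated conflict between error minimization and smearing can be illustrated numerically. When the true depth at a pixel near a discontinuity is ambiguous, the squared-error-optimal prediction is a mixed depth lying on neither surface, while an absolute-error criterion prefers committing to one surface. The mixture probabilities and depths below are illustrative, not from the thesis.

```python
import numpy as np

# Pixels near a discontinuity: 30% foreground at 2 m, 70% background at 10 m
# (illustrative numbers).
rng = np.random.default_rng(1)
gt = np.where(rng.random(100_000) < 0.3, 2.0, 10.0)

def rmse(pred):
    return float(np.sqrt(np.mean((gt - pred) ** 2)))

def mae(pred):
    return float(np.mean(np.abs(gt - pred)))

mixed = float(gt.mean())   # L2-optimal: a smeared depth of about 7.6 m
surface = 10.0             # commit to the majority surface
```

With these numbers the mixed prediction scores better on RMSE (roughly 3.7 vs 4.4 m) yet worse on MAE (roughly 3.4 vs 2.4 m): an RMSE-style measure rewards exactly the depth mixing the thesis argues against.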
- Title
- Novel simulation and data processing algorithms for eddy current inspection
- Creator
- Efremov, Anton
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
-
Eddy Current Testing (ECT) is a widely used technique in the area of Nondestructive Evaluation. It offers a cheap, fast, non-contact way of finding surface and subsurface defects in a conductive material. Due to the development of new designs of eddy current probe coils and the advance of model-based solutions to inverse problems in ECT, there is an emerging need for fast and accurate numerical methods for efficient modeling and processing of the data. This work contributes to the two directions of computational ECT: eddy current inspection simulation (the "forward problem") and analysis of the measured data for automated defect detection (the "inverse problem").

A new approach to simulating low-frequency electromagnetics in 3D is presented, based on a combination of a frequency-domain reduced vector potential formulation with a boundary condition based on a Dirichlet-to-Neumann operator. The equations are solved via the Finite Element Method (FEM), and a novel technique for the fast solution of the related linear system is proposed. The performance of the method is analyzed for a few representative ECT problems, and the numerical results are validated against analytic solutions, other simulation codes, and experimental data.

The inverse problem of interpreting measured ECT data is also a significant challenge in many practical applications. Very often, the defect indication in a measurement is very subtle due to the large contribution from the geometry of the test sample, making defect detection difficult. This thesis presents a novel approach to address this problem. The developed algorithm is applied to real problems of detecting defects under steel fasteners in aircraft geometries, using 2D data obtained from a raster scan of a multilayer structure with low-frequency eddy current excitation and GMR (Giant Magnetoresistive) sensors. The algorithm is also applied to data obtained from EC inspection of heat exchange tubes in a nuclear power plant.
- Title
- Operation of interior permanent magnet synchronous machines with fractional slot concentrated windings under both healthy and faulty conditions
- Creator
- Foster, Shanelle Nicole
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
-
Design for fault tolerance and early detection of insulation failure are critical for automotive and aerospace applications to ensure passenger safety. Permanent magnet machines can be designed to better withstand stator insulation failures. In this work, the performance of three fault-tolerant fractional slot concentrated winding machine designs experiencing stator winding insulation failure is evaluated. Two of the machines are designed with double-layer windings and one with single-layer windings. The single-layer fractional slot concentrated winding design is shown to be the most reliable; however, this design has the worst torque performance. A ripple reduction control technique is developed based on an analytical description of torque, and is shown to improve the torque performance of the single-layer fractional slot design.

Fault-tolerant design alone does not provide high reliability, since thermal stress from aging, overloading, cycling, or fast switching of the inverter causes most stator insulation failures. Early detection of incipient stator winding faults could avoid catastrophic machine failure, allow implementation of mitigation techniques to continue operation, reduce the occurrence of secondary faults, and allow adequate time to plan maintenance. In this work, two of the machines designed were manufactured with windings that allow the introduction of faults with three severity levels and varying degrees of incipient faults. Through a parametric identification method, the characteristic flux linkages of the machines are extracted under both healthy and faulty conditions. It is shown that incipient stator winding faults are reflected in the machine's characteristic parameters, and that these parametric changes are reflected in the phase voltage for current-controlled applications. Incipient stator winding faults can thus be detected online if accurate knowledge of the healthy machine parameters is available.
- Title
- PULSE VOLUME SENSING AND ANALYSIS FOR ADVANCED BLOOD PRESSURE MONITORING
- Creator
- Natarajan, Keerthana
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Approximately a quarter of the world's population is affected by high blood pressure (BP). It is a major risk factor for stroke and heart disease, which are leading causes of mortality. Management of hypertension could be improved by increasing the accuracy and convenience of BP measurement devices, as existing devices are not convenient or portable enough. In this work, we investigate three approaches to improve the accuracy and convenience of BP measurement.

First, a physiologic method was developed to advance central BP measurement. A patient-specific method was applied to estimate brachial BP levels from a cuff pressure waveform obtained during conventional deflation via a nonlinear arterial compliance model. A physiologically inspired method was then employed to extract the PVP waveform from the same waveform via ensemble averaging and calibrate it to the brachial BP levels. A method based on a wave reflection model was thereafter employed to define a variable transfer function, which was applied to the calibrated waveform to derive central BP. This method was evaluated against invasive central BP measurements from patients. The method yielded central systolic, diastolic, and pulse pressure bias and precision errors of −0.6 to 2.6 and 6.8 to 9.0 mmHg. The conventional oscillometric method produced similar bias errors but precision errors of 8.2 to 12.5 mmHg (p ≤ 0.01). The new method can derive central BP more reliably than some current non-invasive devices and in the same way as traditional cuff BP.

We then developed an iPhone X application to measure cuff-less BP via the "oscillometric finger pressing method". The user presses her fingertip on both the front camera and the screen to increase the external pressure on the underlying artery, while the application measures the resulting variable-amplitude blood volume oscillations via the camera and the applied pressure via the strain gauge array under the screen. The application also visually guides the fingertip placement and actuation, and then computes BP from the measurements just like many automatic cuff devices. We tested the application, along with a finger cuff device, against a standard cuff device. The application yielded bias and precision errors of −4.0 and 11.4 mmHg for systolic BP and −9.4 and 9.7 mmHg for diastolic BP (n = 18). These errors were comparable to the finger cuff device errors. This proof-of-concept study surprisingly indicates that cuff-less and calibration-free BP monitoring may be feasible with many existing and forthcoming smartphones.

Finally, we developed easy-to-understand models relating photoplethysmography (PPG) waveform features to BP changes (after a single cuff calibration) and determined whether they provide added value in BP measurement accuracy. Stepwise linear regression was employed to create parsimonious models for predicting the intervention-induced BP changes from popular PPG waveform features, pulse arrival time (PAT, the time delay between the ECG R-wave and the PPG foot), and subject demographics. The finger b-time (PPG foot to minimum second derivative time) and ear STT (PPG amplitude divided by maximum derivative), when combined with PAT, reduced the systolic BP change prediction RMSE of reference models by 6-7% (p < 0.022). The ear STT together with the pulse width reduced the diastolic BP change prediction RMSE of the reference model by 13% (p = 0.003). Hence, PPG fast upstroke time intervals can offer some added value in cuff-less measurement of BP changes.
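The abstract notes that the smartphone application "computes BP from the measurements just like many automatic cuff devices". A common textbook version of that computation is the fixed-ratio oscillometric method sketched below; the ratio values and the synthetic envelope are illustrative, not those used in the dissertation.

```python
import numpy as np

def fixed_ratio_bp(pressure, amp, sys_ratio=0.55, dia_ratio=0.70):
    """Toy fixed-ratio oscillometric BP estimate.

    `pressure` is an ascending sweep of applied pressure (mmHg) and `amp`
    the blood-volume oscillation amplitude at each pressure. Mean arterial
    pressure (MAP) is taken at the envelope peak; systolic/diastolic are
    read where the envelope falls to a fixed fraction of the peak on the
    high-/low-pressure side of MAP.
    """
    i = int(np.argmax(amp))
    map_est = float(pressure[i])
    # Above MAP the envelope decays with pressure; reverse so that the
    # x-coordinates passed to np.interp are increasing.
    sys_est = float(np.interp(sys_ratio * amp[i],
                              amp[i:][::-1], pressure[i:][::-1]))
    dia_est = float(np.interp(dia_ratio * amp[i],
                              amp[:i + 1], pressure[:i + 1]))
    return sys_est, map_est, dia_est
```

A synthetic Gaussian envelope peaked at 90 mmHg yields estimates of roughly 109/90/75 mmHg, illustrating the shape of the computation rather than clinical accuracy.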
- Title
- Privacy Characterization and Quantification in Data Publishing
- Creator
- Ibrahim, Mohamed Hossam Afifi
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
The increasing interest in collecting and publishing large amounts of individuals' data to the public for purposes such as medical research, market analysis, and economic measures has created major privacy concerns about their sensitive information. To deal with these concerns, many Privacy-Preserving Data Publishing (PPDP) schemes have been proposed in the literature. However, they lack a proper privacy characterization. As a result, the existing schemes fail to provide reliable privacy loss quantification metrics and thus fail to correctly model the utility-privacy tradeoff.

In this thesis, we first present a novel multi-variable privacy characterization model. Based on this model, we are able to analyze the prior and posterior adversarial beliefs about attribute values of individuals. We then show that privacy should not be measured with a single metric, and demonstrate how doing so can result in privacy misjudgment. We propose two different metrics for quantification of privacy loss. Using these metrics and the proposed framework, we evaluate some of the most well-known PPDP techniques. The proposed metrics and data publishing framework are then used to build a negotiation-based data disclosure model to jointly address the utility requirements of the Data User (DU) and the privacy and, possibly, the monetary requirements of the Data Owner (DO). The data utility is redefined based on the DU's rather than the DO's perspective. Based on the proposed model, we present two data disclosure scenarios that satisfy a given privacy constraint while achieving the DU's required data utility level; the variation in a DO's flat or variable monetary rate objective motivates the two scenarios. This model fills the gap between the existing theoretical work and the ultimate goal of practicality.

The data publisher is required to provide guarantees that users' records cannot be de-identified from published datasets. This reflects directly on the levels of data generalization and the techniques by which data is anonymized. While Machine Learning (ML), one of the most revolutionary technologies nowadays, relies mainly on data, the more generalized the data is, the less accurate the ML model becomes. Although this is a well-understood fact, we lack a model that quantifies such degradation in an ML model's accuracy as a consequence of the privacy constraints. To model this tradeoff, we provide the first framework to quantify not only the privacy losses in data publishing but also the utility losses in machine learning applications that result from meeting the privacy constraints. To further expand our research and reflect its applicability to real industry applications, the proposed tradeoff management framework is then applied to a large-scale employee dataset from Barracuda Networks, a leading cybersecurity company. A privacy-preserving Account Takeover (ATO) detection algorithm is then proposed to predict the fraudulence of email account logins and thus detect possible ATO attacks. The results show variations in model accuracy in binary classification of logins when trained on different datasets that satisfy different privacy constraints. The proposed framework enables a data owner to quantitatively manage the utility-privacy tradeoff and provides deeper insights about the value of the released data as well as the potential privacy losses upon publishing.
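The prior/posterior adversarial-belief analysis can be made concrete with a deliberately simplified single-attribute sketch: the adversary's posterior belief is the frequency of a sensitive value inside the target's equivalence class in the published table, and one candidate loss metric is the posterior-minus-prior gap. The field names and the metric below are illustrative assumptions; the dissertation's multi-variable model and metrics are richer.

```python
def posterior_belief(published, quasi_id, sensitive_value):
    """Adversary's posterior belief that a target whose (generalized)
    quasi-identifier equals `quasi_id` holds `sensitive_value`: the
    value's frequency within the target's equivalence class."""
    eq_class = [r["sensitive"] for r in published if r["quasi"] == quasi_id]
    if not eq_class:
        return 0.0
    return eq_class.count(sensitive_value) / len(eq_class)

def privacy_loss(published, prior, quasi_id, sensitive_value):
    """One simple loss metric: how far the published table moved the
    adversary's belief away from the prior."""
    return posterior_belief(published, quasi_id, sensitive_value) - prior
```

For a class of three records in which one holds the sensitive value, the posterior is 1/3; against a population prior of 0.1, the disclosure raised the adversary's belief by about 0.23.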
- Title
- Quantitative methods for calibrated spatial measurements of laryngeal phonatory mechanisms
- Creator
- Ghasemzadeh, Hamzeh
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
-
The ability to perform measurements is an important cornerstone of, and a prerequisite for, any quantitative research. Measurements allow us to quantify the inputs and outputs of a system, and then to express their relationships using concise mathematical expressions and models. Those models in turn enable us to understand how a target system works and to predict its output for changes in the system parameters. Conversely, models enable us to determine the proper parameters of a system for achieving a certain output. In the context of voice science research, variations in the parameters of the phonatory system can be attributed to individual differences. Thus, accurate models would enable us to account for individual differences during diagnosis and to make reliable predictions about the likely outcome of different treatment options. Analysis of the vibration of the vocal folds using high-speed videoendoscopy (HSV) is an ideal candidate for constructing such computational models. However, conventional images are not spatially calibrated and cannot be used for absolute spatial measurements. This dissertation is focused on developing the methodologies required for calibrated spatial measurements from in-vivo HSV recordings.

Specifically, two different approaches for calibrated horizontal measurements of HSV images are presented. The first, the indirect approach, is based on the registration of a specific attribute of a common object (e.g., the size of a lesion) from a calibrated intraoperative still image to its corresponding non-calibrated in-vivo HSV recording. This approach does not require specialized instruments and can be implemented in many clinical settings; however, its validity depends on several assumptions, and violation of those assumptions can lead to significant measurement errors. The second, the direct approach, is based on a laser-projection flexible fiberoptic endoscope and enables accurate calibrated spatial measurements. This dissertation evaluates the accuracy of the first approach indirectly, by studying its underlying fundamental assumptions, and the accuracy of the second approach directly, using benchtop experiments with different surfaces, different working distances, and different imaging angles.

The main contributions of this dissertation are the following: (1) a formal treatment of indirect horizontal calibration is presented, the assumptions governing its validity and reliability are discussed, and a battery of tests is presented that can indirectly assess the validity of those assumptions in laryngeal imaging applications; (2) pre- and post-surgery recordings from patients with vocal fold mass lesions are used as a test bench for the developed indirect calibration approach; in that regard, a full solution is developed for measuring the calibrated velocity of the vocal folds, which is then used to investigate post-surgery changes in the closing velocity of the vocal folds in these patients; (3) a method for calibrated vertical measurement with a laser-projection fiberoptic flexible endoscope is developed and evaluated at different working distances, at different imaging angles, and on a 3D surface; (4) a detailed analysis of the nonlinear image distortion of a fiberoptic flexible endoscope is presented, and the effects of the imaging angle and the spatial location of an object on the magnitude of that distortion are studied and quantified; (5) a method for calibrated horizontal measurement with a laser-projection fiberoptic flexible endoscope is developed and evaluated at different working distances, at different imaging angles, and on a 3D surface.
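At its core, the indirect approach reduces to transferring a known length from the calibrated still image to the HSV frame, as in the sketch below. The numbers are hypothetical, and the sketch assumes exactly what the dissertation scrutinizes: that the reference object and the measured motion lie in the same imaging plane at the same working distance.

```python
def mm_per_pixel(ref_length_mm, ref_length_px):
    """Indirect horizontal calibration: a common object (e.g. a lesion)
    whose true length is known from a calibrated intraoperative image
    spans `ref_length_px` pixels in the non-calibrated in-vivo HSV frame.
    The ratio gives the HSV frame's scale under the planarity assumption."""
    return ref_length_mm / ref_length_px

def edge_velocity_mm_s(displacement_px_per_frame, frame_rate_hz, scale_mm_per_px):
    """Calibrated vocal-fold edge velocity from per-frame pixel displacement."""
    return displacement_px_per_frame * scale_mm_per_px * frame_rate_hz
```

For example, a 3 mm lesion spanning 60 px gives 0.05 mm/px; a 2 px/frame edge displacement recorded at 4000 fps then corresponds to about 400 mm/s.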
- Title
- REACTIVE ION ENHANCED MAGNETRON SPUTTERING OF NITRIDE THIN FILMS
- Creator
- Talukder, Al-Ahsan
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Magnetron sputtering is a popular vacuum plasma coating technique used for depositing metals, dielectrics, semiconductors, alloys, and compounds onto a wide range of substrates. In this work, we present two popular types of magnetron sputtering, pulsed DC and RF magnetron sputtering, for depositing piezoelectric aluminum nitride (AlN) thin films with high Young's modulus. The effects of important process parameters on the plasma I-V characteristics, the deposition rate, and the properties of the deposited AlN films, including their Young's modulus, are studied comprehensively. Scanning electron microscope imaging revealed c-axis-oriented columnar growth of AlN. The performance of surface acoustic devices utilizing the AlN films deposited by magnetron sputtering is also presented, which confirms the differences in quality and microstructure between the pulsed DC and RF sputtered films. The RF sputtered AlN films showed a denser microstructure, with smaller grains and a smoother surface than the pulsed DC sputtered films; however, the deposition rate of RF sputtering is about half that of the pulsed DC process.

We also present a novel ion-source-enhanced pulsed DC magnetron sputtering process for depositing high-quality nitrogen-doped zinc telluride (ZnTe:N) thin films. Ion-source-enhanced magnetron sputtering provides an increased deposition rate, efficient N-doping, and improved electrical, structural, and optical properties compared with traditional magnetron sputtering. Ion-source-enhanced deposition leads to ZnTe:N films with smaller lattice spacing and wider X-ray diffraction peaks, which indicates denser films with smaller crystallites embedded in an amorphous matrix.
- Title
- ROBUST HYSTERESIS COMPENSATION FOR NANOPOSITIONING CONTROL
- Creator
- Al-Nadawi, Yasir Khudhair
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Piezoelectric and other smart material-based actuators are widely used in micro- and nano positioning applications. However, the intrinsic hysteretic behavior of these actuators deteriorates their tracking performance. This dissertation, composed of three parts, is focused on nonlinear control methods for compensating the hysteresis and achieving high-precision control in the presence of model uncertainties. An inversion-based adaptive conditional servocompensator (ACS) is first proposed,...
Piezoelectric and other smart material-based actuators are widely used in micro- and nano-positioning applications. However, the intrinsic hysteretic behavior of these actuators deteriorates their tracking performance. This dissertation, composed of three parts, is focused on nonlinear control methods for compensating the hysteresis and achieving high-precision control in the presence of model uncertainties. An inversion-based adaptive conditional servocompensator (ACS) is first proposed, where the nanopositioning system is represented as a linear system preceded by a hysteresis nonlinearity modeled with a Modified Prandtl-Ishlinskii (MPI) operator. With an approximate inverse MPI operator as a compensator, the resulting system takes a semi-affine form. The proposed controller consists of two parts: a continuously-implemented sliding mode control (SMC) law followed by an ACS. The hysteresis inversion error is treated as a matched disturbance, and its analytical bound is used to minimize the conservativeness of the SMC design. Under mild assumptions, the well-posedness and periodic stability of the closed-loop system are established. The second part of the dissertation focuses on designing an inversion-free ACS to achieve precise tracking control of systems with hysteresis, without requiring explicit inversion of the hysteresis. To facilitate the control design, the MPI operator is rearranged into a form comprising three parts: a linear term, a nominal hysteretic term represented by a classical Prandtl-Ishlinskii (PI) operator, and a hysteretic perturbation. The bound on the hysteretic perturbation is further derived based on the parameter uncertainty of the MPI operator. To properly ``cancel'' the nominal hysteresis effect without inversion, a technique involving a low-pass filter is introduced.
It is shown that, with persistent excitation, the closed-loop variables are ultimately bounded and the tracking error approaches a neighborhood of zero, which can be made arbitrarily small via the choice of the SMC boundary layer width parameter and the servocompensator order. In the third part, an output feedback-based hysteresis compensation approach is developed using dynamic inversion and extended high-gain observers. With mild assumptions on the properties of the hysteresis nonlinearity, the system can be represented as an uncertain, non-affine, nonlinear system containing a hysteretic perturbation. Dynamic inversion is used to deal with the non-affine input, the uncertainties, and the hysteretic perturbation, where the latter two are estimated using an extended high-gain observer. Analysis of the closed-loop system under output feedback shows that the tracking error converges to a small neighborhood of the origin, which can be made arbitrarily small via proper choice of the time-scale parameters of the dynamic inversion and the observer, respectively. The efficacy of the three proposed controllers is verified experimentally on a commercial nanopositioning device under different types of periodic reference inputs, via comparison with multiple inversion-based and inversion-free approaches.
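The classical PI operator referenced above is a weighted superposition of play (backlash) operators. A minimal discrete-time sketch follows; the thresholds and weights are illustrative placeholders, not identified actuator parameters, and the deadzone envelope that makes the operator "modified" is omitted:

```python
import numpy as np

def play_operator(u, r, y0=0.0):
    """Discrete play (backlash) operator with threshold r: the output
    follows the input once it escapes a band of half-width r."""
    y = np.empty_like(u)
    prev = y0
    for k, uk in enumerate(u):
        prev = max(uk - r, min(uk + r, prev))
        y[k] = prev
    return y

def pi_operator(u, thresholds, weights, y0=0.0):
    """Classical Prandtl-Ishlinskii operator: a weighted superposition
    of play operators with increasing thresholds."""
    return sum(w * play_operator(u, r, y0) for w, r in zip(weights, thresholds))

# illustrative run on one period of a sinusoidal input
u = np.sin(np.linspace(0.0, 2.0 * np.pi, 400))
y = pi_operator(u, thresholds=[0.0, 0.1, 0.3], weights=[1.0, 0.5, 0.25])
```

Plotting y against u traces the familiar hysteresis loop; the inversion-based controller in the first part relies on an (approximate) inverse of such an operator.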
- Title
- Reliable and efficient communications in wireless sensor networks
- Creator
- Abdelhakim, Mai M.
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
The wireless sensor network (WSN) is a key technology for a wide range of military and civilian applications. Limited by the energy resources and processing capabilities of the sensor nodes, reliable and efficient communications in wireless sensor networks are challenging, especially when the sensors are deployed in hostile environments. This research aims to improve the reliability and efficiency of time-critical communications in WSNs, under both benign and hostile environments. We start with the wireless sensor network with mobile access points (SENMA), where the mobile access points traverse the network to collect information from individual sensors. Due to its routing simplicity and energy efficiency, SENMA has attracted considerable attention from the research community. Here, we study reliable distributed detection in SENMA under Byzantine attacks, where some authenticated sensors are compromised to report fictitious information. We consider the q-out-of-m rule, which is popular in distributed detection and can achieve a good trade-off between the miss detection probability and the false alarm rate. However, a major limitation of this rule is that the optimal scheme parameters can only be obtained through exhaustive search. By exploiting the linear relationship between the scheme parameters and the network size, we propose simple but effective sub-optimal linear approaches. Then, for better flexibility and scalability, we derive a near-optimal closed-form solution based on the central limit theorem. It is proved that the false alarm rate of the q-out-of-m scheme diminishes exponentially as the network size increases, even if the percentage of malicious nodes remains fixed. This implies that large-scale sensor networks are more reliable under malicious attacks.
To further improve the performance under time-varying attacks, we propose an effective malicious node detection scheme for adaptive data fusion; the proposed scheme is analyzed using an entropy-based trust model and is shown to be optimal from the information-theoretic point of view. Next, we observe that, while simplifying the routing process, a major limitation of SENMA is that data transmission is limited by the physical speed of the mobile access points (MAs) and the length of their trajectory, resulting in low throughput and large delay. To solve this problem, we propose a novel mobile access coordinated wireless sensor network (MC-WSN) architecture. The proposed MC-WSN can provide reliable and time-sensitive information exchange through hop number control, which is achieved by active network development and topology design. We discuss the optimal topology design for MC-WSN such that the average number of hops between the source and its nearest sink is minimized, and analyze the performance of MC-WSN in terms of throughput, stability, delay, and energy efficiency by exploiting tools from information theory, queuing theory, and a radio energy dissipation model. It is shown that MC-WSN achieves much higher throughput and significantly lower delay and energy consumption than SENMA. Finally, motivated by the observation that the number of hops in data transmission has a direct impact on the network performance, we introduce the concept of N-hop networks. Based on the N-hop concept, we propose a unified framework for wireless networks and discuss general network design criteria. The unified framework reflects the convergence of centralized and ad-hoc networks. It includes all existing network models as special cases, and makes the analytical characterization of the network performance more tractable. Further study on N-hop networks will be conducted in our future research.
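The q-out-of-m rule itself is one line; the Monte-Carlo sketch below (all numbers hypothetical: the per-sensor false alarm probability, the fraction of always-alarming Byzantine nodes, and the linear scaling q = 0.4m) illustrates the qualitative trend claimed above, namely that the fused false alarm rate shrinks as the network grows even when the fraction of compromised nodes stays fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

def q_out_of_m(reports, q):
    """Fusion rule: declare a detection iff at least q of the m
    one-bit sensor reports are 1."""
    return int(np.sum(reports) >= q)

def false_alarm_rate(m, q, pf=0.05, frac_bad=0.1, trials=20000):
    """Monte-Carlo false alarm rate under H0 (no target) when a fixed
    fraction of Byzantine sensors always report 1."""
    n_bad = int(frac_bad * m)
    hits = 0
    for _ in range(trials):
        honest = rng.random(m - n_bad) < pf        # honest false alarms
        reports = np.concatenate([honest, np.ones(n_bad, dtype=bool)])
        hits += q_out_of_m(reports, q)
    return hits / trials

small_net = false_alarm_rate(m=20, q=8)     # q = 0.4 * m
large_net = false_alarm_rate(m=200, q=80)   # same linear scaling
```

Even with 10% of the nodes compromised, the larger network's false alarm rate does not exceed the smaller network's, in line with the exponential-decay result cited above.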
- Title
- Robotic fish : development, modeling, and application to mobile sensing
- Creator
- Wang, Jianxun (Mechatronic engineer)
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
Robotic fish are underwater robots that emulate the locomotion of live fish through actuated fin and/or body movements. They are of increasing interest due to their potential applications, such as aquatic environmental monitoring and robot-animal interactions. In this work, several bio-inspired robotic fish prototypes have been developed that make use of periodic tail motions. A dynamic model for a tail-actuated robotic fish is presented by merging rigid-body dynamics with Lighthill's large-amplitude elongated-body theory. The model is validated with extensive experiments conducted on a robotic fish prototype. The role of incorporating the body motion in evaluating the tail-generated hydrodynamic forces is assessed, which shows that ignoring the body motion (as often done in the literature) results in a significant overestimate of the thrust force and robot speed. By exploiting the strong correlation between the angle of attack and the tail-beat bias, a computationally efficient approach is further proposed to adapt the drag coefficients of the robotic fish. It has been recognized that the flexibility of the body and fin structures has a pronounced impact on the swimming performance of biological and robotic fish. To analyze and utilize this trait, a novel dynamic model is developed for a robotic fish propelled by a flexible tail actuated at the base. The tail is modeled with multiple rigid segments connected in series through rotational springs and dampers. For comparison, a model using linear beam theory is created to capture the beam dynamics. Experimental results show that the two models have almost identical predictions when the tail undergoes small deformation, but only the proposed multi-segment model matches the experimental measurements closely for all tail motions. Motivated by the need for system analysis and efficient control of robotic fish, averaging of the robots' dynamics is of interest.
For dynamic models of robotic fish, however, classical or geometric averaging typically cannot produce an average model that is accurate and at the same time amenable to analysis or control design. In this work, a novel averaging approach for tail-actuated robotic fish dynamics is proposed. The approach consists of scaling the force and moment terms and then conducting classical averaging. Numerical investigation reveals that the scaling function for the force terms is a constant independent of tail-beat patterns, while the scaling function for the moment term depends linearly on the tail-beat bias. Existence and local stability of the equilibria for the average model are further analyzed. Finally, as an illustration of the utility of the average model, a semi-analytical framework is presented for obtaining steady turning parameters. Sampling and reconstruction of a physical field using mobile sensor networks have recently received significant interest. In this work, an adaptive sampling framework is proposed to reconstruct aquatic environmental fields (e.g., temperature, or biomass of harmful algal blooms) using schools of robotic sensor platforms. In particular, it is assumed that the field of interest can be approximated by a low-rank matrix, which is exploited for successive expansion of the sampling area and analytical reconstruction of the field. For comparison, an Augmented Lagrange Multiplier optimization approach is also taken to complete the matrix reconstruction using a limited number of samples. Simulation results show that the proposed approach is more computationally efficient and requires shorter travel distances for the robots.
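The low-rank assumption behind the field reconstruction can be illustrated with a toy matrix completion sketch. This is a naive alternating projection, not the adaptive sampling scheme or the Augmented Lagrange Multiplier method of the dissertation, and the field size, rank, and sampling ratio are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_r_approx(X, r):
    """Best rank-r approximation via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def complete(F_obs, mask, r, iters=300):
    """Naive low-rank completion: alternate a rank-r projection with
    re-imposing the observed entries."""
    X = np.where(mask, F_obs, 0.0)
    for _ in range(iters):
        X = rank_r_approx(X, r)
        X[mask] = F_obs[mask]
    return X

# synthetic rank-2 "field" sampled at roughly 70% of its grid points
F = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 40))
mask = rng.random(F.shape) < 0.7
F_hat = complete(np.where(mask, F, 0.0), mask, r=2)
```

With far more samples than degrees of freedom, the unobserved entries are filled in accurately, which is the property the adaptive sampling framework exploits.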
- Title
- SENSOR AND SENSORLESS SPEED CONTROL OF PERMANENT MAGNET SYNCHRONOUS MOTOR USING EXTENDED HIGH-GAIN OBSERVER
- Creator
- Alfehaid, Abdullah Ahmad
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Control of the speed, as well as shaping of the speed transient response, of a surface-mounted Permanent Magnet Synchronous Motor (PMSM) is achieved using the method of feedback linearization and an extended high-gain observer. To recover the performance of feedback linearization, an extended high-gain observer is utilized to estimate both the speed of the motor and the disturbance present in the system. The observer is designed based on a reduced model of the PMSM, which is obtained through the application of singular perturbation theory. The motor parameters are assumed to be uncertain, with only their nominal values known. The external load torque is also assumed to be unknown and time-varying, but bounded. Stability analysis of the output feedback system is given. Experimental results confirm the performance and robustness of the proposed controller. We also compare our proposed control method to a cascaded Proportional Integral (PI) speed controller. Then, we show the extension of this control method to the problem of sensorless control of PMSMs. The proposed sensorless control method is a back-emf-based control scheme; therefore, we design a high-gain back-emf observer in the α-β coordinates. Next, we transform the model of the PMSM to the d-q coordinates using the estimated position, and close the loop around the currents with relatively fast PI controllers. After that, we reduce the model of the PMSM and design a third-order Q-PLL extended high-gain observer as well as the speed feedback controller. Then, we perform a rigorous stability analysis of the closed-loop system. Finally, we show simulation and experimental results to verify the performance and robustness of the proposed controller.
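The core idea of the extended high-gain observer can be shown on a toy second-order mechanical model rather than the full PMSM: the state is augmented with the unknown lumped disturbance, and the observer gains scale with inverse powers of a small parameter ε. Every number below is illustrative, not a motor parameter:

```python
import numpy as np

def run_ehgo(T=2.0, dt=1e-4, eps=0.01, d=0.5, u=0.0):
    """Toy plant: theta' = omega, omega' = u + d, with only theta measured.
    The extended high-gain observer estimates theta, omega, and the lumped
    disturbance d; the gains a1/eps, a2/eps^2, a3/eps^3 place the error
    dynamics poles at -1/eps (here via (s + 1)^3)."""
    a1, a2, a3 = 3.0, 3.0, 1.0       # s^3 + 3s^2 + 3s + 1 = (s + 1)^3
    theta, omega = 0.0, 0.0          # plant states
    th, om, dh = 0.0, 0.0, 0.0       # observer states (incl. disturbance)
    for _ in range(int(T / dt)):
        e = theta - th               # output (position) estimation error
        # plant, forward Euler
        theta, omega = theta + dt * omega, omega + dt * (u + d)
        # observer, forward Euler
        th, om, dh = (th + dt * (om + a1 / eps * e),
                      om + dt * (u + dh + a2 / eps ** 2 * e),
                      dh + dt * (a3 / eps ** 3 * e))
    return omega, om, dh

omega_true, omega_hat, d_hat = run_ehgo()
```

With a constant disturbance the extended state converges to it exactly, which is the mechanism by which the controller "recovers" the feedback-linearization performance despite the uncertain parameters.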
- Title
- Safe Control Design for Uncertain Systems
- Creator
- Marvi, Zahra
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
This dissertation investigates the problem of safe control design for systems under model and environmental uncertainty. Reinforcement learning (RL) provides an interactive learning framework in which the optimal controller is sequentially derived based on an instantaneous reward. Although powerful, safety considerations remain a barrier to the wide deployment of RL algorithms in practice. To overcome this problem, we propose an iterative safe off-policy RL algorithm. The cost function that encodes the designer's objectives is augmented with a control barrier function (CBF) to ensure safety and optimality. The proposed formulation provides look-ahead, proactive safety planning, in which safety is planned and optimized along with the performance to minimize intervention with the optimal controller. Extensive safety and stability analysis is provided, and the proposed method is implemented using an off-policy algorithm without requiring complete knowledge of the system dynamics. This line of research is then further extended to guarantee safety and stability even during the data collection and exploration phases, in which random noisy inputs are applied to the system. However, satisfying the safety of actions when little is known about the system dynamics is a daunting challenge. We present a novel RL scheme that ensures the safety and stability of linear systems during the exploration and exploitation phases. This is obtained through concurrent model learning and control, in which an efficient learning scheme is employed to prescribe the learning behavior. This characteristic is then employed to apply only safe and stabilizing controllers to the system. First, the prescribed errors are employed in a novel adaptive robustified control barrier function (AR-CBF), which guarantees that the states of the system remain in the safe set even when the learning is incomplete.
Therefore, the noisy input in the exploratory data collection phase and the optimal controller in the exploitation phase are minimally altered such that the AR-CBF criterion is satisfied and, therefore, safety is guaranteed in both phases. It is shown that, under the proposed prescribed RL framework, the model learning error is a vanishing perturbation to the original system; therefore, a stability guarantee is also provided even during exploration, when noisy random inputs are applied to the system. A learning-enabled barrier-certified safe controller for systems that operate in a shared and uncertain environment is then presented. A safety-aware loss function is defined and minimized to learn the uncertain and unknown behavior of external agents that affect the safety of the system. The loss function is defined based on the safe-set error, instead of the system model error, and is minimized for both current samples and past samples stored in memory, to assure a fast and generalizable learning algorithm for approximating the safe set. The proposed model learning and CBF are then integrated to form a learning-enabled zeroing CBF (L-ZCBF), which employs the approximate trajectory information of the external agents provided by the learned model but shrinks the safety boundary in case of an imminent safety violation, using instantaneous sensory observations. It is shown that the proposed L-ZCBF assures the safety guarantees during learning, even in the face of an inaccurate or simplified approximation of the external agents, which is crucial in highly interactive environments. Finally, the cooperative capability of agents in a multi-agent environment is investigated for the sake of safety guarantees. CBFs and information-gap theory are integrated to obtain robust safe controllers for multi-agent systems with different levels of measurement accuracy.
A cooperative framework for the construction of CBFs for every pair of agents is employed to maximize the horizon of uncertainty under which the safety of the overall system is satisfied. Information-gap theory is leveraged to determine the contribution and share of each agent in the construction of the CBFs. This results in the highest possible robustness against measurement uncertainty. By employing the proposed approach in constructing the CBFs, a higher horizon of uncertainty can be safely tolerated, and even the failure of one agent in gathering accurate local data can be compensated for by cooperation between the agents. The effectiveness of the proposed methods is extensively examined in simulation results.
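The minimal-intervention idea behind CBF-based safety filtering reduces to one line for a scalar toy system. This is the generic CBF quadratic program, not the AR-CBF or L-ZCBF variants developed in the dissertation, and the system, barrier, and gains are hypothetical:

```python
def cbf_filter(x, u_nom, c=1.0, alpha=5.0):
    """Safety filter for the scalar system x' = u with barrier
    h(x) = c - x (safe set {x <= c}).  The QP
        min_u (u - u_nom)^2   s.t.   h'(x)*u + alpha*h(x) >= 0
    has the closed-form solution u = min(u_nom, alpha*(c - x))."""
    return min(u_nom, alpha * (c - x))

# nominal controller pushes toward x = 2; the barrier keeps x <= 1
x, dt = 0.0, 1e-3
for _ in range(5000):
    x += dt * cbf_filter(x, u_nom=2.0 - x)
```

The filtered input equals the nominal one whenever the barrier constraint is inactive, which is exactly the "minimal intervention with the optimal controller" property emphasized above.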
- Title
- Secure communication system design for wireless networks
- Creator
- Ling, Qi
- Date
- 2007
- Collection
- Electronic Theses & Dissertations
- Title
- Soft Pressure Sensing System with Application to Underwater Sea Lamprey Detection
- Creator
- Shi, Hongyang
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Species-specific monitoring offers fundamental tools for natural resource management and conservation, but requires techniques that target species-specific traits or markers. The sea lamprey, a destructive invasive species in the Great Lakes of North America and a conservation target in Europe, is among very few fishes that rely on oral suction during migration and spawning. Yet attachment by suction has not been exploited for sea lamprey control or conservation. This dissertation is focused on advancing soft pressure sensing systems for underwater sea lamprey detection. First, a pressure sensing panel based on commercial vacuum sensors is developed to measure the suction dynamics of juvenile and adult sea lampreys, such as pressure amplitude, frequency, and suction duration. Measurements from an array of sensors indicate that the suction pressure distribution is largely uniform across the mouths of lampreys, and that the suction pressure does not differ between static and flowing water conditions when the water velocity is lower than 0.45 m/s. Such biological information could inform the design of new systems to monitor the behavior, distribution, and abundance of lampreys. Based on the measured biological information, two types of soft pressure sensors are proposed for underwater sea lamprey detection. First, a soft capacitive pressure sensor is developed, which is made using a low-cost screen-printing process and can reliably detect both positive and negative pressures. The sensor is made with a soft dielectric layer and stretchable conductive polymer electrodes. Air gaps are designed and incorporated into the dielectric layer to significantly enhance the sample deformation and the response to pressure, especially negative pressure.
This soft capacitive pressure sensor can successfully detect non-conductive objects, such as plastic blocks compressed against it or a rubber suction cup attached to it; however, it does not work well underwater, since water causes a parasitic capacitance on the sensor that interferes with the detection. The second sensor we present is a low-cost and efficient piezoresistive pressure sensor, which consists of a piezoresistive film patch matrix sandwiched between two layers of perpendicular copper tape electrodes. Here, the measured two-point resistance is not equal to the actual cell resistance of a pixel, due to the cross-talk effect among the pixels. Several regularized least-squares algorithms are proposed to reconstruct the cell resistance map from the two-point resistance measurements. Experiments show that this pressure sensor is able to capture the pressure profiles during sea lamprey attachment. The performance and computational complexity of the reconstruction algorithms with different regularization functions are also compared. Finally, we design an automated sea lamprey detection system based on the piezoresistive pressure sensor array using machine learning. Three types of object detection algorithms are deployed to learn features of the mapping contours when effective attachment by ''compression'' or ''suction'' is formed on the sensor array. Their validation performance and inference speeds are evaluated and compared in depth, and YOLOv5s proves to be the best detector. Furthermore, a detection approach based on the YOLOv5s model with a confidence filter unit is proposed. In particular, different optimal detection thresholds are proposed for the compression and suction patterns, respectively, in order to reduce the false positive rate caused by the sensor's memory effect. The efficacy of the proposed method is supported with experimental results on real-time underwater detection of sea lampreys.
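The regularized least-squares reconstruction can be sketched generically: assuming some linear model A mapping the vector of cell conductances to the two-point measurements (the matrix A below is a random stand-in, not the sensor's actual cross-talk model), Tikhonov regularization gives a closed-form estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

def ridge_reconstruct(A, y, lam=1e-2):
    """Solve  min_g ||A g - y||^2 + lam * ||g||^2  via the normal
    equations (A^T A + lam I) g = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# synthetic check: 64 two-point measurements of a 4x4 cell grid
A = rng.normal(size=(64, 16))          # hypothetical measurement model
g_true = rng.random(16)                # "true" cell values
y = A @ g_true                         # noiseless measurements
g_hat = ridge_reconstruct(A, y, lam=1e-6)
```

The weight lam trades off measurement fit against noise amplification; the dissertation compares several regularization functions beyond the quadratic one used here.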
- Title
- TENSOR LEARNING WITH STRUCTURE, GEOMETRY AND MULTI-MODALITY
- Creator
- Sofuoglu, Seyyid Emre
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
With the advances in sensing and data acquisition technology, it is now possible to collect data from different modalities and sources simultaneously. Most of these data are multi-dimensional in nature and can be represented by multiway arrays known as tensors. For instance, a color image is a third-order tensor defined by two indices for the spatial variables and one index for the color mode. Other examples include color video, medical imaging data such as EEG and fMRI, spatiotemporal data encountered in urban traffic monitoring, etc. In the past two decades, tensors have become ubiquitous in signal processing, statistics, and computer science. Traditional unsupervised and supervised learning methods developed for one-dimensional signals do not translate well to higher-order data structures, as they become computationally prohibitive with increasing dimensionality. Vectorizing high-dimensional inputs creates problems in nearly all machine learning tasks, due to the exponentially increasing dimensionality, the distortion of the data structure, and the difficulty of obtaining a sufficiently large training sample size. In this thesis, we develop tensor-based approaches to various machine learning tasks. Existing tensor-based unsupervised and supervised learning algorithms extend many well-known algorithms, e.g., 2-D component analysis, support vector machines, and linear discriminant analysis, with better performance and lower computational and memory costs. Most of these methods rely on the Tucker decomposition, which has exponential storage complexity requirements; CANDECOMP-PARAFAC (CP) based methods, which might not have a solution; or Tensor Train (TT) based solutions, which suffer from exponentially increasing ranks. Many tensor-based methods have quadratic (with respect to the size of the data) or higher computational complexity, and similarly high memory complexity. Moreover, existing tensor-based methods are not always designed with the particular structure of the data in mind.
Many of the existing methods use purely algebraic measures as their objective, which might not capture the local relations within the data. Thus, there is a necessity to develop new models with better computational and memory efficiency, designed with the particular structure of the data and problem in mind. Finally, as tensors represent the data with more faithfulness to the original structure compared to vectorization, they also allow coupling of heterogeneous data sources where the underlying physical relationship is known. Still, most of the current work on coupled tensor decompositions does not explore supervised problems. In order to address the issues around the computational and storage complexity of tensor-based machine learning, in Chapter 2, we propose a new tensor train decomposition structure, which is a hybrid between the Tucker and Tensor Train decompositions. The proposed structure is used to implement Tensor Train based supervised and unsupervised learning frameworks: linear discriminant analysis (LDA) and graph regularized subspace learning. The algorithm is designed to solve extremal eigenvalue-eigenvector pair computation problems, and can be generalized to many other methods. The supervised framework, Tensor Train Discriminant Analysis (TTDA), is evaluated in a classification task at varying storage complexities, with respect to classification accuracy and training time, on four different datasets. The unsupervised approach, Graph Regularized TT, is evaluated on a clustering task with respect to clustering quality and training time at various storage complexities. Both frameworks are compared to discriminant analysis algorithms with similar objectives based on the Tucker and TT decompositions. In Chapter 3, we present an unsupervised anomaly detection algorithm for spatiotemporal tensor data.
The algorithm models the anomaly detection problem as a low-rank plus sparse tensor decomposition problem, where the normal activity is assumed to be low-rank and the anomalies are assumed to be sparse and temporally continuous. We present an extension of this algorithm, where we utilize a graph regularization term in our objective function to preserve the underlying geometry of the original data. Finally, we propose a computationally efficient implementation of this framework by approximating the nuclear norm using graph total variation minimization. The proposed approach is evaluated on both simulated data, with varying levels of anomaly strength, anomaly length, and number of missing entries in the observed tensor, and urban traffic data. In Chapter 4, we propose a geometric tensor learning framework using product graph structures for the tensor completion problem. Instead of purely algebraic measures such as rank, we use graph smoothness constraints that utilize geometric or topological relations within the data. We prove the equivalence of a Cartesian graph structure to a TT-based graph structure under some conditions, and show empirically that the relaxations introduced by these conditions do not deteriorate the recovery performance. We also outline a fully geometric learning method on product graphs for data completion. In Chapter 5, we introduce a supervised learning method for heterogeneous data sources such as simultaneous EEG and fMRI. The proposed two-stage method first extracts features taking the coupling across modalities into account, and then introduces kernelized support tensor machines for classification. We illustrate the advantages of the proposed method on simulated and real classification tasks with a small number of high-dimensional training samples.
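A matrix (two-way) sketch of the low-rank plus sparse idea in Chapter 3, using plain alternating proximal steps rather than the graph-regularized tensor formulation of the chapter; the thresholds lam and mu and the synthetic data are illustrative:

```python
import numpy as np

def soft(x, tau):
    """Entrywise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def low_rank_plus_sparse(Y, lam=0.1, mu=1.0, iters=100):
    """Split Y into L (low-rank "normal activity") and S (sparse
    "anomalies") by alternating singular-value thresholding for L and
    entrywise soft-thresholding for S."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U * soft(s, 1.0 / mu)) @ Vt      # low-rank update
        S = soft(Y - L, lam / mu)             # sparse update
    return L, S

rng = np.random.default_rng(2)
Y = np.outer(rng.normal(size=30), rng.normal(size=30))   # rank-1 background
Y[rng.random(Y.shape) < 0.02] += 5.0                     # a few anomalies
L, S = low_rank_plus_sparse(Y)
```

The tensor version in the chapter additionally enforces temporal continuity of the anomalies and replaces the nuclear norm with a graph total variation surrogate for efficiency.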
- Title
- THEORETICAL MODELING OF ULTRAFAST OPTICAL-FIELD INDUCED PHOTOELECTRON EMISSION FROM BIASED METAL SURFACES
- Creator
- Luo, Yi
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Laser-induced electron emission from nanostructures offers a platform to coherently control electron dynamics on ultrashort spatiotemporal scales, making it important to both fundamental research and a broad range of applications, such as ultrafast electron microscopy, diffraction, attosecond electronics, strong-field nano-optics, tabletop particle accelerators, free electron lasers, and novel nanoscale vacuum devices. This thesis analytically studies nonlinear ultrafast photoelectron emission from biased metal surfaces, by solving the time-dependent Schrödinger equation exactly. Our study provides a better understanding of the ultrafast control of electrons and offers useful guidance for the future design of ultrafast nanoelectronics. First, we present an analytical model for photoemission driven by two-color laser fields. We study the electron energy spectra and the emission current modulation under various laser intensities, frequencies, and relative phases between the two lasers. We find strong modulation of both the energy spectra and the emission current (with a modulation depth up to 99%) due to the interference effect of the two-color lasers. Using the same input parameters, our theoretical prediction for the photoemission current modulation depth (93.9%) is almost identical to the experimental measurement (94%). Next, to investigate the role of the dc field, we construct an analytical model for two-color laser induced photoemission from dc-biased metal surfaces. We systematically examine the combined effects of a dc electric field and two-color laser fields. We find that the strong modulation in two-color photoemission persists even under a strong dc electric field. In addition, the dc field opens up more tunneling emission channels and thus increases the total emission current. Application of our model to time-resolved photoelectron spectroscopy is also demonstrated, showing that the dynamics of the n-photon excited states depend strongly on the applied dc field.
We then propose to utilize two lasers of the same frequency to achieve interference modulation of photoemission via their relative phase. This is motivated by the easier experimental access to single-frequency laser pairs than to two-color lasers. We find that a strong current modulation (> 90%) can be achieved with a moderate ratio of the laser fields (< 0.4), even under a strong dc bias. Our study demonstrates the capability of measuring time-resolved photoelectron energy spectra using single-frequency laser pairs. We further extend our exact analytical model to photoelectron emission induced by few-cycle laser pulses. This single formulation is valid from photon-driven electron emission in low-intensity optical fields to field-driven emission in high-intensity optical fields, for arbitrary pulse lengths from sub-cycle to CW excitation, and for arbitrary pulse repetition rates. We find that the emitted charge per pulse increases in an oscillatory fashion with pulse repetition rate, due to the varying coherent interaction of neighboring laser pulses. For a well-separated single pulse, our results recover the experimentally observed vanishing carrier-envelope phase sensitivity in the optical-field regime. We also find that applying a large dc field to the photoemitter can greatly enhance the photoemission current while substantially shortening the current pulse. Finally, we construct analytical models for nonlinear photoelectron emission in a nanoscale metal-vacuum-metal gap. Our results reveal the energy redistribution of photoelectrons across the two interfaces between the gap and the metals. Additionally, we find that decreasing the gap distance tends to extend the multiphoton regime to higher laser intensities. The effect of dc bias is also studied in detail.
- Title
- TRANSMITTER DESIGNS FOR SUB-6 GHZ AND MILLIMETER WAVE BANDS
- Creator
- Hung, Shih-Chang
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
The rapidly growing demand for data streaming in modern communication systems poses unprecedented challenges for wireless service providers. Alongside this demand, the steady development of wireless applications raises the issue of wireless coexistence within a limited frequency spectrum. Therefore, energy- and spectrum-efficient transmitters with higher data rates, small die area, and low integration cost are needed for modern communication systems. The fifth-generation (5G) mobile network has emerged as a promising revolution in mobile communications with higher data rates. 5G networks use two kinds of frequency bands: sub-6 GHz bands and millimeter-wave (mm-wave) bands above 24 GHz. Due to the densely packed spectrum in the sub-6 GHz bands, transmitter designs have focused on improving spectral efficiency with frequency-localized waveforms such as Orthogonal Frequency Division Multiplexing (OFDM). However, the OFDM signal has the major drawback of a high peak-to-average power ratio, which degrades the power efficiency of the transmitter. On the other hand, to support higher data rates, mm-wave bands can provide bandwidths up to 2 GHz without aggregating bands together. However, at mm-wave bands, the maximum range over which reliable wireless links can be sustained decreases due to increased path loss. Fortunately, phased-array techniques, which have been used in defense and satellite applications for many years, enable directive communications and offer a promising solution to these challenges. The objective of this dissertation is to present novel transmitter topologies suitable for modern communication systems in both the sub-6 GHz and mm-wave bands. For the sub-6 GHz bands, an efficient quadrature digital power amplifier serving as a standalone transmitter has been proposed.
The proposed transmitter employs complex-domain Doherty and dual-supply Class-G techniques to achieve up to four efficiency peaks and excellent system efficiency at power back-off. For the mm-wave bands, an 18–50-GHz mm-wave transmitter has been proposed. It is designed for low power consumption and small area, and supports emerging multibeam communications and directional sensing with an increased number of phased-array elements across 18–50 GHz.
- Title
- TRANSPARENT MICROELECTRODES FOR ELECTROPHYSIOLOGICAL RECORDING AND ELECTROCHEMICAL SENSING
- Creator
- Yang, Weiyang
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Indium Tin Oxide (ITO) is a well-known n-type semiconductor material often utilized in transparent microelectrodes. ITO has high conductivity, excellent transparency over the entire visible spectrum due to a large bandgap of around 4 eV, and confirmed biocompatibility. Because of these numerous advantages, this dissertation applies ITO as a base material in both electrophysiological recording and electrochemical sensing. Optogenetics is a revolutionary neuromodulation technique that uses light to excite or inhibit the activity of genetically targeted neurons expressing light-sensitive opsin proteins. To fully realize the potential of optogenetic tools, neural interface devices with both recording and stimulating capabilities are vital for future engineering development, and improving their spatial precision is a topic of constant research. Conventional transparent recording microelectrodes made of a single material, such as ITO, ultrathin metals, graphene, or poly(3,4-ethylenedioxythiophene)/poly(styrene sulfonate) (PEDOT:PSS), have limitations and rarely possess the desired combination of broadband transmittance, low electrical resistivity, mechanical flexibility, and biocompatibility. One direction of this dissertation work is to develop multilayered electrophysiological microelectrodes with high transparency, outstanding conductivity, low electrochemical impedance, high charge storage capacity, excellent mechanical properties, and ultra-flexibility. Chapter 1 briefly introduced the background, current challenges, and motivations of this dissertation. Chapter 2 presented a review of electrical materials for neurophysiology recording implants. Chapter 3 proposed a probe with a combined ITO-PEDOT:PSS electrode configuration, formed by spin-coating thin PEDOT:PSS films on ITO microelectrodes, for applications in low-impedance neural recordings.
The characteristics of the ITO-PEDOT:PSS microelectrodes were analyzed as a preliminary study for the subsequent transparent electrophysiology recording array research. Chapter 4 reported an ultra-flexible, conductive, transparent thin film using a PEDOT:PSS-ITO-Ag-ITO multilayer structure on Parylene C, deposited at room temperature. Material characterization demonstrated enhanced conductivity, remarkable and wavelength-tunable transmittance, significantly reduced electrochemical impedance, increased charge storage capacity, good stability and adhesion, and confirmed the mechanical properties of the combined film. Next, Chapter 5 demonstrated two 32-channel transparent μECoG arrays using this PEDOT:PSS-ITO-Ag-ITO multilayered thin-film structure on Parylene C. These two μECoG arrays proved effective in vivo for electrophysiological detection in living brain tissue. Last but not least, Chapter 6 first discussed ongoing work to develop a 120-channel, high-spatial-resolution transparent micro-ECoG array; the other part of this chapter described the fabrication of an ITO-based transparent and miniaturized electrochemical sensor for continuous, quantitative monitoring of copper (Cu) and manganese (Mn) ion concentrations in the body and in soil environments using Differential Pulse Stripping Voltammetry (DPSV).