Search results
(1–20 of 96)
 Title
 A container-attachable inertial sensor for real-time hydration tracking
 Creator
 Griffith, Henry
 Date
 2019
 Collection
 Electronic Theses & Dissertations
 Description

The under-consumption of fluid is associated with multiple adverse health outcomes, including reduced cognitive function, obesity, and cancer. To aid individuals in maintaining adequate hydration, numerous sensing architectures for tracking fluid intake have been proposed. Amongst the various approaches considered, container-attachable inertial sensors offer a non-wearable solution capable of estimating aggregate consumption across multiple drinking containers. The research described herein demonstrates techniques for improving the performance of these devices.

A novel sip detection algorithm designed to accommodate the variable duration and sparse occurrence of drinking events is presented at the beginning of this dissertation. The proposed technique identifies drinks using a two-stage segmentation and classification framework. Segmentation is performed using a dynamic partitioning algorithm which spots the characteristic inclination pattern of the container during drinking. Candidate drinks are then distinguished from handling activities with similar motion patterns using a support vector machine classifier. The algorithm is demonstrated to improve the true positive detection rate from 75.1% to 98.8% versus a benchmark approach employing static segmentation.

Multiple strategies for improving drink volume estimation performance are demonstrated in the latter portion of this dissertation. Proposed techniques are verified through a large-scale data collection consisting of 1,908 drinks consumed by 84 individuals over 159 trials. Support vector machine regression models are shown to improve per-drink estimation accuracy versus the prior state of the art for a single inertial sensor, with mean absolute percentage error reduced by 11.1%. Aggregate consumption accuracy is also improved versus previously reported results for a container-attachable device.

An approach for computing aggregate consumption using fill level estimates is also demonstrated. Fill level estimates are shown to exhibit superior accuracy with reduced inter-subject variance versus volume models. A heuristic fusion technique for further improving these estimates is also introduced herein. Heuristic fusion is shown to reduce root mean square error versus direct estimates by over 30%. The dissertation concludes by demonstrating the ability of the sensor to operate across multiple containers.
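The two-stage detect-then-classify idea in this abstract can be sketched in a few lines. Everything below (the tilt threshold, the feature set of duration, peak, and mean inclination, and the toy training data) is an illustrative assumption, not the dissertation's actual algorithm or parameters.

```python
# Sketch: segment candidate drink events from a container-tilt signal,
# then classify each candidate with an SVM (drink vs. handling).
import numpy as np
from sklearn.svm import SVC

def segment_candidates(tilt, threshold=0.5):
    """Return (start, end) index pairs where tilt exceeds a threshold,
    mimicking segmentation of the container's inclination pattern."""
    above = tilt > threshold
    edges = np.flatnonzero(np.diff(above.astype(int)))
    if above[0]:
        edges = np.r_[0, edges]
    if above[-1]:
        edges = np.r_[edges, len(tilt) - 1]
    return list(zip(edges[::2], edges[1::2]))

def features(tilt, seg):
    s, e = seg
    w = tilt[s:e + 1]
    return [e - s, w.max(), w.mean()]  # duration, peak tilt, mean tilt

# toy signal: one long "drink"-like event and one short "handling" bump
tilt = np.zeros(100)
tilt[10:20] = 0.9
tilt[60:63] = 0.8
segs = segment_candidates(tilt)

# toy training set: long/high-tilt segments are drinks (1), short ones are not (0)
X = [[12, 0.9, 0.9], [11, 0.85, 0.8], [2, 0.8, 0.7], [3, 0.75, 0.7]]
y = [1, 1, 0, 0]
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
labels = clf.predict([features(tilt, s) for s in segs])
```

The real system's dynamic partitioning and feature engineering are far richer; the point is only the pipeline shape: segmentation proposes candidates, the classifier rejects handling motions.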
 Title
 Effect of pavement structural response on rolling resistance and fuel economy
 Creator
 Balzarini, Danilo
 Date
 2019
 Collection
 Electronic Theses & Dissertations
 Description

The massive use of fuel required by road transportation is responsible for the exploitation of non-renewable energy sources, is a major source of pollutant emissions, and implies high economic costs. Rolling resistance is a factor affecting vehicles' energy consumption; structural rolling resistance (SRR) is the component of rolling resistance that occurs due to the deformation of the pavement structure. The present research investigates the SRR in order to identify its causes, characterize it, and develop instruments to predict its impact on fuel consumption for different road and traffic conditions.

First, methods to calculate the SRR on asphalt and concrete pavements were developed. The structural rolling resistance is calculated as the resistance to motion caused by the uphill slope seen by the tires due to the pavement deformation. The SRR can be converted into fuel consumption using the calorific value of the fuel and the engine efficiency, and the greenhouse gas emissions associated with it can be calculated.

Purely mechanistic models were used to determine the structural rolling resistance, and the fuel consumption associated with it, on 17 California pavement sections under different loading and environmental conditions. The results were used to develop simple and rapid-to-use mechanistic-empirical heuristic models to predict the energy dissipation associated with the structural rolling resistance on any asphalt or concrete pavement.

The difference in fuel consumption and pollutant emissions between different pavement structures can be significant and could be included in economic evaluations and life cycle assessment studies. For this purpose, a practical tool based on the heuristic models was developed that allows the calculation of the fuel consumption associated with the SRR for any given traffic and pavement section. Examples of applications of such a tool are presented and discussed.
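The conversion from SRR to fuel use described above (calorific value plus engine efficiency) amounts to a one-line energy balance. The numeric values below are generic textbook figures for gasoline, not the dissertation's calibrated values.

```python
# Illustrative conversion: energy dissipated by the structural rolling
# resistance (force x distance) divided by the usable energy per liter
# of fuel (calorific value x engine efficiency).
def srr_fuel_liters(srr_force_N, distance_km,
                    calorific_MJ_per_L=34.2,   # gasoline, approximate
                    engine_efficiency=0.30):   # typical thermal efficiency
    """Fuel (liters) needed to supply the energy dissipated by SRR."""
    energy_MJ = srr_force_N * distance_km * 1000 / 1e6  # F * d, in MJ
    return energy_MJ / (calorific_MJ_per_L * engine_efficiency)

# e.g. a 10 N average SRR force sustained over 100 km
liters = srr_fuel_liters(10, 100)
```

Multiplying the per-vehicle result by traffic volume and a CO2-per-liter emission factor gives the greenhouse-gas figure mentioned in the abstract.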
 Title
 Recalibration of rigid pavement performance models and development of traffic inputs for Pavement-ME design in Michigan
 Creator
 Musunuru, Gopi Krishna
 Date
 2019
 Collection
 Electronic Theses & Dissertations
 Description

The mechanistic-empirical pavement design guide (AASHTOWare Pavement-ME) incorporates mechanistic models to estimate stresses, strains, and deformations in pavement layers using site-specific climatic, material, and traffic characteristics. These structural responses are used to predict pavement performance using empirical models (i.e., transfer functions). The transfer functions need to be calibrated to improve the accuracy of the performance predictions, reflecting the unique field conditions and design practices. The existing local calibrations of the performance models were performed using version 2.0 of the Pavement-ME software. However, AASHTO has released versions 2.2 and 2.3 of the software since the completion of the last study. In the revised versions of the software, several bugs were fixed and some performance models were modified. As a result, the concrete pavement IRI predictions and the resulting PCC slab thicknesses have been impacted. The performance predictions varied significantly from the observed structural and functional distresses, and hence the performance models were recalibrated to enhance confidence in pavement designs. Linear and nonlinear mixed-effects models were used for calibration to account for the non-independence among the data measured on the same sections over time. Also, climate data, material properties, and design parameters were used to develop a model for predicting permanent curl for each location to address some limitations of the Pavement-ME. This model can be used at the design stage to estimate permanent curl for a given location in Michigan.

Pavement-ME also requires specific types of traffic data to design new or rehabilitated pavement structures. The traffic inputs include monthly adjustment factors (MAF), hourly distribution factors (HDF), vehicle class distributions (VCD), axle groups per vehicle (AGPV), and axle load distributions for different axle configurations. During the last seven years, new traffic data were collected, reflecting recent economic growth as well as additional and downgraded WIM sites. Hence it was appropriate to re-evaluate the current traffic inputs and incorporate any changes. Weight and classification data were obtained from 41 weigh-in-motion (WIM) sites located throughout the State of Michigan to develop Level 1 (site-specific) traffic inputs. Cluster analyses were conducted to group sites for the development of Level 2A inputs. Classification models such as decision trees, random forests, and the naive Bayes classifier were developed to assign a new site to these clusters; however, this proved difficult. An alternative simplified method to develop Level 2B inputs by grouping sites with similar attributes was also adopted. The optimal set of attributes for developing these Level 2B inputs was identified using an algorithm developed in this study. The effects of the developed hierarchical traffic inputs on the predicted performance of rigid and flexible pavements were investigated using the Pavement-ME. Based on the statistical and practical significance of the life differences, appropriate levels were established for each traffic input. The methodology for developing traffic inputs is intuitive and practical for future updates. Also, there is a need to identify changes in traffic patterns so that the traffic inputs can be updated and pavement sections are not over-designed or under-designed. Models were developed whereby short-term counts from the PTR sites can be used as inputs to check whether new traffic patterns cause any substantial differences in design life predictions.
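The Level 2A step above groups WIM sites by the similarity of their traffic inputs. A minimal sketch of that idea, clustering sites on their twelve monthly adjustment factors, is below; the synthetic data and the use of k-means are illustrative assumptions, not the study's actual method or data.

```python
# Sketch: cluster hypothetical WIM sites by monthly adjustment factors (MAF).
# Two synthetic traffic patterns: flat year-round vs. strongly seasonal.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
flat = np.ones(12)
seasonal = 1 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, 12))

# 8 sites x 12 months: sites 0-3 flat, sites 4-7 seasonal, plus small noise
maf = np.vstack([flat + rng.normal(0, 0.02, 12) for _ in range(4)] +
                [seasonal + rng.normal(0, 0.02, 12) for _ in range(4)])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(maf)
# km.labels_ should put sites 0-3 in one cluster and 4-7 in the other
```

Assigning a *new* site to a cluster is then a classification problem, which is where the decision trees, random forests, and naive Bayes models mentioned above come in.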
 Title
 I. AMHB: (anti)aromaticity-modulated hydrogen bonding. II. Evaluation of implicit solvation models for predicting hydrogen bond free energies
 Creator
 Kakeshpour, Tayeb
 Date
 2019
 Collection
 Electronic Theses & Dissertations
 Description

My doctoral research under Professor James E. Jackson focused on hydrogen bonding (H-bonding) using physical organic chemistry tools. In the first chapter, I present how I used quantum chemical simulations, synthetic organic chemistry, NMR spectroscopy, and X-ray crystallography to provide robust theoretical and experimental evidence for an interplay between (anti)aromaticity and the H-bond strength of heterocycles, a concept that we dubbed (Anti)aromaticity-Modulated Hydrogen Bonding (AMHB). In the second chapter, I used accurately measured hydrogen bond energies for a range of substrates and solvents to evaluate the performance of implicit solvation models in combination with density functional methods for predicting solution-phase hydrogen bond energies. This benchmark study provides useful guidelines for a priori modeling of hydrogen-bonding-based designs.

Coordinates of the optimized geometries and crystal structures are provided as supplementary materials.
 Title
 Comparison of methods for detecting violations of measurement invariance with continuous construct indicators using latent variable modeling
 Creator
 Zhang, Mingcai (Graduate of Michigan State University)
 Date
 2020
 Collection
 Electronic Theses & Dissertations
 Description

Measurement invariance (MI) refers to the property that a measurement instrument measures the same concept in the same way in two or more groups. However, in educational and psychological testing practice, the assumption of MI is often violated due to contamination by possible non-invariance in the measurement models. In the framework of latent variable modeling (LVM), methodologists have developed different statistical methods to identify the non-invariant components. Among these methods, the free baseline method (FR) is popularly employed, but it is limited by the necessity of choosing a truly invariant reference indicator (RI). Two other methods, the Benjamini-Hochberg method (BH) and the alignment method (AM), are exempt from the RI setting. The BH method applies the false discovery rate (FDR) procedure. The AM method aims to optimize the model estimates under the assumption of approximate invariance. The purpose of the present study is to address the problem of RI setting by comparing the BH method and the AM method with the traditional free baseline method through both a simulation study and an empirical data analysis. More specifically, the simulation study is designed to investigate the performance of the three methods by varying the sample sizes and the characteristics of non-invariance embedded in the measurement models. The characteristics of non-invariance are distinguished as the location of non-invariant parameters, the degree of non-invariant parameters, and the magnitude of model non-invariance. The performance of the three methods is also compared on an empirical dataset (the Openness for Problem Solving Scale in PISA 2012) obtained from three countries (Shanghai-China, Australia, and the United States).

The simulation study finds that a wrong RI choice heavily impacts the FR method, which then produces high type I error rates and low statistical power. Both the BH method and the AM method perform better than the FR method in this setting. Comparatively speaking, the benefit of the BH method is that it achieves the highest power for detecting non-invariance. The power increases as the magnitude of model non-invariance decreases, and as the sample size and the degree of non-invariance increase. The AM method performs best with respect to type I errors: the type I error rates estimated by the AM method are low under all simulation conditions. In the empirical study, both the BH method and the AM method perform similarly in estimating the invariance/non-invariance patterns among the three country pairs. However, the FR method, for which the RI is the first item by default, recovers a different invariance/non-invariance pattern. The results can help methodologists gain a better understanding of the potential advantages of the BH method and the AM method over the traditional FR method. The study results also highlight the importance of correctly specifying the model non-invariance at the indicator level. Based on the characteristics of the non-invariant components, practitioners may consider deleting or modifying the non-invariant indicators, or freeing the non-invariant components while building partially invariant models, in order to improve the quality of cross-group comparisons.
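The FDR procedure behind the BH method mentioned above is short enough to state exactly: sort the p-values, find the largest k with p(k) ≤ αk/m, and reject the k smallest. The toy p-values are illustrative only; in the study the tests are per-parameter invariance tests, not these numbers.

```python
# Minimal Benjamini-Hochberg step-up procedure for FDR control.
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean array: True where the hypothesis is rejected."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m      # alpha * k / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                      # reject the k smallest
    return reject

# the three smallest p-values survive; the rest do not
reject = benjamini_hochberg([0.001, 0.008, 0.020, 0.041, 0.6, 0.9])
```

Note the step-up character: 0.041 is not rejected here even though a plain per-test cutoff of 0.05 would have passed it.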
 Title
 Variable selection in varying multi-index coefficient models with applications to gene-environmental interactions
 Creator
 Guan, Shunjie
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

Variable selection is an important topic in the modern statistics literature, and the varying multi-index coefficient model (VMICM) is a promising tool to study the synergistic interaction effects between genes and multiple environmental exposures. In this dissertation, we propose a variable selection approach for VMICM and generalize it to generalized-regression and quantile-regression settings. Theoretical properties, simulation performance, and applications in genetic research are studied.

Complicated diseases have both environmental and genetic risk factors, and a large amount of research has been devoted to identifying gene-environment (G×E) interactions. Defined as a different effect of a genotype on disease risk in persons with different environmental exposures (Ottman (1996)), a G×E interaction lets us view environmental exposures as modulating factors in the effect of a gene. Based on this idea, we derived a three-stage variable selection approach to estimate different effects of gene variables: varying, constant, and zero, which respectively correspond to a nonlinear G×E effect, no G×E effect, and no genetic effect. For multiple environmental exposure variables, we also select and estimate important environmental variables that contribute to the synergistic interaction effect. We theoretically evaluated the oracle property of the three-step estimation method. We conducted simulation studies to further evaluate the finite sample performance of the method, considering both continuous and discrete predictors. Application to a real data set demonstrated the utility of the method.

In Chapter 3, we generalized the variable selection approach to the binary response setting. Instead of minimizing a penalized squared error loss, we chose to maximize a penalized log-likelihood function. We also theoretically evaluated the oracle property of the proposed selection approach in the binary response setting. We demonstrated the performance of the model via simulation. Finally, we applied our model to a Type II diabetes data set.

Compared to conditional mean regression, conditional quantile regression provides a more comprehensive understanding of the distribution of the response variable at different quantiles. Even if the center of the distribution is our only interest, median regression (a special case of quantile regression) offers a more robust estimator. Hence, we extended our three-stage variable selection approach to the quantile regression setting in Chapter 4. We demonstrated the finite sample performance of the model via extensive simulation, and we applied our model to a birth weight data set.
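The robustness claim for median regression in the last paragraph is easy to illustrate numerically: a single gross outlier drags the mean far more than the median (the 0.5-quantile). The toy numbers below are illustrative and unrelated to the birth weight application.

```python
# One outlier shifts the mean by units but the median by a few hundredths.
import numpy as np

y = np.array([2.9, 3.0, 3.1, 3.2, 3.3])
y_out = np.append(y, 30.0)  # one gross outlier

mean_shift = abs(y_out.mean() - y.mean())            # large
median_shift = abs(np.median(y_out) - np.median(y))  # tiny
```

The same contrast carries over to regression: the check-loss minimized by quantile regression bounds each observation's influence, while squared error does not.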
 Title
 Numerical methods for gravity inversion, synthetic aperture radar, and traveltime tomography
 Creator
 Gao, Qinfeng
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

"Inverse problems have many applications. In this thesis, we focus on designing and implementing numerical methods for three inverse problems: gravity inversion, synthetic aperture radar, and traveltime tomography. We present extensive numerical examples to demonstrate that these algorithms are stable and efficient.

In Chapter 2, low-rank approximation is incorporated into a local level-set method for gravity inversion. This change helps to reduce the computational time of the mismatch gravity force term on the boundary, and reduces the computational complexity from O(N³) to O(N²) in 2D and from O(N⁵) to O(N⁴) in 3D. Many numerical results show that the locations of unknown objects are accurately captured by this low-rank level-set method.

In Chapter 3, both the wave equation and the Radon transform are used as approaches to the synthetic aperture radar problem. The wave-equation-based method includes harmonic extension at terminal time, solving the wave equation backward using a perfectly matched layer, and Neumann iteration. These two methods provide comparable results and help to show that a curved flight path is no better than a straight one.

In Chapter 4, we implement the finite element method within a penalization-regularization-operator-splitting framework for traveltime tomography based on the eikonal equation. Both the travel time and the slowness are recovered with this algorithm in both 2D and 3D. Finally, Chapter 5 contains our conclusions."
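The complexity reduction claimed in Chapter 2 rests on a standard observation: applying a dense N×N matrix costs O(N²), but if the matrix admits a good rank-r factorization, the same product costs O(Nr). The kernel below is a generic smooth kernel chosen purely to illustrate this; it is not the gravity mismatch operator from the thesis.

```python
# Low-rank idea: a smooth N x N kernel matrix is well approximated by a
# rank-r truncated SVD, so matrix-vector products become cheap and accurate.
import numpy as np

N, r = 200, 5
x = np.linspace(0, 1, N)
A = np.exp(-(x[:, None] - x[None, :]) ** 2)   # smooth kernel -> fast SVD decay

U, s, Vt = np.linalg.svd(A)
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]   # rank-r approximation

v = np.random.default_rng(1).standard_normal(N)
rel_err = np.linalg.norm(A @ v - A_r @ v) / np.linalg.norm(A @ v)
# rel_err is tiny despite keeping only r = 5 of 200 singular values
```

In practice hierarchical or randomized factorizations replace the full SVD, since computing the SVD itself costs O(N³); the sketch only shows why a small rank suffices for smooth kernels.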
 Title
 Brain connectivity analysis using information theory and statistical signal processing
 Creator
 Wang, Zhe (Software engineer)
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

Connectivity between different brain regions generates our minds. Existing work on brain network analysis has mainly focused on characterizing connections between regions in terms of connectivity and causality. Connectivity measures the dependence between regional brain activities, while causality analysis aims to determine the directionality of information flow among functionally connected brain regions and to find the relationship between causes and effects.

Traditionally, the study of connectivity and causality has largely been limited to linear relationships. In this dissertation, as an effort to achieve a more accurate characterization of connections between brain regions, we aim to go beyond the linear model and develop innovative techniques for both non-directional and directional connectivity analysis. Because of the variability in each individual's brain connectivity, the connectivity between two brain regions alone may not be sufficient for brain function analysis; in this research, we therefore also conduct network connectivity pattern analysis, so as to reveal more in-depth information.

First, we characterize non-directional connectivity using mutual information (MI). In recent years, MI has gradually appeared as an alternative metric for brain connectivity, since it measures both linear and nonlinear dependence between two brain regions, while the traditional Pearson correlation measures only linear dependence. We develop an innovative approach to estimate the MI between two functionally connected brain regions and apply it to brain functional magnetic resonance imaging (fMRI) data. It is shown that, on average, cognitively normal subjects show larger mutual information between critical regions than Alzheimer's disease (AD) patients.

Second, we develop new methodologies for brain causality analysis based on directed information (DI). Traditionally, brain causality analysis is based on the well-known Granger causality (GC) analysis. The validity of GC has been widely recognized. However, it has also been noticed that GC relies heavily on linear prediction methods: when there exist strong nonlinear interactions between two regions, GC analysis may lead to invalid results. In this research, (i) we develop an innovative framework for causality analysis based on directed information, which reflects the information flow from one region to another and imposes no modeling constraints on the data. It is shown that DI-based causality analysis is effective in capturing both linear and nonlinear causal relationships. (ii) We show the conditional equivalence between the DI framework and Friston's dynamic causal modeling (DCM), and reveal the relationship between directional information transfer and cognitive state change within the brain.

Finally, based on brain network connectivity pattern analysis, we develop a robust method for classifying AD, mild cognitive impairment (MCI), and normal control (NC) subjects under size-limited fMRI data samples. First, we calculate the Pearson correlation coefficients between all possible ROI pairs in the selected subnetwork and use them to form a feature vector for each subject. Second, we develop a regularized linear discriminant analysis (LDA) approach to reduce the noise effect. The feature vectors are then projected onto a subspace using the proposed regularized LDA, where the differences between AD, MCI, and NC subjects are maximized. Finally, a multi-class AdaBoost classifier is applied to carry out the classification task. Numerical analysis demonstrates that the combination of regularized LDA and the AdaBoost classifier increases the classification accuracy significantly.
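The MI-based connectivity measure discussed above can be illustrated with the simplest plug-in estimator: bin the two signals jointly and sum p·log(p/(p₁p₂)). This histogram estimator is only a sketch; fMRI analyses (and this dissertation) use more careful estimators, and the signals below are synthetic.

```python
# Histogram (plug-in) estimate of mutual information between two signals.
import numpy as np

def mutual_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                          # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
mi_dep = mutual_information(x, x + 0.3 * rng.standard_normal(5000))
mi_ind = mutual_information(x, rng.standard_normal(5000))
# mi_dep >> mi_ind: MI detects the dependence; for independent signals
# only a small positive binning bias remains
```

Unlike Pearson correlation, the same estimator would also register a purely nonlinear dependence (e.g. y = x², whose correlation with x is zero).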
 Title
 The mathematical models of nutritional plasticity and the bifurcation in a nonlocal diffusion equation
 Creator
 Liang, Yu, Ph. D.
 Date
 2016
 Collection
 Electronic Theses & Dissertations
 Description

The thesis consists of two parts. In the first part, I investigate the developmental mechanisms that regulate the nutritional plasticity of organ sizes in Drosophila melanogaster, the fruit fly. Here I focus on the insulin-like signalling (IIS) pathway, through which developmental nutrition is signalled to growing organs. Two mathematical models, an ODE model and a PDE model, are established based on the IIS pathway. In the ODE model, the circulating gene expression of each component in the IIS pathway is taken as a model variable. By analyzing the steady states of the ODE model under different parameter settings, the hypothesis that the difference in nutritional plasticity among the organs of Drosophila is due to variation in the total gene expression of components in the IIS pathway is verified. Furthermore, the forkhead transcription factor FOXO, a negative growth regulator that is activated when nutrition and insulin signaling are low, is a key factor in maintaining organ-specific differences in nutritional plasticity and insulin sensitivity. In the PDE model, I focus more on the molecular structure within each individual cell. The transport of proteins between the nucleus and the cell membrane is modelled in the system. In simulations of the PDE system, the hypothesis that the concentration of FOXO decreases as the concentration of insulin increases is verified.

In the second part of the thesis, I study the bifurcation properties of the nonlocal diffusion equation
\[ L_{\epsilon} u + \lambda (u - u^3) = 0, \]
where $L_{\epsilon} u$ is the integral operator
\[ L_{\epsilon} u = \int_{0}^{\pi} \epsilon^{-3} J\left( \frac{y-x}{\epsilon} \right) ( u(y) - u(x) ) \, dy \]
and $J(x)$ is a nonnegative radially symmetric function with $J(0) > 0$. It is shown that when the scaling parameter $\epsilon$ is small enough, the equation has pitchfork bifurcations at the spectrum of the operator $L_{\epsilon}$. A concrete example is considered, and the bifurcation result is verified in this example by solving the equation with Newton's method.
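The Newton verification mentioned above can be sketched for a discretized version of the equation. The kernel, grid, parameters, and initial guess below are illustrative choices, not the thesis's concrete example; with this constant initial guess the iteration converges to the constant branch u ≡ 1.

```python
# Newton's method for a discretized nonlocal equation  L u + lam*(u - u^3) = 0,
# with (L u)_i = sum_j eps^-3 J((x_j - x_i)/eps) (u_j - u_i) h  on [0, pi].
import numpy as np

n, eps, lam = 80, 0.3, 2.0
x = np.linspace(0, np.pi, n)
h = x[1] - x[0]

J = lambda t: np.maximum(0.0, 1.0 - np.abs(t))      # nonnegative, J(0) > 0
W = eps ** -3 * J((x[None, :] - x[:, None]) / eps) * h
L = W - np.diag(W.sum(axis=1))                      # L @ u realizes the sum above

def F(u):
    return L @ u + lam * (u - u ** 3)

def newton(u, tol=1e-10, steps=50):
    for _ in range(steps):
        Jac = L + lam * np.diag(1 - 3 * u ** 2)     # Frechet derivative of F
        du = np.linalg.solve(Jac, -F(u))
        u = u + du
        if np.linalg.norm(du) < tol:
            break
    return u

u = newton(1.2 * np.ones(n))        # start near the nontrivial constant branch
residual = np.linalg.norm(F(u))     # ~ machine precision at convergence
```

Tracing how such nontrivial solutions appear and disappear as λ crosses eigenvalues of L is exactly the pitchfork-bifurcation picture the thesis establishes.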
 Title
 Design and simulation of single-crystal diamond diodes for high voltage, high power and high temperature applications
 Creator
 Suwanmonkha, Nutthamon
 Date
 2016
 Collection
 Electronic Theses & Dissertations
 Description

Diamond has exceptional properties and great potential for making high-power semiconducting electronic devices that surpass the capabilities of other common semiconductors, including silicon. The superior properties of diamond include its wide bandgap, high thermal conductivity, large electric breakdown field, and fast carrier mobilities. All of these properties are crucial for a semiconductor used to make electronic devices that can operate at high power levels, high voltage, and high temperature.

Two-dimensional semiconductor device simulation software such as Medici assists engineers in designing device structures that allow the performance requirements of device applications to be met. Most physical material parameters of the well-known semiconductors are already compiled and embedded in Medici; however, diamond is not one of them. Material parameters of diamond, including models for incomplete ionization, temperature- and impurity-dependent mobility, and impact ionization, are not readily available in software such as Medici. In this work, models and data for diamond semiconductor material were developed for Medici based on results measured in the research literature and in experimental work at Michigan State University.

After equipping Medici with diamond material parameters, simulations of various diamond diodes, including Schottky, PN-junction, and merged Schottky/PN-junction diode structures, are reported. Diodes are simulated versus changes in doping concentration, drift layer thickness, and operating temperature. In particular, the diode performance metrics studied include the breakdown voltage, turn-on voltage, and specific on-resistance. The goal is to find designs which yield low power loss and provide high voltage-blocking capability. Simulation results are presented that provide insight for the design of diamond diodes using the various diode structures. Results are also reported on the use of field plate structures in the simulations to control the electric field and increase the breakdown voltage.
 Title
 Multiscale Gaussian-beam method for high-frequency wave propagation and inverse problems
 Creator
 Song, Chao
 Date
 2018
 Collection
 Electronic Theses & Dissertations
 Description

The existence of Gaussian beam solution to hyperbolic PDEs has been known to the pure mathematics community since sometime in the 1960s [3]. It enjoys popularity afterwards due to its ability to resolve the caustics problem and its efficiency [49, 28, 31]. In this thesis, we will focus on the extension of the multiscale Gaussian beam method and its application to seismic wave modeling and inversion. In the first part of thesis, we discuss the application of the multiscale Gaussian beam...
The existence of Gaussian beam solutions to hyperbolic PDEs has been known to the pure mathematics community since the 1960s [3]. The method has since enjoyed popularity due to its ability to resolve the caustics problem and its efficiency [49, 28, 31]. In this thesis, we focus on the extension of the multiscale Gaussian beam method and its application to seismic wave modeling and inversion.

In the first part of the thesis, we discuss the application of the multiscale Gaussian beam method to the inverse problem. A new multiscale Gaussian beam method is introduced for carrying out true-amplitude prestack migration of acoustic waves. After applying the Born approximation, the migration process is viewed as shooting two beams simultaneously from the subsurface point to be imaged. The multiscale Gaussian wavepacket transform provides an efficient and accurate way both to decompose the perturbation field and to initialize the Gaussian beam solution. Moreover, we can prescribe both the region of imaging and the range of dipping angles by shooting beams from subsurface points in the region of imaging. We prove the imaging condition equation rigorously and conduct an error analysis. Some numerical approximations are derived to further improve the efficiency. Numerical results in two-dimensional space demonstrate the performance of the proposed migration algorithm.

In the second part of the thesis, we propose a new multiscale Gaussian beam method with reinitialization to solve the elastic wave equation in the high-frequency regime with different boundary conditions. A novel multiscale transform is proposed to decompose an arbitrary vector-valued function into multiple Gaussian wavepackets at various resolutions. After initialization, we derive rules corresponding to the different types of reflection cases. To improve the efficiency and accuracy, we develop a new reinitialization strategy based on the stationary phase approximation to sharpen each single beam ansatz. This is especially useful, and in some reflection cases necessary. Numerical examples with various parameters demonstrate the correctness and robustness of the whole method. Two boundary conditions are considered: the periodic and the Dirichlet boundary condition. Finally, we show that the convergence rate of the proposed multiscale Gaussian beam method follows that of the classical Gaussian beam solution.
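For reference, the single Gaussian beam ansatz that such methods propagate is usually written in the literature in the following generic form (the notation here is the standard one, not necessarily the thesis's):

```latex
u_\varepsilon(x,t) \approx A(t)\, e^{\, i\Phi(x,t)/\varepsilon}, \qquad
\Phi(x,t) = S(t) + p(t)\cdot\bigl(x - y(t)\bigr)
  + \tfrac{1}{2}\bigl(x - y(t)\bigr)^{\!\top} M(t)\, \bigl(x - y(t)\bigr),
```

where $y(t)$ is the central ray, $p(t)$ the momentum, and $\operatorname{Im} M(t)$ is kept positive definite so that the solution stays concentrated near the ray; this positivity is what lets the beam remain bounded at caustics.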
 Title
 On minimization of some nonsmooth convex functionals arising in micromagnetics
 Creator
 Gao, Hongli
 Date
 2015
 Collection
 Electronic Theses & Dissertations
 Description

This thesis is motivated by the study of the properties of ferromagnetic materials using the Landau-Lifshitz theory of micromagnetics. In this theory the state of a ferromagnetic material is described by the magnetization vector m through a total micromagnetic energy that consists of several competing sub-energies: exchange energy, anisotropy energy, external interaction energy and magnetostatic energy. For large ferromagnetic materials, and in some limiting regimes of the model, the exchange energy is negligible and the total energy becomes a reduced model. Our investigations focus on the study of such a reduced model of the Landau-Lifshitz theory.

The primary focus of the thesis includes two parts: the minimization (static) study and the evolution (dynamic) study. We investigate a new method for the existence of minimizers of the reduced micromagnetic energy based on a duality method. In this method, the reduced micromagnetic energy is closely related to a convex functional (the dual functional) on curl-free vector fields. Our minimization and dynamics studies are based on the study of the minimization and gradient flow of this dual functional. Much of the thesis focuses on the minimization problem in two special cases: the soft case and the uniaxial case on an annulus domain. In particular, in the soft case, for some range of the parameter, energy minimizers of the original micromagnetic energy are constructed through the Euler-Lagrange equation of the dual functional, using the method of characteristics for a reduced eikonal-type equation. The second direction of this thesis is an attempt to obtain a reasonable dynamic process for the evolution of m, in which the asymptotic behavior of the gradient flow of the reduced energy functional is investigated.
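For orientation, the total micromagnetic energy referred to above is commonly written as follows (a standard normalization from the literature; the thesis's exact constants may differ):

```latex
E(m) \;=\; \underbrace{\alpha \int_{\Omega} |\nabla m|^{2}\,dx}_{\text{exchange}}
\;+\; \underbrace{\int_{\Omega} \varphi(m)\,dx}_{\text{anisotropy}}
\;-\; \underbrace{\int_{\Omega} H_{\mathrm{ext}}\cdot m\,dx}_{\text{external}}
\;+\; \underbrace{\tfrac12 \int_{\mathbb{R}^{3}} |\nabla u_m|^{2}\,dx}_{\text{magnetostatic}},
```

subject to the saturation constraint $|m| = 1$ in $\Omega$, where the stray-field potential $u_m$ solves $\Delta u_m = \operatorname{div}(m\,\chi_\Omega)$ in $\mathbb{R}^3$. The reduced model studied in the thesis corresponds to dropping the exchange term ($\alpha \to 0$).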
 Title
 Sharp estimates in harmonic analysis
 Creator
 Rey, Guillermo
 Date
 2015
 Collection
 Electronic Theses & Dissertations
 Description

We investigate certain sharp estimates related to singular integrals. In particular, we give sharp level set estimates for sparse operators, we show how to reduce the problem of estimating Calderón-Zygmund operators to that of estimating sparse operators, and we study some weighted inequalities for these operators.
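For reference, the sparse operators mentioned here are typically defined in the literature as averaging operators over a sparse family $\mathcal{S}$ of (dyadic) cubes:

```latex
\mathcal{A}_{\mathcal{S}} f(x) \;=\; \sum_{Q \in \mathcal{S}}
\left( \frac{1}{|Q|} \int_{Q} |f| \right) \mathbf{1}_{Q}(x),
```

where $\mathcal{S}$ is called sparse if each $Q \in \mathcal{S}$ contains a subset $E_Q \subset Q$ with $|E_Q| \ge \tfrac12 |Q|$ and the sets $E_Q$ are pairwise disjoint. Controlling a Calderón-Zygmund operator by such positive, localized averages is what makes sharp level set and weighted estimates tractable.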
 Title
 Statistical properties of some almost Anosov systems
 Creator
 Zhang, Xu
 Date
 2016
 Collection
 Electronic Theses & Dissertations
 Description

We investigate polynomial lower and upper bounds for the decay of correlations of a class of two-dimensional almost Anosov diffeomorphisms with respect to their Sinai-Ruelle-Bowen (SRB) measures, where an almost Anosov diffeomorphism is a system that is hyperbolic everywhere except at one point. At the indifferent fixed point, the Jacobian matrix is the identity. The degrees of the bounds are determined by the expansion and contraction rates as orbits approach the indifferent fixed point, and can be expressed in terms of the coefficients of the third-order terms in the Taylor expansions of the diffeomorphisms at the indifferent fixed points.

We discuss the relationship between the existence of SRB measures and the differentiability of some almost Anosov diffeomorphisms near the indifferent fixed points in dimensions greater than one. The eigenvalue of the Jacobian matrix at the indifferent fixed point along the one-dimensional contraction subspace is less than one, while the other eigenvalues, along the expansion subspaces, are equal to one. As a consequence, there are twice-differentiable almost Anosov diffeomorphisms that admit infinite SRB measures in two- or three-dimensional spaces, and there exist twice-differentiable almost Anosov diffeomorphisms with SRB measures in dimensions greater than three. Further, we obtain polynomial lower and upper bounds for the correlation functions of those almost Anosov maps that admit SRB measures.
 Title
 Multiscale modeling of polymer nanocomposites
 Creator
 Sheidaei, Azadeh
 Date
 2015
 Collection
 Electronic Theses & Dissertations
 Description

In recent years, polymer nanocomposites (PNCs) have gained increasing attention due to their improved mechanical, barrier, thermal, optical, electrical and biodegradable properties in comparison with conventional microcomposites or the pristine polymer. With a modest addition of nanoparticles (usually less than 5 wt.%), PNCs offer a wide range of improvements in moduli, strength, heat resistance and biodegradability, as well as decreases in gas permeability and flammability. Although PNCs offer enormous opportunities to design novel material systems, the development of an effective numerical modeling approach to predict their properties based on their complex multiphase and multiscale structure is still at an early stage. Developing a computational framework to predict the mechanical properties of PNCs is the focus of this dissertation.

A computational framework has been developed to predict the mechanical properties of polymer nanocomposites. In chapter 1, a microstructure-inspired material model is developed based on a statistical technique, which is used to reconstruct the microstructure of halloysite nanotube (HNT) polypropylene composite and of exfoliated graphene nanoplatelet (xGnP) polymer composite. The model successfully predicted the material behavior observed in experiments.

Chapter 2 summarizes the experimental work supporting the numerical work. First, different processing techniques for making polymer nanocomposites are reviewed. Among them, melt extrusion followed by injection molding was used to manufacture high density polyethylene (HDPE)-xGnP nanocomposites. Scanning electron microscopy (SEM) was also performed to determine particle size and distribution and to examine fracture surfaces. Particle size measured from these images was used to calculate the probability density function for GNPs in chapter 1. A series of nanoindentation tests was conducted to reveal the spatial variation of the superstructure developed along and across the flow direction of injection-molded HDPE/GNP. Uniaxial tensile tests and shear tests were conducted on HDPE and xGnP/HDPE specimens. The stress-strain curves for HDPE obtained from these experiments are used in chapter 5 to calibrate the modified Gurson-Tvergaard-Needleman (GTN) model to capture damage progression in HDPE.

In chapter 3, the 3D microstructure model developed in chapter 1 is incorporated into a damage modeling problem in the nanocomposite, where damage initiation is modeled using a cohesive-zone model. There is a significant difference between the properties of the inclusion and the host polymer in a polymer nanocomposite, which leads to damage evolution during deformation due to large stress concentrations between nanofiller and polymer. A finite element model of progressive debonding in the nano-reinforced composite is proposed based on a cohesive-zone model of the interface. In order to model the cohesive zone, a traction-displacement relation is needed. This curve may be obtained either through a fiber pullout experiment or by simulating the test using molecular dynamics. In the case of nanofillers, conducting a fiber pullout test is very difficult and the results are often not reproducible. In chapter 4, molecular dynamics simulations of the polymer nanocomposite are performed. One of the goals was to extract the load-displacement curves of a graphene/HDPE pullout test and obtain the cohesive zone parameters used in chapter 3. Finally, in chapter 5, a damage model of the HDPE/GNP nanocomposite is developed based on matrix cracking and fiber debonding. The 3D microstructure model is incorporated into a damage modeling problem where damage initiation and progression are modeled using the cohesive-zone and modified GTN material models.
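A common way to encode the traction-displacement relation mentioned above is a bilinear cohesive law. The sketch below is a generic textbook form, not code or parameters from the dissertation; all values are hypothetical:

```python
def bilinear_traction(delta, delta0, delta_f, t_max):
    """Bilinear cohesive law: traction rises linearly to t_max at the
    damage-initiation separation delta0, then softens linearly to zero
    at the complete-debonding separation delta_f."""
    if delta <= 0.0:
        return 0.0                  # no compression branch in this sketch
    if delta < delta0:
        return t_max * delta / delta0                         # elastic loading
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                      # fully debonded

# Fracture energy is the area under the curve: G_c = 0.5 * t_max * delta_f.
t_half = bilinear_traction(0.005, 0.01, 0.1, 50.0)  # halfway up the loading branch
```

Fitting the two shape parameters (peak traction and final separation) to a simulated pullout curve is exactly the role the molecular dynamics results of chapter 4 play for the finite element model of chapter 3.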
 Title
 Kernel methods for biosensing applications
 Creator
 Khan, Hassan Aqeel
 Date
 2015
 Collection
 Electronic Theses & Dissertations
 Description

This thesis examines the design of noise-robust information retrieval techniques based on kernel methods. Algorithms are presented for two biosensing applications: (1) high-throughput protein arrays and (2) noninvasive respiratory signal estimation.

Our primary objective in protein array design is to maximize throughput by enabling detection of an extremely large number of protein targets while using a minimal number of receptor spots. This is accomplished by viewing the protein array as a communication channel and evaluating its information transmission capacity as a function of its receptor probes. In this framework, the channel capacity can be used as a tool to optimize probe design, the optimal probes being the ones that maximize capacity. The information capacity is first evaluated for a small-scale protein array with only a few protein targets. We believe this is the first effort to evaluate the capacity of a protein array channel. For this purpose, models of the proteomic channel's noise characteristics and receptor non-idealities, based on experimental prototypes, are constructed. Kernel methods are employed to extend the capacity evaluation to larger protein arrays that can potentially have thousands of distinct protein targets. A specially designed kernel, which we call the Proteomic Kernel, is also proposed. This kernel incorporates knowledge about the biophysics of target-receptor interactions into the cost function employed for the evaluation of channel capacity.

For respiratory estimation, this thesis investigates estimation of breathing rate and lung volume using multiple noninvasive sensors under motion artifact and high-noise conditions. A spirometer signal is used as the gold standard for error evaluation. A novel algorithm called segregated envelope and carrier (SEC) estimation is proposed. This algorithm approximates the spirometer signal by an amplitude-modulated signal and segregates the estimation of the frequency and amplitude information. Results demonstrate that this approach enables effective estimation of both breathing rate and lung volume. An adaptive algorithm based on a combination of Gini kernel machines and wavelet filtering is also proposed. This algorithm, titled the wavelet-adaptive Gini (WAGini) algorithm, employs a novel wavelet-transform-based feature extraction front end to classify the subject's underlying respiratory state. This information is then used to select the parameters of the adaptive kernel machine based on the subject's respiratory state. Results demonstrate significant improvement in breathing rate estimation compared to traditional respiratory estimation techniques.
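The core idea of treating the respiratory signal as amplitude-modulated and estimating envelope and carrier separately can be illustrated with a standard analytic-signal decomposition. This sketch uses SciPy's Hilbert transform on synthetic data; it illustrates the amplitude/frequency segregation in general, not the dissertation's SEC algorithm itself:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic amplitude-modulated "respiratory" signal: a 0.25 Hz carrier
# (15 breaths/min) under a slowly drifting envelope.
fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rate_hz = 0.25
true_envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 0.02 * t)
x = true_envelope * np.sin(2 * np.pi * rate_hz * t)

# The analytic signal segregates the two components: its magnitude tracks
# the envelope (amplitude / lung-volume information), its phase derivative
# the carrier (frequency / breathing-rate information).
analytic = hilbert(x)
est_envelope = np.abs(analytic)
inst_phase = np.unwrap(np.angle(analytic))
est_rate_hz = np.median(np.diff(inst_phase)) * fs / (2 * np.pi)
```

The median of the instantaneous frequency recovers the breathing rate even though the amplitude varies over the recording, which is the property the SEC framing exploits.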
 Title
 Optimal design problems in thin-film and diffractive optics
 Creator
 Wang, Yuliang
 Date
 2013
 Collection
 Electronic Theses & Dissertations
 Description

Optical components built from thin-film layered structures are technologically very important. Applications include, but are not limited to, energy conversion and conservation, data transmission and conversion, space technology, and imaging. In practice these structures are defined by various parameters such as the refractive-index profile, the layer thicknesses and the period. The problem of finding the combination of parameters whose spectral response is closest to a given target function is referred to as optimal design. This dissertation considers several topics in the mathematical modeling and optimal design of these structures through numerical optimization.

A key step in numerical optimization is to define an objective function that measures the discrepancy between the target performance and that of the current solution. Our first topic is the impact of the objective function, its metric in particular, on the optimal solution and its performance. This is studied through numerical experiments with different types of antireflection coatings using two-material multilayers. The results confirm existing statements and provide a few new findings, e.g., certain metrics can yield markedly better solutions than others.

Rugates are optical coatings with continuous refractive-index profiles. They have received much attention recently due to technological advances and their potentially better optical performance and environmental properties. The Fourier transform method is a widely used technique for the design of rugates; however, it is based on approximate expressions with strict assumptions and has many practical limitations. Our second topic is the optimal design of rugates through numerical optimization of objective functions with penalty terms. We found solutions with similar performance, as well as novel solutions, by using different metrics in the penalty term. Existing methods used only local basis functions, such as piecewise constant or linear functions, for the discretization of the refractive-index profile. Our third topic is the use of global basis functions, such as sinusoidal functions, in the discretization. A simple transformation is used to overcome the difficulty of bound constraints, and the results are very promising. Both multilayer and rugate coatings can be obtained using this method.

Diffraction gratings are thin-film structures whose optical properties vary periodically along one or two directions. Our final topic is the optimal design of such structures in the broadband case. The objective functions and their gradients are obtained by solving variational problems and their adjoints with the finite element method. Interesting phenomena are observed in the numerical experiments. Limitations and future work in this direction are pointed out.
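Evaluating the spectral response that these objective functions compare against a target is commonly done with the transfer-matrix (characteristic-matrix) method. A minimal normal-incidence sketch of the standard textbook formulation, not code from the dissertation:

```python
import numpy as np

def reflectance(n_layers, d_layers, n_in, n_out, wavelength):
    """Normal-incidence reflectance of a lossless multilayer stack via the
    characteristic-matrix (transfer-matrix) method."""
    m = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        phi = 2 * np.pi * n * d / wavelength          # phase thickness
        m = m @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                          [1j * n * np.sin(phi), np.cos(phi)]])
    b, c = m @ np.array([1.0, n_out])
    r = (n_in * b - c) / (n_in * b + c)               # amplitude reflection
    return abs(r) ** 2

# Classic check: a quarter-wave MgF2 layer (n ~ 1.38) on glass at 550 nm
# roughly quarters the reflectance of the bare air-glass interface.
r_bare = reflectance([], [], 1.0, 1.52, 550e-9)
r_coated = reflectance([1.38], [550e-9 / (4 * 1.38)], 1.0, 1.52, 550e-9)
```

An optimal-design loop of the kind described above would wrap such a solver in an objective, e.g. a chosen metric of `reflectance` minus the target over a wavelength grid, and optimize the layer thicknesses.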
 Title
 Structure and evolutionary dynamics in fitness landscapes
 Creator
 Pakanati, Anuraag R.
 Date
 2015
 Collection
 Electronic Theses & Dissertations
 Description

Evolution can be conceptualized as an optimization algorithm that allows populations to search through genotypes for those that produce high-fitness solutions. This search process is commonly depicted as exploring a "fitness landscape", which combines similarity relationships among genotypes with the concept of a genotype-fitness map. As populations adapt to their fitness landscape, they accumulate information about the landscape in which they live. A greater understanding of evolution on fitness landscapes will help elucidate fundamental evolutionary processes.

I examine methods of estimating information acquisition in evolving populations and find that these techniques have largely ignored the effects of common descent. Since information is estimated by measuring conserved genomic regions across a population, common descent can create a severe bias by increasing similarity among unselected regions. I introduce a correction method to compensate for the effects of common descent on genomic information and empirically demonstrate its efficacy.

Next, I explore instantiations of three fitness landscapes, NK, Avida, and RNA, to better understand structural properties such as the distribution of peaks and the size of basins of attraction. I find that the fitness of a peak is correlated with the fitness of the peaks in its neighborhood, and that the size of a peak's basin of attraction tends to be proportional to its height. Finally, I visualize local dynamics and perform a detailed comparison between the space of evolutionary trajectories that are technically possible from a single starting point and the results of actual evolving populations.
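Of the three landscapes named, the NK model is the easiest to state concretely: each of N loci contributes a fitness component that depends on its own allele and on the alleles of K other loci. A minimal sketch of one common variant (adjacent, circular neighborhoods; a hypothetical illustration, not the dissertation's implementation):

```python
import random

def make_nk_landscape(n, k, seed=0):
    """NK fitness landscape: locus i's contribution depends on locus i and
    its k right-hand (circular) neighbors; contributions are drawn uniformly
    from [0, 1) and cached so the landscape is fixed for a given seed."""
    rng = random.Random(seed)
    tables = [dict() for _ in range(n)]

    def fitness(genotype):
        total = 0.0
        for i in range(n):
            # The sub-genotype "seen" by locus i.
            key = tuple(genotype[(i + j) % n] for j in range(k + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()   # lazily drawn contribution
            total += tables[i][key]
        return total / n                        # mean contribution in [0, 1)

    return fitness

f = make_nk_landscape(n=10, k=2)
peak_candidate = (0, 1, 0, 1, 0, 1, 0, 1, 0, 1)
w = f(peak_candidate)        # deterministic for a fixed seed
```

Tuning K from 0 to N-1 sweeps the landscape from smooth and single-peaked to maximally rugged, which is what makes the model useful for studying peak and basin structure.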
 Title
 Quantum information theory of measurement
 Creator
 Glick, Jennifer Ranae
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

Quantum measurement lies at the heart of quantum information processing and is one of the criteria for quantum computation. Despite its central role, there remains a need for a robust quantum information-theoretical description of measurement. In this work, I will quantify how information is processed in a quantum measurement by framing it in quantum information-theoretic terms. I will consider a diverse set of measurement scenarios, including weak and strong measurements, and parallel and consecutive measurements. In each case, I will perform a comprehensive analysis of the role of entanglement and entropy in the measurement process and track the flow of information through all subsystems. In particular, I will discuss how weak and strong measurements are fundamentally of the same nature and show that weak values can be computed exactly for certain measurements with an arbitrary interaction strength. In the context of the Bell-state quantum eraser, I will derive a trade-off between the coherence and "which-path" information of an entangled pair of photons and show that a quantum information-theoretic approach yields additional insights into the origins of complementarity. I will consider two types of quantum measurements: those made within a closed system, where every part of the measurement device, the ancilla, remains under control (what I will call unamplified measurements), and those performed within an open system, where some degrees of freedom are traced over (amplified measurements). For sequences of measurements of the same quantum system, I will show that information about the quantum state is encoded in the measurement chain and that some of this information is "lost" when the measurements are amplified: the ancillae become equivalent to a quantum Markov chain. Finally, using the coherent structure of unamplified measurements, I will outline a protocol for generating remote entanglement, an essential resource for quantum teleportation and quantum cryptographic tasks.
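The entropy bookkeeping described here can be illustrated on the simplest case: a strong measurement of a qubit modeled as a CNOT onto an ancilla, after which the system-ancilla entanglement entropy equals the Shannon entropy of the outcome distribution. A toy NumPy sketch (a generic textbook illustration, not the dissertation's formalism):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), in bits."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]          # drop numerical zeros
    return float(-np.sum(vals * np.log2(vals)))

# System qubit |psi> = a|0> + b|1>; the CNOT "measurement" interaction
# maps |psi>|0> to a|00> + b|11> (system, ancilla).
a, b = np.sqrt(0.3), np.sqrt(0.7)
psi = np.array([a, 0.0, 0.0, b])
rho = np.outer(psi, psi.conj())

# Reduced state of the system: trace out the ancilla (second qubit).
rho_sys = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

s = von_neumann_entropy(rho_sys)       # equals H(0.3) ~ 0.881 bits
```

Tracing over the ancilla (the "amplified" case in the abstract's terminology) leaves the system in a diagonal mixture, so its entropy is exactly the classical entropy of the measurement outcomes.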
 Title
 Modeling emerging solar cell materials and devices
 Creator
 Thongprong, Non
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

Organic photovoltaics (OPVs) and perovskite solar cells are emerging classes of solar cells that are promising clean-energy alternatives to fossil fuels. Understanding the fundamental physics of these materials is crucial for improving their energy conversion efficiencies and promoting them to practical applications.

Current density-voltage (J-V) curves, which are important indicators of OPV efficiency, have direct connections to many fundamental properties of solar cells. They can be described by the Shockley diode equation, resulting in the fitting parameters: series and parallel resistance (Rs and Rp), diode saturation current (J0) and ideality factor (n). However, the Shockley equation was developed specifically for inorganic p-n junction diodes, so it lacks physical meaning when applied to OPVs. Hence, the purposes of this work are to understand the fundamental physics of OPVs and to develop new diode equations, in the same form as the Shockley equation, that are based on OPV physics.

We develop a numerical drift-diffusion simulation model to study bilayer OPVs, called the drift-diffusion for bilayer interface (DDBI) model. The model solves the Poisson, drift-diffusion and current-continuity equations self-consistently for the charge densities and potential profiles of a bilayer device with an organic heterojunction interface described by the GWWF model. We also derive new diode equations that have J-V curves consistent with the DDBI model, which we call self-consistent diode (SCD) equations.

Using the DDBI and SCD models allows us to understand the working principles of bilayer OPVs and the physical definitions of the Shockley parameters. Due to the low carrier mobilities in OPVs, space charge accumulation is common, especially near the interface and the electrodes. Hence, the quasi-Fermi levels (i.e., chemical potentials), which depend on the charge densities, are modified around the interface, resulting in a splitting of quasi-Fermi levels that acts as a driving potential for the heterojunction diode. This gives Rs the meaning of the resistance that makes the diode voltage equal to the interface quasi-Fermi level splitting rather than to the voltage between the electrodes. Quasi-Fermi levels that drop near the electrodes, because of unmatched electrode work functions or due to charge injection, can also increase Rs. Furthermore, we are able to study the dissociation and recombination rates of bound charge pairs across the interface (i.e., polaron pairs, or PPs) and arrive at the physical meaning of Rp as the recombination resistance of PPs. In the dark, the PP density is very low, so Rp is possibly caused by a tunneling leakage current at the interface. Ideality factors are parameters that depend on the splitting of the quasi-Fermi levels and on the ratio of the recombination rate to its equilibrium value. Even though they are related to trap characteristics, as commonly understood, these relations are complicated and careful interpretation of fitted ideality factors is needed. Our models are successfully applied to actual devices, and useful physics can be deduced, for example differences between the Shockley parameters under dark and illuminated conditions.

Another purpose of this thesis is to study the electronic properties of CsSnBr3 perovskite and processes for growing the perovskite film using an epitaxy technique. Calculations using density functional theory reveal that a CsSnBr3 film grown on a NaCl(100) substrate can undergo a phase transition to CsSn2Br5, a wide-bandgap semiconductor material. The actual mechanisms of the transition and the interface between CsSnBr3 and CsSn2Br5 are interesting topics for future study.
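The Shockley-type fit underlying the Rs/Rp/J0/n parameters is implicit in J, since the junction voltage is V - J*Rs. A standard pointwise numerical treatment of the generic single-diode equation (a textbook form with illustrative, hypothetical parameter values; not the DDBI or SCD models):

```python
import numpy as np
from scipy.optimize import brentq

def shockley_current(v, j0, n, rs, rp, jph=0.0, vt=0.02585):
    """Solve the implicit single-diode equation
        J = J0*(exp((V - J*Rs)/(n*Vt)) - 1) + (V - J*Rs)/Rp - Jph
    for J at one applied voltage V (V >= 0 assumed for the bracket)."""
    def residual(j):
        vd = v - j * rs                      # voltage across the junction
        return j0 * (np.exp(vd / (n * vt)) - 1.0) + vd / rp - jph - j
    lo = -(jph + j0 + 1e-6)                  # residual > 0 at lo
    hi = v / rs + jph + 1.0                  # residual < 0 at hi
    return brentq(residual, lo, hi)

# Hypothetical parameters (A/cm^2, ohm*cm^2), not fitted to any real device.
j_dark = shockley_current(0.5, j0=1e-9, n=1.5, rs=1.0, rp=1e4)
j_light = shockley_current(0.0, j0=1e-9, n=1.5, rs=1.0, rp=1e4, jph=0.02)
# At V = 0 under illumination, J is close to -Jph (short-circuit current).
```

Fitting (j0, n, rs, rp) of this curve to measured J-V data is what produces the parameters whose physical reinterpretation for OPVs is the subject of this thesis.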