Search results
(1 - 20 of 126)
- Title
- A container-attachable inertial sensor for real-time hydration tracking
- Creator
- Griffith, Henry
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
The underconsumption of fluid is associated with multiple adverse health outcomes, including reduced cognitive function, obesity, and cancer. To aid individuals in maintaining adequate hydration, numerous sensing architectures for tracking fluid intake have been proposed. Amongst the various approaches considered, container-attachable inertial sensors offer a non-wearable solution capable of estimating aggregate consumption across multiple drinking containers. The research described herein demonstrates techniques for improving the performance of these devices.

A novel sip detection algorithm designed to accommodate the variable duration and sparse occurrence of drinking events is presented at the beginning of this dissertation. The proposed technique identifies drinks using a two-stage segmentation and classification framework. Segmentation is performed using a dynamic partitioning algorithm which spots the characteristic inclination pattern of the container during drinking. Candidate drinks are then distinguished from handling activities with similar motion patterns using a support vector machine classifier. The algorithm is demonstrated to improve the true positive detection rate from 75.1% to 98.8% versus a benchmark approach employing static segmentation.

Multiple strategies for improving drink volume estimation performance are demonstrated in the latter portion of this dissertation. Proposed techniques are verified through a large-scale data collection consisting of 1,908 drinks consumed by 84 individuals over 159 trials. Support vector machine regression models are shown to improve per-drink estimation accuracy versus the prior state of the art for a single inertial sensor, with mean absolute percentage error reduced by 11.1%. Aggregate consumption accuracy is also improved versus previously reported results for a container-attachable device.

An approach for computing aggregate consumption using fill level estimates is also demonstrated. Fill level estimates are shown to exhibit superior accuracy with reduced inter-subject variance versus volume models. A heuristic fusion technique for further improving these estimates is also introduced herein. Heuristic fusion is shown to reduce root mean square error versus direct estimates by over 30%. The dissertation concludes by demonstrating the ability of the sensor to operate across multiple containers.
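The two-stage framework this abstract describes can be sketched in miniature: a threshold segmenter spots runs of sustained container inclination, then an SVM separates drink-like segments from brief handling motions. Everything below (the synthetic pitch trace, the 30-degree tilt threshold, and the [duration, mean pitch] features) is an illustrative assumption, not the dissertation's actual signal processing.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# --- Stage 1: dynamic segmentation of the container's pitch signal ---
def segment_candidates(pitch, threshold=30.0, min_len=10):
    """Collect index runs where inclination stays above a tilt threshold."""
    segments, start = [], None
    for i, tilted in enumerate(pitch > threshold):
        if tilted and start is None:
            start = i
        elif not tilted and start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(pitch) - start >= min_len:
        segments.append((start, len(pitch)))
    return segments

# Synthetic pitch trace (degrees): flat baseline, one sustained drink-like
# tilt, and one brief handling bump.
pitch = rng.normal(5.0, 2.0, 600)
pitch[100:180] += 55.0   # sustained inclination: plausible drink
pitch[400:420] += 40.0   # short bump: container handling
segs = segment_candidates(pitch)

# --- Stage 2: SVM classification of candidate segments ---
def features(pitch, seg):
    s, e = seg
    return [e - s, float(pitch[s:e].mean())]   # [duration, mean pitch]

# Hypothetical labelled training events: 1 = drink, 0 = handling.
X_train = np.array([[80, 60], [70, 55], [90, 65], [60, 58],
                    [20, 45], [25, 40], [15, 50], [30, 42]], dtype=float)
y_train = np.array([1, 1, 1, 1, 0, 0, 0, 0])

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
pred = clf.predict(np.array([features(pitch, s) for s in segs]))
print(segs, pred)   # the long segment should classify as a drink
```

The split into a cheap segmenter plus a discriminative classifier mirrors the abstract's rationale: segmentation handles the variable duration of drinks, and the classifier rejects handling activities that also tilt the container.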
- Title
- Effect of pavement structural response on rolling resistance and fuel economy
- Creator
- Balzarini, Danilo
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
The massive use of fuel required by road transportation is responsible for the exploitation of non-renewable energy sources, is a major source of pollutant emissions, and carries high economic costs. Rolling resistance is one factor affecting vehicle energy consumption; structural rolling resistance (SRR) is the component of rolling resistance that arises from the deformation of the pavement structure. The present research investigates the SRR in order to identify its causes, characterize it, and develop instruments to predict its impact on fuel consumption for different road and traffic conditions.

First, methods to calculate the SRR on asphalt and concrete pavements were developed. The structural rolling resistance is calculated as the resistance to motion caused by the uphill slope seen by the tires due to the pavement deformation. The SRR can be converted into fuel consumption using the calorific value of the fuel and the engine efficiency, and the greenhouse gas emissions associated with it can be calculated.

Purely mechanistic models were used to determine the structural rolling resistance, and the fuel consumption associated with it, on 17 California pavement sections under different loading and environmental conditions. The results were used to develop simple and rapid-to-use mechanistic-empirical heuristic models to predict the energy dissipation associated with the structural rolling resistance on any asphalt or concrete pavement.

The difference in fuel consumption and pollutant emissions between different pavement structures can be significant and could be included in economic evaluations and life-cycle assessment studies. For this purpose, a practical tool was developed, based on the heuristic models, that allows calculation of the fuel consumption associated with the SRR for any given traffic and pavement section. Examples of applications of such a tool are presented and discussed.
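The conversion chain the abstract describes (SRR force, to dissipated energy, to fuel, to greenhouse gas) is simple enough to sketch as arithmetic. The numbers below are generic textbook values chosen for illustration, not the dissertation's measured California results.

```python
# Illustrative, generic values - not the dissertation's measured data.
srr_force_n = 40.0                  # structural rolling resistance force [N]
distance_km = 100.0                 # trip length
calorific_value_j_per_l = 34.2e6    # gasoline, ~34.2 MJ per litre
engine_efficiency = 0.30            # fraction of fuel energy reaching the wheels

energy_j = srr_force_n * distance_km * 1000.0       # work done against the SRR
fuel_l = energy_j / (calorific_value_j_per_l * engine_efficiency)
co2_kg = fuel_l * 2.31              # ~2.31 kg CO2 emitted per litre of gasoline

print(f"{fuel_l:.3f} L of fuel and {co2_kg:.3f} kg CO2 over {distance_km:.0f} km")
```

Dividing by the engine efficiency is the key step: only a fraction of the fuel's calorific value reaches the wheels, so the fuel attributable to the SRR is correspondingly larger than the mechanical work alone would suggest.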
- Title
- Re-calibration of rigid pavement performance models and development of traffic inputs for Pavement-ME design in Michigan
- Creator
- Musunuru, Gopi Krishna
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
The mechanistic-empirical pavement design guide (AASHTOWare Pavement-ME) incorporates mechanistic models to estimate stresses, strains, and deformations in pavement layers using site-specific climatic, material, and traffic characteristics. These structural responses are used to predict pavement performance using empirical models (i.e., transfer functions). The transfer functions need to be calibrated to improve the accuracy of the performance predictions, reflecting the unique field conditions and design practices. The existing local calibrations of the performance models were performed using version 2.0 of the Pavement-ME software. However, AASHTO has released versions 2.2 and 2.3 of the software since the completion of the last study. In the revised versions, several bugs were fixed and some performance models were modified. As a result, the concrete pavement IRI predictions and the resulting PCC slab thicknesses have been affected. The performance predictions varied significantly from the observed structural and functional distresses, and hence the performance models were recalibrated to enhance confidence in pavement designs. Linear and nonlinear mixed-effects models were used for calibration to account for the non-independence among data measured on the same sections over time. Also, climate data, material properties, and design parameters were used to develop a model for predicting permanent curl at each location, addressing some limitations of the Pavement-ME. This model can be used at the design stage to estimate permanent curl for a given location in Michigan.

Pavement-ME also requires specific types of traffic data to design new or rehabilitated pavement structures. The traffic inputs include monthly adjustment factors (MAF), hourly distribution factors (HDF), vehicle class distributions (VCD), axle groups per vehicle (AGPV), and axle load distributions for different axle configurations. During the last seven years, new traffic data were collected, reflecting recent economic growth as well as added and downgraded WIM sites. Hence, it was appropriate to re-evaluate the current traffic inputs and incorporate any changes. Weight and classification data were obtained from 41 weigh-in-motion (WIM) sites located throughout the State of Michigan to develop Level 1 (site-specific) traffic inputs. Cluster analyses were conducted to group sites for the development of Level 2A inputs. Classification models such as decision trees, random forests, and the naive Bayes classifier were developed to assign a new site to these clusters; however, this proved difficult. An alternative simplified method to develop Level 2B inputs by grouping sites with similar attributes was also adopted. The optimal set of attributes for developing these Level 2B inputs was identified using an algorithm developed in this study. The effects of the developed hierarchical traffic inputs on the predicted performance of rigid and flexible pavements were investigated using the Pavement-ME. Based on the statistical and practical significance of the life differences, appropriate levels were established for each traffic input. The methodology for developing traffic inputs is intuitive and practical for future updates. There is also a need to identify changes in traffic patterns so that the traffic inputs can be updated and pavement sections are neither overdesigned nor under-designed. Models were developed in which short-term counts from the PTR sites can be used as inputs to check whether new traffic patterns cause any substantial differences in design-life predictions.
- Title
- I. AMHB : (anti)aromaticity-modulated hydrogen bonding. II. evaluation of implicit solvation models for predicting hydrogen bond free energies
- Creator
- Kakeshpour, Tayeb
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
My doctoral research under Professor James E. Jackson focused on hydrogen bonding (H-bonding) using physical organic chemistry tools. In the first chapter, I present how I used quantum chemical simulations, synthetic organic chemistry, NMR spectroscopy, and X-ray crystallography to provide robust theoretical and experimental evidence for an interplay between (anti)aromaticity and H-bond strength of heterocycles, a concept that we dubbed (Anti)aromaticity-Modulated Hydrogen Bonding (AMHB). In the second chapter, I used accurately measured hydrogen bond energies for a range of substrates and solvents to evaluate the performance of implicit solvation models in combination with density functional methods for predicting solution-phase hydrogen bond energies. This benchmark study provides useful guidelines for a priori modeling of hydrogen bonding-based designs. Coordinates of the optimized geometries and crystal structures are provided as supplementary materials.
- Title
- Comparison of methods for detecting violations of measurement invariance with continuous construct indicators using latent variable modeling
- Creator
- Zhang, Mingcai (Graduate of Michigan State University)
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
-
Measurement invariance (MI) refers to a measurement instrument measuring the same concept in the same way across two or more groups. In educational and psychological testing practice, however, the assumption of MI is often violated due to contamination by possible noninvariance in the measurement models. Within the framework of latent variable modeling (LVM), methodologists have developed different statistical methods to identify the noninvariant components. Among these methods, the free baseline method (FR) is popularly employed, but it is limited by the necessity of choosing a truly invariant reference indicator (RI). Two other methods, the Benjamini-Hochberg method (B-H) and the alignment method (AM), are exempt from the RI setting. The B-H method applies the false discovery rate (FDR) procedure. The AM method optimizes the model estimates under the assumption of approximate invariance. The purpose of the present study is to address the problem of RI setting by comparing the B-H method and the AM method with the traditional free baseline method through both a simulation study and an empirical data analysis. More specifically, the simulation study investigates the performance of the three methods while varying the sample sizes and the characteristics of noninvariance embedded in the measurement models. The characteristics of noninvariance are distinguished as the location of noninvariant parameters, the degree of noninvariance in those parameters, and the magnitude of model noninvariance. The performance of the three methods is also compared on an empirical dataset (the Openness for Problem Solving Scale in PISA 2012) obtained from three countries (Shanghai-China, Australia, and the United States).

The simulation study finds that a wrong RI choice heavily impacts the FR method, which then produces high Type I error rates and low statistical power. Both the B-H method and the AM method perform better than the FR method in this setting. Comparatively speaking, the benefit of the B-H method is that it achieves the highest power for detecting noninvariance; the power rate increases with lower magnitude of model noninvariance, larger sample size, and greater degree of noninvariance. The AM method performs best with respect to Type I errors: the Type I error rates estimated by the AM method are low under all simulation conditions. In the empirical study, the B-H method and the AM method perform similarly in estimating the invariance/noninvariance patterns among the three country pairs. However, the FR method, for which the RI is the first item by default, recovers a different invariance/noninvariance pattern. The results can help methodologists gain a better understanding of the potential advantages of the B-H method and the AM method over the traditional FR method. The study results also highlight the importance of correctly specifying the model noninvariance at the indicator level. Based on the characteristics of the noninvariant components, practitioners may consider deleting or modifying the noninvariant indicators, or freeing the noninvariant components while building partially invariant models, in order to improve the quality of cross-group comparisons.
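The step-up procedure at the heart of the B-H method is short enough to state in full. This is the generic Benjamini-Hochberg FDR procedure applied to an invented vector of p-values, not the study's simulation code:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean rejection mask controlling the FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Step-up rule: find the largest i with p_(i) <= (i/m) * q.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))
        reject[order[:k + 1]] = True   # reject all hypotheses up to that rank
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
print(benjamini_hochberg(pvals, q=0.05))   # only the two smallest survive here
```

In the invariance-testing context, each p-value would come from a test of one measurement parameter's equality across groups, and the mask marks the parameters flagged as noninvariant while controlling the expected proportion of false flags.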
- Title
- Variable selection in varying multi-index coefficient models with applications to gene-environmental interactions
- Creator
- Guan, Shunjie
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
Variable selection is an important topic in the modern statistics literature, and the varying multi-index coefficient model (VMICM) is a promising tool for studying synergistic interaction effects between genes and multiple environmental exposures. In this dissertation, we propose a variable selection approach for the VMICM and generalize it to generalized-regression and quantile-regression settings. The theoretical properties, simulation performance, and applications in genetic research of these methods are studied.

Complicated diseases have both environmental and genetic risk factors, and a large amount of research has been devoted to identifying gene-environment (G×E) interactions. Defined as a difference in the effect of a genotype on disease risk among persons with different environmental exposures (Ottman (1996)), G×E interaction lets us view environmental exposures as modulating factors in the effect of a gene. Based on this idea, we derived a three-stage variable selection approach to estimate different effects of gene variables: varying, constant, and zero, which respectively correspond to a nonlinear G×E effect, no G×E effect, and no genetic effect. For multiple environmental exposure variables, we also select and estimate the important environmental variables that contribute to the synergistic interaction effect. We theoretically evaluated the oracle property of the three-step estimation method. We conducted simulation studies to further evaluate the finite-sample performance of the method, considering both continuous and discrete predictors. Application to a real data set demonstrated the utility of the method.

In Chapter 3, we generalized this variable selection approach to the binary response setting. Instead of minimizing a penalized squared-error loss, we maximize a penalized log-likelihood function. We also theoretically evaluated the oracle property of the proposed selection approach in the binary response setting, demonstrated the performance of the model via simulation, and applied the model to a Type II diabetes data set.

Compared to conditional mean regression, conditional quantile regression can provide a more comprehensive understanding of the distribution of the response variable at different quantiles. Even if the center of the distribution is our only interest, median regression (a special case of quantile regression) offers a more robust estimator. Hence, we extended our three-stage variable selection approach to the quantile regression setting in Chapter 4. We demonstrated the finite-sample performance of the model via extensive simulation and applied it to a birth weight data set.
- Title
- Numerical methods for gravity inversion, synthetic aperture radar, and travel-time tomography
- Creator
- Gao, Qinfeng
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
"Inverse problems have many applications. In this thesis, we focus on designing and implementing numerical methods for three inverse problems: gravity inversion, synthetic aperture radar, and travel-time tomography. We present extensive numerical examples to demonstrate that these algorithms are stable and efficient. In Chapter 2, low-rank approximation is incorporated into a local level-set method for gravity inversion. This change helps to reduce the computational time of the mismatch gravity force term on the boundary, and reduces the computational complexity from O(N^3) to O(N^2) in 2D and from O(N^5) to O(N^4) in 3D. Many numerical results show that the locations of unknown objects are accurately captured by this low-rank level-set method. In Chapter 3, both the wave equation and the Radon transform are employed as approaches to the synthetic aperture radar problem. The wave-equation-based method includes harmonic extension at terminal time, solving the wave equation backward using a perfectly matched layer, and Neumann iteration. These two methods provide comparable results and help to show that a curved flight path is no better than a straight one. In Chapter 4, we implement the finite element method with a penalization-regularization-operator-splitting scheme for travel-time tomography based on the eikonal equation. Both the travel time and the slowness are recovered with this algorithm in both 2D and 3D. Finally, Chapter 5 contains our conclusions."--Page ii.
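The complexity reduction in Chapter 2 comes from replacing a dense operator with a low-rank factorization. A generic truncated-SVD sketch (not the chapter's gravity kernel, which is only approximately low rank) shows the idea: an O(N^2) matrix-vector product becomes O(Nk) for numerical rank k.

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 400, 12
# A dense matrix that is exactly low rank, standing in for a smooth kernel.
A = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))            # numerical rank
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k]       # truncated factors

x = rng.normal(size=n)
y_full = A @ x                              # O(n^2) dense matvec
y_low = Uk @ (sk * (Vtk @ x))               # O(n*k) factored matvec
print(k, np.allclose(y_full, y_low))
```

In the level-set iteration, this factored form is applied many times, which is where the order-of-magnitude savings reported in the abstract come from.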
- Title
- Brain connectivity analysis using information theory and statistical signal processing
- Creator
- Wang, Zhe (Software engineer)
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
Connectivity between different brain regions generates our minds. Existing work on brain network analysis has mainly focused on characterizing connections between regions in terms of connectivity and causality. Connectivity measures the dependence between regional brain activities, while causality analysis aims to determine the directionality of information flow among functionally connected brain regions and to find the relationship between causes and effects.

Traditionally, the study of connectivity and causality has largely been limited to linear relationships. In this dissertation, as an effort to achieve more accurate characterization of connections between brain regions, we aim to go beyond the linear model and develop innovative techniques for both non-directional and directional connectivity analysis. Because of individual variability in brain connectivity, the connectivity between two brain regions alone may not be sufficient for brain function analysis; in this research, we therefore also conduct network connectivity pattern analysis to reveal more in-depth information.

First, we characterize non-directional connectivity using mutual information (MI). In recent years, MI has gradually emerged as an alternative metric for brain connectivity, since it measures both linear and non-linear dependence between two brain regions, while the traditional Pearson correlation measures only linear dependence. We develop an innovative approach to estimate the MI between two functionally connected brain regions and apply it to brain functional magnetic resonance imaging (fMRI) data. It is shown that, on average, cognitively normal subjects show larger mutual information between critical regions than Alzheimer's disease (AD) patients.

Second, we develop new methodologies for brain causality analysis based on directed information (DI). Traditionally, brain causality analysis has been based on the well-known Granger causality (GC) framework. The validity of GC has been widely recognized. However, it has also been noticed that GC relies heavily on linear prediction; when strong nonlinear interactions exist between two regions, GC analysis may lead to invalid results. In this research, (i) we develop an innovative framework for causality analysis based on DI, which reflects the information flow from one region to another and imposes no modeling constraints on the data. It is shown that DI-based causality analysis is effective in capturing both linear and non-linear causal relationships. (ii) We show the conditional equivalence between the DI framework and Friston's dynamic causal modeling (DCM), and reveal the relationship between directional information transfer and cognitive state change within the brain.

Finally, based on brain network connectivity pattern analysis, we develop a robust method for classifying AD, mild cognitive impairment (MCI), and normal control (NC) subjects under size-limited fMRI data samples. First, we calculate the Pearson correlation coefficients between all possible ROI pairs in the selected sub-network and use them to form a feature vector for each subject. Second, we develop a regularized linear discriminant analysis (LDA) approach to reduce the noise effect. The feature vectors are then projected onto a subspace using the proposed regularized LDA, where the differences between AD, MCI, and NC subjects are maximized. Finally, a multi-class AdaBoost classifier is applied to carry out the classification task. Numerical analysis demonstrates that the combination of regularized LDA and the AdaBoost classifier can increase the classification accuracy significantly.
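The classification pipeline in the last paragraph (correlation features, a regularized LDA projection, then multi-class AdaBoost) can be imitated on synthetic data. The Gaussian class means, sample sizes, and Ledoit-Wolf shrinkage below stand in for the dissertation's actual fMRI features and its specific regularization scheme.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# Stand-in for ROI-pair correlation features: 3 classes (NC, MCI, AD),
# few samples, many features - the size-limited regime the chapter targets.
n_per, d = 30, 45
means = [np.zeros(d), 0.4 * np.ones(d), 0.8 * np.ones(d)]
X = np.vstack([m + rng.normal(size=(n_per, d)) for m in means])
y = np.repeat([0, 1, 2], n_per)

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# Shrinkage ('auto' = Ledoit-Wolf) regularizes the covariance estimate,
# the role played by the chapter's regularized LDA; the eigen solver
# supports projecting onto the (n_classes - 1)-dimensional subspace.
lda = LinearDiscriminantAnalysis(solver="eigen", shrinkage="auto").fit(Xtr, ytr)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(
    lda.transform(Xtr), ytr)
acc = clf.score(lda.transform(Xte), yte)
print(round(acc, 2))
```

With three classes the LDA projection is two-dimensional, so the boosted classifier operates on a drastically reduced, noise-suppressed feature space, which is the point of combining the two stages.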
- Title
- The mathematical models of nutritional plasticity and the bifurcation in a nonlocal diffusion equation
- Creator
- Liang, Yu, Ph. D.
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
The thesis consists of two parts. In the first part, I investigate the developmental mechanisms that regulate the nutritional plasticity of organ sizes in Drosophila melanogaster, the fruit fly. Here I focus on the insulin-like signalling (IIS) pathway, through which developmental nutrition is signalled to growing organs. Two mathematical models, an ODE model and a PDE model, are established based on the IIS pathway. In the ODE model, the circulating gene expression of each component of the IIS pathway is treated as a model variable. By analyzing the steady states of the ODE model under different parameter settings, I verify the hypothesis that the difference in nutritional plasticity among the organs of Drosophila is due to variation in the total gene expression of components of the IIS pathway. Furthermore, the forkhead transcription factor FOXO, a negative growth regulator that is activated when nutrition and insulin signaling are low, is a key factor in maintaining organ-specific differences in nutritional plasticity and insulin sensitivity. In the PDE model, I focus more on the molecular structure within each individual cell: the transport of proteins between the nucleus and the cell membrane is modelled in the system. In simulations of the PDE system, the hypothesis that the concentration of FOXO decreases as the concentration of insulin increases is verified.

In the second part of the thesis, I study the bifurcation properties of the nonlocal diffusion equation
\[ L_{\epsilon} u + \lambda (u - u^3) = 0, \]
where $L_{\epsilon} u$ is the integral operator
\[ L_{\epsilon} u = \int_{0}^{\pi} \epsilon^{-3} J\!\left( \frac{y-x}{\epsilon} \right) \left( u(y) - u(x) \right) \, dy \]
and $J(x)$ is a non-negative radially symmetric function with $J(0) > 0$. It is shown that when the scaling parameter $\epsilon$ is small enough, the equation has pitchfork bifurcations at the spectrum of the operator $L_{\epsilon}$. A concrete example is considered, in which the bifurcation result is verified by solving the equation with Newton's method.
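The final verification step can be caricatured in one dimension: projecting onto a single eigenmode of $L_{\epsilon}$ with eigenvalue $-\mu$ reduces the equation to $-\mu u + \lambda(u - u^3) = 0$, whose nontrivial branches $\pm\sqrt{1 - \mu/\lambda}$ exist only for $\lambda > \mu$, the pitchfork. This scalar reduction is an illustrative assumption, not the thesis's actual discretization:

```python
import numpy as np

def newton(f, df, u0, tol=1e-12, maxit=50):
    """Scalar Newton iteration for f(u) = 0."""
    u = u0
    for _ in range(maxit):
        step = f(u) / df(u)
        u -= step
        if abs(step) < tol:
            break
    return u

mu, lam = 1.0, 1.5                        # lam > mu: past the bifurcation point
f = lambda u: -mu * u + lam * (u - u**3)  # scalar caricature of L u + lam(u - u^3)
df = lambda u: -mu + lam * (1.0 - 3.0 * u**2)

upper = newton(f, df, 0.9)    # converges to +sqrt(1 - mu/lam)
lower = newton(f, df, -0.9)   # symmetric branch: -sqrt(1 - mu/lam)
print(upper, lower, np.sqrt(1.0 - mu / lam))
```

Starting Newton's method from symmetric initial guesses lands on the two symmetric nontrivial branches, while $u = 0$ remains a solution for all $\lambda$; that three-solution structure past $\lambda = \mu$ is the pitchfork being verified.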
- Title
- Design and simulation of single-crystal diamond diodes for high voltage, high power and high temperature applications
- Creator
- Suwanmonkha, Nutthamon
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
Diamond has exceptional properties and great potential for making high-power semiconducting electronic devices that surpass the capabilities of other common semiconductors, including silicon. The superior properties of diamond include its wide bandgap, high thermal conductivity, large electric breakdown field, and fast carrier mobilities. All of these properties are crucial for a semiconductor used to make electronic devices that can operate at high power levels, high voltage, and high temperature.

Two-dimensional semiconductor device simulation software such as Medici assists engineers in designing device structures that meet the performance requirements of device applications. Most physical material parameters of the well-known semiconductors are already compiled and embedded in Medici; however, diamond is not one of them. Material parameters for diamond, including models for incomplete ionization, temperature- and impurity-dependent mobility, and impact ionization, are not readily available in software such as Medici. In this work, models and data for diamond as a semiconductor material were developed for Medici, based on results reported in the research literature and on experimental work at Michigan State University. After equipping Medici with diamond material parameters, simulations of various diamond diodes, including Schottky, PN-junction, and merged Schottky/PN-junction diode structures, are reported. Diodes are simulated versus changes in doping concentration, drift layer thickness, and operating temperature. In particular, the diode performance metrics studied include the breakdown voltage, turn-on voltage, and specific on-resistance. The goal is to find designs which yield low power loss and provide high voltage blocking capability. Simulation results are presented that provide insight for the design of diamond diodes using the various diode structures. Results are also reported on the use of field plate structures in the simulations to control the electric field and increase the breakdown voltage.
- Title
- Semiparametric models for mouth-level indices in caries research
- Creator
- Yang, Yifan
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
For nonnegative count responses in health services research, a large proportion of zero counts is frequently encountered. For such data, the frequency of zero counts is typically larger than expected under classical parametric models such as the Poisson or negative binomial model. In this thesis, a semiparametric zero-inflated regression model is proposed for count data that directly relates covariates to the marginal mean response, the desired target of inference. The model assumes two semiparametric forms: a log-linear form for the marginal mean and a logistic-linear form for the susceptibility probability, in which the fully linear predictors are replaced with partially linear link functions. A spline-based estimator is proposed for the nonparametric components of the model. Asymptotic properties are established for the estimators of both the parametric and nonparametric components; specifically, the estimators are shown to be strongly consistent and asymptotically efficient under mild regularity conditions. A bootstrap hypothesis test is developed to evaluate differences involving the nonparametric component. Simulation studies evaluate the finite-sample performance of the model. Finally, the model is applied to dental caries indices in low-income African-American children to evaluate the nonlinear effect of sugar intake on caries development. The analysis shows that the effect of sugar intake on caries indices is nonlinear, especially among children under the age of 2, and that children whose caregivers are unemployed and have poor oral health exhibit higher dental caries rates.
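The zero-inflation mechanism this abstract describes can be sketched with a short simulation (all functional forms and coefficients below are illustrative choices, not the thesis's fitted model): a subject is susceptible with a logistic-linear probability, and susceptible subjects produce Poisson counts with a log-linear mean, so the data show more zeros than a plain Poisson fit would predict.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Covariate and the two semiparametric components; the linear forms
# here stand in for the partially linear link functions of the thesis.
x = rng.uniform(0.0, 1.0, n)
p_susceptible = 1.0 / (1.0 + np.exp(-(-0.5 + 2.0 * x)))  # logistic-linear
mu_susceptible = np.exp(0.2 + 1.0 * x)                   # log-linear

# Zero-inflated counts: structural zeros plus Poisson counts.
susceptible = rng.random(n) < p_susceptible
y = np.where(susceptible, rng.poisson(mu_susceptible), 0)

# The observed zero fraction exceeds the zero probability of a plain
# Poisson with the same overall mean -- the hallmark of zero inflation.
zero_frac = float(np.mean(y == 0))
poisson_zero_frac = float(np.exp(-y.mean()))
print(zero_frac, poisson_zero_frac)
```

Running this shows the observed zero fraction well above the Poisson benchmark, which is exactly the excess-zero phenomenon the semiparametric model is built to handle.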
- Title
- Multiscale Gaussian-beam method for high-frequency wave propagation and inverse problems
- Creator
- Song, Chao
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
-
The existence of Gaussian beam solutions to hyperbolic PDEs has been known to the pure mathematics community since the 1960s [3]. The method has since enjoyed popularity due to its ability to resolve the caustics problem and its efficiency [49, 28, 31]. In this thesis, we focus on extensions of the multiscale Gaussian beam method and its application to seismic wave modeling and inversion. In the first part of the thesis, we discuss the application of the multiscale Gaussian beam method to the inverse problem. A new multiscale Gaussian beam method is introduced for carrying out true-amplitude prestack migration of acoustic waves. After the Born approximation is applied, the migration process is viewed as shooting two beams simultaneously from the subsurface point to be imaged. The multiscale Gaussian wavepacket transform provides an efficient and accurate way both to decompose the perturbation field and to initialize the Gaussian beam solution. Moreover, we can prescribe both the imaging region and the range of dipping angles by shooting beams from subsurface points in the region of imaging. We prove the imaging condition equation rigorously and conduct an error analysis. Numerical approximations are derived to further improve efficiency. Numerical results in two-dimensional space demonstrate the performance of the proposed migration algorithm. In the second part of the thesis, we propose a new multiscale Gaussian beam method with reinitialization for solving the elastic wave equation in the high-frequency regime under different boundary conditions. A novel multiscale transform is proposed to decompose an arbitrary vector-valued function into Gaussian wavepackets at multiple resolutions. After initialization, we derive rules corresponding to the different types of reflection cases. To improve efficiency and accuracy, we develop a new reinitialization strategy based on the stationary phase approximation to sharpen each single beam ansatz; this is especially useful, and in some reflection cases necessary. Numerical examples with various parameters demonstrate the correctness and robustness of the method. Two boundary conditions are considered: periodic and Dirichlet. Finally, we show that the convergence rate of the proposed multiscale Gaussian beam method follows that of the classical Gaussian beam solution.
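For context, a single Gaussian beam is an asymptotic solution concentrated near a ray x(t). In generic notation (this is the standard textbook form, not necessarily the thesis's exact notation), the beam ansatz reads

```latex
u(\mathbf{x},t) \approx A(t)\,
  \exp\!\left(\frac{i}{\varepsilon}\,\Phi(\mathbf{x},t)\right),
\qquad
\Phi(\mathbf{x},t) = S(t)
  + \mathbf{p}(t)\cdot\bigl(\mathbf{x}-\mathbf{x}(t)\bigr)
  + \tfrac{1}{2}\bigl(\mathbf{x}-\mathbf{x}(t)\bigr)^{\mathsf{T}}
    M(t)\,\bigl(\mathbf{x}-\mathbf{x}(t)\bigr),
```

where ε is the small high-frequency parameter and the imaginary part of M(t) is kept positive definite, so the solution decays like a Gaussian away from the central ray and remains bounded even at caustics; S, p, x and M evolve along the ray by ODEs derived from the eikonal and transport equations.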
- Title
- On minimization of some non-smooth convex functionals arising in micromagnetics
- Creator
- Gao, Hongli
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
This thesis is motivated by studying the properties of ferromagnetic materials using the Landau-Lifshitz theory of micromagnetics. In this theory the state of a ferromagnetic material is described by the magnetization vector m in terms of a total micromagnetic energy that consists of several competing sub-energies: exchange energy, anisotropy energy, external interaction energy and magnetostatic energy. For large ferromagnetic materials and under some limiting regimes of the model, the exchange energy becomes negligible and the total energy reduces to a simplified model. Our investigations focus on the study of such a reduced model of the Landau-Lifshitz theory. The primary focus of the thesis includes two parts: the minimization (static) study and the evolution (dynamic) study. We investigate a new method for establishing the existence of minimizers of the reduced micromagnetic energy based on a duality method. In this method, the reduced micromagnetic energy is closely related to a convex functional (the dual functional) on curl-free vector fields. Our minimization and dynamics studies are based on the study of the minimization and gradient flow of this dual functional. Much of the thesis is focused on the minimization problem in two special cases: the soft case and the uniaxial case on an annulus domain. In particular, in the soft case, for some range of the parameter, energy minimizers of the original micromagnetic energy are constructed through the Euler-Lagrange equation of the dual functional using the method of characteristics for a reduced eikonal-type equation. The second direction of this thesis is an attempt to obtain a reasonable dynamic process for the evolution of m, in which the asymptotic behavior of the gradient flow of the reduced energy functional is investigated.
- Title
- Sharp estimates in harmonic analysis
- Creator
- Rey, Guillermo
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
We investigate certain sharp estimates related to singular integrals. In particular, we give sharp level set estimates for sparse operators, show how to reduce the problem of estimating Calderón-Zygmund operators to that of estimating sparse operators, and study some weighted inequalities for these operators.
- Title
- Statistical properties of some almost Anosov systems
- Creator
- Zhang, Xu
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
We investigate polynomial lower and upper bounds for the decay of correlations of a class of two-dimensional almost Anosov diffeomorphisms with respect to their Sinai-Ruelle-Bowen measures (SRB measures), where an almost Anosov diffeomorphism is a system that is hyperbolic everywhere except at one point. At the indifferent fixed point, the Jacobian matrix is the identity matrix. The degrees of the bounds are determined by the expansion and contraction rates as orbits approach the indifferent fixed point, and can be expressed using the coefficients of the third-order terms in the Taylor expansions of the diffeomorphisms at the indifferent fixed points. We discuss the relationship between the existence of SRB measures and the differentiability of some almost Anosov diffeomorphisms near the indifferent fixed points in dimensions greater than one. The eigenvalue of the Jacobian matrix at the indifferent fixed point along the one-dimensional contraction subspace is less than one, while the other eigenvalues along the expansion subspaces are equal to one. As a consequence, there are twice-differentiable almost Anosov diffeomorphisms that admit infinite SRB measures in two- or three-dimensional spaces, and there exist twice-differentiable almost Anosov diffeomorphisms with SRB measures in dimensions greater than three. Further, we obtain polynomial lower and upper bounds for the correlation functions of those almost Anosov maps that admit SRB measures.
- Title
- Multiscale modeling of polymer nanocomposites
- Creator
- Sheidaei, Azadeh
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
In recent years, polymer nanocomposites (PNCs) have gained increasing attention due to their improved mechanical, barrier, thermal, optical, electrical and biodegradation properties in comparison with conventional micro-composites or pristine polymers. With a modest addition of nanoparticles (usually less than 5 wt.%), PNCs offer a wide range of improvements in moduli, strength, heat resistance and biodegradability, as well as decreases in gas permeability and flammability. Although PNCs offer enormous opportunities to design novel material systems, the development of an effective numerical modeling approach to predict their properties based on their complex multi-phase and multiscale structure is still at an early stage. Developing a computational framework to predict the mechanical properties of PNCs is the focus of this dissertation. In chapter 1, a microstructure-inspired material model is developed based on statistical techniques and used to reconstruct the microstructure of halloysite nanotube (HNT) polypropylene composite; the same technique is also used to reconstruct exfoliated graphene nanoplatelet (xGnP) polymer composite. The model successfully predicts the material behavior observed in experiments. Chapter 2 summarizes the experimental work supporting the numerical work. First, different processing techniques for making polymer nanocomposites are reviewed. Among them, melt extrusion followed by injection molding was used to manufacture high-density polyethylene (HDPE)-xGnP nanocomposites. Scanning electron microscopy (SEM) was also performed to determine particle size and distribution and to examine fracture surfaces. Particle sizes measured from these images were used to calculate the probability density function for GNPs in chapter 1. A series of nanoindentation tests was conducted to reveal the spatial variation of the superstructure developed along and across the flow direction of injection-molded HDPE/GNP. Uniaxial tensile and shear tests were conducted on HDPE and xGnP/HDPE specimens. The stress-strain curves for HDPE obtained from these experiments are used in chapter 5 to calibrate the modified Gurson-Tvergaard-Needleman model to capture damage progression in HDPE. In chapter 3, the 3D microstructure model developed in chapter 1 is incorporated into a damage modeling problem for the nanocomposite, where damage initiation is modeled using a cohesive-zone model. There is a significant difference between the properties of the inclusions and the host polymer in a polymer nanocomposite, which leads to damage evolution during deformation due to large stress concentrations between the nanofiller and the polymer. A finite element model of progressive debonding in the nano-reinforced composite is proposed based on a cohesive-zone model of the interface. Modeling the cohesive zone requires a traction-displacement relation, which may be obtained either through a fiber pullout experiment or by simulating the test using molecular dynamics. For nano-fillers, conducting a fiber pullout test is very difficult and the results are often not reproducible. In chapter 4, molecular dynamics simulations of the polymer nanocomposite are performed; one of the goals is to extract the load-displacement curves of the graphene/HDPE pullout test and obtain the cohesive-zone parameters used in chapter 3. Finally, in chapter 5, a damage model of the HDPE/GNP nanocomposite is developed based on matrix cracking and fiber debonding. The 3D microstructure model is incorporated into a damage modeling problem where damage initiation and progression are modeled using the cohesive-zone and modified Gurson-Tvergaard-Needleman (GTN) material models.
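The interface law mentioned above can be illustrated with a generic bilinear cohesive traction-separation curve; the shape (linear loading to a peak traction, then linear softening to debonding) is a common textbook form, and the parameter values below are hypothetical, not the values the dissertation extracts from its molecular dynamics pullout simulations.

```python
def bilinear_traction(delta, delta0=0.01, delta_f=0.05, t_max=50.0):
    """Bilinear cohesive-zone traction-separation law (illustrative units).

    Linear elastic loading up to peak traction t_max at separation delta0,
    then linear softening to complete debonding at separation delta_f.
    """
    if delta <= 0.0:
        return 0.0
    if delta < delta0:
        return t_max * delta / delta0                           # loading branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)   # softening branch
    return 0.0                                                   # fully debonded

# The area under the curve is the fracture energy: 0.5 * t_max * delta_f.
print(bilinear_traction(0.01), bilinear_traction(0.03), bilinear_traction(0.06))
```

In a finite element debonding simulation, a law of this kind is evaluated at each interface integration point; the MD pullout results play the role of calibrating delta0, delta_f and t_max.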
- Title
- Kernel methods for biosensing applications
- Creator
- Khan, Hassan Aqeel
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
This thesis examines the design of noise-robust information retrieval techniques based on kernel methods. Algorithms are presented for two biosensing applications: (1) high-throughput protein arrays and (2) non-invasive respiratory signal estimation. Our primary objective in protein array design is to maximize throughput by enabling detection of an extremely large number of protein targets while using a minimal number of receptor spots. This is accomplished by viewing the protein array as a communication channel and evaluating its information transmission capacity as a function of its receptor probes. In this framework, the channel capacity can be used as a tool to optimize probe design, the optimal probes being the ones that maximize capacity. The information capacity is first evaluated for a small-scale protein array with only a few protein targets; we believe this is the first effort to evaluate the capacity of a protein array channel. For this purpose, models of the proteomic channel's noise characteristics and receptor non-idealities, based on experimental prototypes, are constructed. Kernel methods are employed to extend the capacity evaluation to larger protein arrays that can potentially have thousands of distinct protein targets. A specially designed kernel, which we call the Proteomic Kernel, is also proposed. This kernel incorporates knowledge about the biophysics of target and receptor interactions into the cost function employed for evaluating channel capacity. For respiratory estimation, this thesis investigates estimation of breathing rate and lung volume using multiple non-invasive sensors under motion artifact and high-noise conditions. A spirometer signal is used as the gold standard for evaluating errors. A novel algorithm, segregated envelope and carrier (SEC) estimation, is proposed. This algorithm approximates the spirometer signal by an amplitude-modulated signal and segregates the estimation of the frequency and amplitude information. Results demonstrate that this approach enables effective estimation of both breathing rate and lung volume. An adaptive algorithm based on a combination of Gini kernel machines and wavelet filtering is also proposed. This algorithm, the wavelet-adaptive Gini (or WAGini) algorithm, employs a novel wavelet-transform-based feature extraction frontend to classify the subject's underlying respiratory state. This information is then used to select the parameters of the adaptive kernel machine based on the subject's respiratory state. Results demonstrate significant improvement in breathing rate estimation compared to traditional respiratory estimation techniques.
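The idea of splitting an amplitude-modulated respiratory signal into carrier (rate) and envelope (volume) information can be illustrated with a generic analytic-signal decomposition. This is not the SEC algorithm itself, only a minimal sketch on synthetic data using the Hilbert transform; the sample rate, breathing rate and envelope below are invented.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                        # sample rate in Hz (hypothetical)
t = np.arange(0.0, 60.0, 1.0 / fs)  # one minute of samples

# Synthetic "respiratory" signal: a 0.25 Hz carrier (15 breaths/min)
# whose amplitude drifts slowly, mimicking changing tidal volume.
envelope_true = 1.0 + 0.5 * np.sin(2.0 * np.pi * 0.02 * t)
carrier_hz = 0.25
sig = envelope_true * np.sin(2.0 * np.pi * carrier_hz * t)

# Analytic signal: magnitude recovers the envelope (amplitude info),
# phase derivative recovers the instantaneous frequency (rate info).
analytic = hilbert(sig)
envelope_est = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2.0 * np.pi)

rate_est = float(np.median(inst_freq))  # Hz; median is robust to edge effects
print(rate_est * 60.0, "breaths/min")
```

Because the envelope varies much more slowly than the carrier, the two pieces of information separate cleanly here; real sensor data with motion artifacts is exactly where more robust schemes like SEC become necessary.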
- Title
- Optimal design problems in thin-film and diffractive optics
- Creator
- Wang, Yuliang
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
-
Optical components built from thin-film layered structures are technologically very important. Applications include, but are not limited to, energy conversion and conservation, data transmission and conversion, space technology and imaging. In practice these structures are defined by various parameters, such as the refractive-index profile, the layer thicknesses and the period. The problem of finding the combination of parameters that yields the spectral response closest to a given target function is referred to as optimal design. This dissertation considers several topics in the mathematical modeling and optimal design of these structures through numerical optimization. A key step in numerical optimization is to define an objective function that measures the discrepancy between the target performance and that of the current solution. Our first topic is the impact of the objective function, its metric in particular, on the optimal solution and its performance. This is studied through numerical experiments with different types of antireflection coatings using two-material multilayers. The results confirm existing statements and provide a few new findings; for example, certain metrics can yield markedly better solutions than others. Rugates are optical coatings with continuous refractive-index profiles. They have received much attention recently due to technological advances and their potentially better optical performance and environmental properties. The Fourier transform method is a widely used technique for the design of rugates; however, it is based on approximate expressions with strict assumptions and has many practical limitations. Our second topic is the optimal design of rugates through numerical optimization of objective functions with penalty terms. We found solutions with similar performance, as well as novel solutions, by using different metrics in the penalty term. Existing methods used only local basis functions, such as piecewise constant or linear functions, for the discretization of the refractive-index profile. Our third topic is the use of global basis functions, such as sinusoidal functions, in the discretization. A simple transformation is used to overcome the difficulty of bound constraints, and the results are very promising. Both multilayer and rugate coatings can be obtained using this method. Diffraction gratings are thin-film structures whose optical properties vary periodically along one or two directions. Our final topic is the optimal design of such structures in the broadband case. The objective functions and their gradients are obtained by solving variational problems and their adjoints with the finite element method. Interesting phenomena are observed in the numerical experiments. Limitations and future work in this direction are pointed out.
- Title
- Structure and evolutionary dynamics in fitness landscapes
- Creator
- Pakanati, Anuraag R.
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
Evolution can be conceptualized as an optimization algorithm that allows populations to search through genotypes for those that produce high-fitness solutions. This search process is commonly depicted as exploring a "fitness landscape", which combines similarity relationships among genotypes with the concept of a genotype-fitness map. As populations adapt to their fitness landscape, they accumulate information about the landscape in which they live. A greater understanding of evolution on fitness landscapes will help elucidate fundamental evolutionary processes. I examine methods of estimating information acquisition in evolving populations and find that these techniques have largely ignored the effects of common descent. Since information is estimated by measuring conserved genomic regions across a population, common descent can create a severe bias by increasing similarity among unselected regions. I introduce a correction method to compensate for the effects of common descent on genomic information and empirically demonstrate its efficacy. Next, I explore three instantiations of NK, Avida and RNA fitness landscapes to better understand structural properties such as the distribution of peaks and the sizes of basins of attraction. I find that the fitness of a peak is correlated with the fitness of the peaks in its neighborhood, and that the size of a peak's basin of attraction tends to be proportional to its height. Finally, I visualize local dynamics and perform a detailed comparison between the space of evolutionary trajectories that are technically possible from a single starting point and the results of actual evolving populations.
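The notions of local peaks and basins of attraction can be made concrete on a small enumerable landscape. The sketch below uses a fully random fitness assignment over bit-string genotypes (the maximally rugged K = N-1 limit of an NK landscape; the size N and seed are arbitrary choices, not the thesis's experimental setup): steepest-ascent hill climbing assigns every genotype to the peak it reaches, and the number of genotypes arriving at each peak is that peak's basin size.

```python
import itertools
import random

random.seed(1)
N = 8  # genotype length in bits; small enough to enumerate all 2**N genotypes
genotypes = list(itertools.product([0, 1], repeat=N))
fitness = {g: random.random() for g in genotypes}  # fully random landscape

def neighbors(g):
    """All genotypes exactly one bit-flip away from g."""
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(N)]

def climb(g):
    """Steepest-ascent hill climbing; returns the local peak reached from g."""
    while True:
        best = max(neighbors(g), key=fitness.__getitem__)
        if fitness[best] <= fitness[g]:
            return g  # no neighbor is fitter: g is a local peak
        g = best

# Basin of attraction: count how many genotypes climb to each peak.
basin = {}
for g in genotypes:
    peak = climb(g)
    basin[peak] = basin.get(peak, 0) + 1

print(len(basin), "local peaks; largest basin =", max(basin.values()))
```

Repeating this over many random landscapes is one simple way to study the peak-height vs. basin-size relationship described in the abstract.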
- Title
- A stochastic multi-scale model of stream-groundwater interaction in strongly heterogeneous porous medium and its application in southern Branch County, Michigan
- Creator
- Ye, Xinyu
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
In this thesis, stream depletion is assessed using a multi-scale geostatistical approach in a stressed watershed in southern Branch County, Michigan. The watershed is currently under large water demand and is representative of the general failure to pass the online Water Withdrawal Assessment Tool. Because of the heterogeneity of the porous medium and the high variability of hydrogeological parameters across scales, there is a deviation between field observations and simulated groundwater flow in these areas. A multi-scale geostatistical model based on detailed lithological data, applied within numerical groundwater simulation, can be used in stream depletion assessment. Specifically, the multi-scale transition probability geostatistics approach, supplemented with a 10 m digital elevation model, allows for a more realistic integration of the heterogeneous medium into the development of correlated spatial variability of hydrogeological parameters at each spatial scale. This approach enables accurate simulation of complex hydrogeology, including vertical structural shifts and variations in aquifer thickness. Systematic hydrologic models at the regional, local and site scales allow for simulations of integrated water budget analysis. These simulations are necessary to evaluate the water depletion of targeted streams and the surrounding protected areas. The hydrologic system is calibrated with steady-state water levels from 732 monitoring wells. The stability of the transition probability geostatistical model depends on the distributions, the heterogeneity of the simulated area and other factors. The results show that the transition probability geostatistical model provides a reasonable distribution of materials in the aquifer medium, improving numerical groundwater modeling for assessing water depletion in streams and vulnerable areas.