Search results
(1-20 of 96)
 Title
 Modeling emerging solar cell materials and devices
 Creator
 Thongprong, Non
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

Organic photovoltaics (OPVs) and perovskite solar cells are emerging classes of solar cells that are promising clean-energy alternatives to fossil fuels. Understanding the fundamental physics of these materials is crucial for improving their energy-conversion efficiencies and promoting them to practical applications. Current density-voltage (J-V) curves, which are important indicators of OPV efficiency, have direct connections to many fundamental properties of solar cells. They can be described by the Shockley diode equation, resulting in the fitting parameters: series and parallel resistance (Rs and Rp), diode saturation current (J0), and ideality factor (n). However, the Shockley equation was developed specifically for inorganic p-n junction diodes, so it lacks physical meaning when applied to OPVs. Hence, the purposes of this work are to understand the fundamental physics of OPVs and to develop new diode equations, in the same form as the Shockley equation, that are based on OPV physics.

We develop a numerical drift-diffusion simulation model to study bilayer OPVs, called the drift-diffusion for bilayer interface (DDBI) model. The model solves the Poisson, drift-diffusion, and current-continuity equations self-consistently for the charge densities and potential profiles of a bilayer device with an organic heterojunction interface described by the GWWF model. We also derive new diode equations whose J-V curves are consistent with the DDBI model; these are called the self-consistent diode (SCD) equations.

Using the DDBI and SCD models allows us to understand the working principles of bilayer OPVs and the physical definitions of the Shockley parameters. Due to the low carrier mobilities in OPVs, space-charge accumulation is common, especially near the interface and the electrodes. Hence, the quasi-Fermi levels (i.e., chemical potentials), which depend on charge densities, are modified around the interface, resulting in a splitting of quasi-Fermi levels that acts as a driving potential for the heterojunction diode. This gives Rs the meaning of the resistance that makes the diode voltage equal to the interface quasi-Fermi level splitting rather than the voltage between the electrodes. Quasi-Fermi levels that drop near the electrodes, because of unmatched electrode work functions or charge injection, can also increase Rs. Furthermore, we are able to study the dissociation and recombination rates of bound charge pairs across the interface (i.e., polaron pairs, or PPs) and arrive at the physical meaning of Rp as the recombination resistance of PPs. In the dark, the PP density is very low, so Rp is possibly caused by a tunneling leakage current at the interface. Ideality factors are parameters that depend on the splitting of the quasi-Fermi levels and on the ratio of the recombination rate to the recombination rate at equilibrium. Even though they are related to trap characteristics, as normally understood, these relations are complicated, and careful interpretation of fitted ideality factors is needed. Our models are successfully applied to actual devices, and useful physics can be deduced, for example the differences between the Shockley parameters under dark and illuminated conditions.

Another purpose of this thesis is to study the electronic properties of CsSnBr3 perovskite and processes for growing perovskite films using an epitaxy technique. Calculations using density functional theory reveal that a CsSnBr3 film grown on a NaCl(100) substrate can undergo a phase transition to CsSn2Br5, a wide-bandgap semiconductor. The actual mechanisms of the transition, and the interface between CsSnBr3 and CsSn2Br5, are interesting subjects for future study.
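The Shockley-equation description of a J-V curve mentioned above can be sketched numerically. The following is a minimal illustration, not the dissertation's code: it solves the implicit single-diode equation with series and parallel resistance and a photocurrent term for J at a given V by bisection. All parameter values (J0, n, Rs, Rp, Jph) are assumed, typical thin-film-scale numbers.

```python
import math

def shockley_current(v, j0=1e-9, n=1.5, rs=1.0, rp=1e4, jph=0.01,
                     vt=0.02585):
    """Solve J = J0*(exp((V - J*Rs)/(n*Vt)) - 1) + (V - J*Rs)/Rp - Jph
    for J (A/cm^2) by bisection.  Jph is the photocurrent; vt = kT/q at
    room temperature.  Parameter values are illustrative assumptions."""
    def f(j):
        vd = v - j * rs                  # voltage across the diode itself
        return j0 * (math.exp(vd / (n * vt)) - 1) + vd / rp - jph - j
    lo, hi = -0.2, 0.5                   # bracket in A/cm^2; f(lo)>0>f(hi)
    for _ in range(100):                 # f is strictly decreasing in j
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At V = 0 the solution is close to the negative of the photocurrent (the short-circuit current), and it crosses zero near the open-circuit voltage, which is the qualitative behavior the fitted parameters encode.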
 Title
 Multiscale modeling of polymer nanocomposites
 Creator
 Sheidaei, Azadeh
 Date
 2015
 Collection
 Electronic Theses & Dissertations
 Description

In recent years, polymer nanocomposites (PNCs) have gained increasing attention due to their improved mechanical, barrier, thermal, optical, electrical, and biodegradation properties in comparison with conventional microcomposites or the pristine polymer. With a modest addition of nanoparticles (usually less than 5 wt.%), PNCs offer a wide range of improvements in moduli, strength, heat resistance, and biodegradability, as well as decreases in gas permeability and flammability. Although PNCs offer enormous opportunities to design novel material systems, the development of an effective numerical modeling approach to predict their properties based on their complex multiphase and multiscale structure is still at an early stage. Developing a computational framework to predict the mechanical properties of PNCs is the focus of this dissertation.

In chapter 1, a microstructure-inspired material model is developed based on statistical techniques and used to reconstruct the microstructure of halloysite nanotube (HNT) polypropylene composite; the same technique is also used to reconstruct exfoliated graphene nanoplatelet (xGnP) polymer composite. The model successfully predicts the material behavior observed in experiments. Chapter 2 summarizes the experimental work that supports the numerical work. First, different processing techniques for making polymer nanocomposites are reviewed. Among them, melt extrusion followed by injection molding was used to manufacture high-density polyethylene (HDPE)-xGnP nanocomposites. Scanning electron microscopy (SEM) was performed to determine particle size and distribution and to examine fracture surfaces; the measured particle sizes were used to calculate the probability density function for GNPs in chapter 1. A series of nanoindentation tests reveal the spatial variation of the superstructure developed along and across the flow direction of injection-molded HDPE/GNP. Uniaxial tensile and shear tests were conducted on HDPE and xGnP/HDPE specimens; the resulting stress-strain curves for HDPE are used in chapter 5 to calibrate the modified Gurson–Tvergaard–Needleman model that captures damage progression in HDPE.

In chapter 3, the 3D microstructure model developed in chapter 1 is incorporated in a damage modeling problem in which damage initiation is modeled using a cohesive-zone model. There is a significant difference between the properties of the inclusions and the host polymer in a polymer nanocomposite, which leads to damage evolution during deformation due to the large stress concentration between nanofiller and polymer. A finite element model of progressive debonding in the nano-reinforced composite is proposed based on a cohesive-zone model of the interface. Modeling the cohesive zone requires a traction-displacement relation, which may be obtained either from a fiber pull-out experiment or by simulating the test using molecular dynamics. For nanofillers, the fiber pull-out test is very difficult to conduct and its results are often not reproducible. In chapter 4, molecular dynamics simulations of the polymer nanocomposite are therefore performed; one goal was to extract the load-displacement curves of a graphene/HDPE pull-out test and obtain the cohesive-zone parameters used in chapter 3. Finally, in chapter 5, a damage model of the HDPE/GNP nanocomposite is developed based on matrix cracking and fiber debonding. The 3D microstructure model is incorporated in a damage modeling problem in which damage initiation and progression are modeled using the cohesive-zone and modified Gurson–Tvergaard–Needleman (GTN) material models.
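The cohesive-zone traction-displacement relation referred to above is commonly taken as a bilinear law. The sketch below is an illustrative example of such a law; the numerical values are assumptions, not the calibrated graphene/HDPE parameters from the molecular dynamics runs.

```python
def bilinear_traction(delta, t_max=50.0, delta0=1e-3, delta_f=1e-2):
    """Bilinear cohesive-zone law: traction (MPa) vs. opening delta (mm).
    Linear elastic loading up to (delta0, t_max), then linear softening
    to complete debonding at delta_f.  Values are illustrative only."""
    if delta <= 0:
        return 0.0
    if delta <= delta0:
        return t_max * delta / delta0                       # loading branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                              # fully debonded
```

The area under this curve is the fracture energy of the interface, which is the quantity a pull-out simulation is used to calibrate.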
 Title
 The mathematical models of nutritional plasticity and the bifurcation in a nonlocal diffusion equation
 Creator
 Liang, Yu, Ph. D.
 Date
 2016
 Collection
 Electronic Theses & Dissertations
 Description

The thesis consists of two parts. In the first part, I investigate the developmental mechanisms that regulate the nutritional plasticity of organ sizes in the fruit fly Drosophila melanogaster. I focus on the insulin/insulin-like signalling (IIS) pathway, through which developmental nutrition is signalled to growing organs. Two mathematical models, an ODE model and a PDE model, are established based on the IIS pathway. In the ODE model, the circulating gene expression of each component in the IIS pathway is taken as a model variable. By analyzing the steady states of the ODE model under different parameter settings, I verify the hypothesis that the differences in nutritional plasticity among the organs of Drosophila are due to variation in the total gene expression of components in the IIS pathway. Furthermore, the forkhead transcription factor FOXO, a negative growth regulator that is activated when nutrition and insulin signalling are low, is a key factor in maintaining organ-specific differences in nutritional plasticity and insulin sensitivity. The PDE model focuses on the molecular structure within each individual cell: the transport of proteins between the nucleus and the cell membrane is modelled in the system. Simulations of the PDE system verify the hypothesis that the concentration of FOXO decreases as the concentration of insulin increases.

In the second part of the thesis, I study the bifurcation properties of the nonlocal diffusion equation
\[ L_{\epsilon} u + \lambda (u - u^3) = 0, \]
where $L_{\epsilon} u$ is the integral operator
\[ L_{\epsilon} u = \int_{0}^{\pi} \epsilon^{-3} J\!\left( \frac{y-x}{\epsilon} \right) \left( u(y) - u(x) \right) \, dy \]
and $J(x)$ is a nonnegative radially symmetric function with $J(0) > 0$. It is shown that when the scaling parameter $\epsilon$ is small enough, the equation has pitchfork bifurcations at the spectrum of the operator $L_{\epsilon}$. A concrete example is considered, in which the bifurcation result is verified by solving the equation with Newton's method.
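The Newton solve mentioned above can be sketched as follows. This is a minimal, assumed setup, not the thesis's computation: the operator $L_\epsilon$ is discretized on a uniform grid over $[0, \pi]$ with a simple quadrature, the kernel $J$ is an example tent function (the thesis only requires $J$ nonnegative, radially symmetric, with $J(0) > 0$), and $\lambda$ and $\epsilon$ are arbitrary illustrative values.

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def nonlocal_matrix(N, eps):
    """Quadrature matrix for L_eps u(x) = eps^-3 * Int J((y-x)/eps)(u(y)-u(x)) dy."""
    h = math.pi / (N - 1)
    xs = [i * h for i in range(N)]
    J = lambda z: max(0.0, 1.0 - abs(z))        # assumed kernel, J(0) = 1 > 0
    A = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            w = eps**-3 * J((xs[j] - xs[i]) / eps) * h
            A[i][j] += w                         # the u(y) part
            A[i][i] -= w                         # the -u(x) part
    return xs, A

def newton(lam=0.5, eps=0.3, N=40, iters=25):
    """Newton's method for A u + lam*(u - u^3) = 0 from a small initial guess."""
    xs, A = nonlocal_matrix(N, eps)
    u = [0.1 * math.sin(2 * x) for x in xs]
    for _ in range(iters):
        F = [sum(A[i][j] * u[j] for j in range(N)) + lam * (u[i] - u[i] ** 3)
             for i in range(N)]
        # Jacobian: A + lam*(I - 3*diag(u^2))
        Jac = [[A[i][j] + (lam * (1 - 3 * u[i] ** 2) if i == j else 0.0)
                for j in range(N)] for i in range(N)]
        du = solve_linear(Jac, F)
        u = [ui - di for ui, di in zip(u, du)]
    res = max(abs(sum(A[i][j] * u[j] for j in range(N)) + lam * (u[i] - u[i] ** 3))
              for i in range(N))
    return u, res
```

From a small initial guess the iteration converges to a root of the discrete system (here typically the trivial branch); tracking the solution while sweeping $\lambda$ through the spectrum of $L_\epsilon$ is how the pitchfork branches would be traced.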
 Title
 Design and simulation of single-crystal diamond diodes for high voltage, high power and high temperature applications
 Creator
 Suwanmonkha, Nutthamon
 Date
 2016
 Collection
 Electronic Theses & Dissertations
 Description

Diamond has exceptional properties and great potential for making high-power semiconducting electronic devices that surpass the capabilities of other common semiconductors, including silicon. The superior properties of diamond include a wide bandgap, high thermal conductivity, a large electric breakdown field, and fast carrier mobilities. All of these properties are crucial for a semiconductor used to make electronic devices that operate at high power levels, high voltage, and high temperature.

Two-dimensional semiconductor device simulation software such as Medici assists engineers in designing device structures that meet the performance requirements of device applications. Most physical material parameters of the well-known semiconductors are already compiled and embedded in Medici; diamond, however, is not one of them. Material parameters for diamond, including models for incomplete ionization, temperature- and impurity-dependent mobility, and impact ionization, are not readily available in software such as Medici. In this work, models and data for diamond have been developed for Medici based on results reported in the research literature and on experimental work at Michigan State University. After equipping Medici with diamond material parameters, simulations of various diamond diodes, including Schottky, PN-junction, and merged Schottky/PN-junction structures, are reported. Diodes are simulated versus changes in doping concentration, drift-layer thickness, and operating temperature. In particular, the diode performance metrics studied include the breakdown voltage, turn-on voltage, and specific on-resistance. The goal is to find designs that yield low power loss and provide high voltage-blocking capability. Simulation results are presented that provide insight for the design of diamond diodes using the various diode structures. Results are also reported on the use of field-plate structures in the simulations to control the electric field and increase the breakdown voltage.
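The incomplete-ionization model mentioned above matters in diamond because dopant levels are deep. The sketch below is an illustrative textbook-style calculation, not the Medici model from this work: it solves the standard acceptor ionization statistics under charge neutrality for a p-type layer. The values (boron-like Ea = 0.36 eV, a fixed valence-band effective density of states Nv, degeneracy g = 4) are assumptions, and Nv is held constant rather than scaled with temperature.

```python
import math

def ionized_fraction(T, Na=1e17, Ea=0.36, g=4.0, Nv=1.8e19):
    """Fraction of acceptors ionized at temperature T (K), assuming
    charge neutrality p = Na_minus and the incomplete-ionization
    statistics Na_minus = Na / (1 + g*p/p1) with p1 = Nv*exp(-Ea/kT).
    All parameter values are illustrative assumptions."""
    kT = 8.617e-5 * T                    # Boltzmann constant in eV/K
    p1 = Nv * math.exp(-Ea / kT)
    # charge neutrality gives the quadratic  (g/p1)*p^2 + p - Na = 0
    a = g / p1
    p = (-1.0 + math.sqrt(1.0 + 4.0 * a * Na)) / (2.0 * a)
    return p / Na
```

With these assumed numbers only a small percentage of acceptors are ionized at room temperature, and the fraction rises steeply with temperature, which is why high-temperature operation and incomplete-ionization modeling go hand in hand for diamond devices.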
 Title
 I. AMHB : (anti)aromaticity-modulated hydrogen bonding. II. Evaluation of implicit solvation models for predicting hydrogen bond free energies
 Creator
 Kakeshpour, Tayeb
 Date
 2019
 Collection
 Electronic Theses & Dissertations
 Description

My doctoral research under Professor James E. Jackson focused on hydrogen bonding (H-bonding) studied with the tools of physical organic chemistry. In the first chapter, I present how I used quantum chemical simulations, synthetic organic chemistry, NMR spectroscopy, and X-ray crystallography to provide robust theoretical and experimental evidence for an interplay between (anti)aromaticity and the H-bond strength of heterocycles, a concept that we dubbed (Anti)aromaticity-Modulated Hydrogen Bonding (AMHB). In the second chapter, I used accurately measured hydrogen-bond energies for a range of substrates and solvents to evaluate the performance of implicit solvation models, in combination with density functional methods, for predicting solution-phase hydrogen-bond energies. This benchmark study provides useful guidelines for a priori modeling of designs based on hydrogen bonding. Coordinates of the optimized geometries and crystal structures are provided as supplementary materials.
 Title
 Modeling galactic chemical evolution in cosmological simulations
 Creator
 Peruta, Carolyn Cynthia
 Date
 2013
 Collection
 Electronic Theses & Dissertations
 Description

The most fundamental challenges to models of galactic chemical evolution (GCE) are uncertainties in the basic inputs, including the properties of the stellar initial mass function (IMF), stellar nucleosynthetic yields, and the rate of return of mass and energy to the interstellar and intergalactic medium by Type Ia and II supernovae and stellar winds. In this dissertation, we provide a critical examination of widely available stellar nucleosynthetic yield data, with an eye toward modeling GCE in the broad scope of cosmological hydrodynamical simulations. We examine the implications of uncertain inputs for the Galactic stellar IMF, and of nucleosynthetic yields from stellar-evolution calculations, for our ability to ask detailed questions about the observed Galactic chemical-abundance patterns. We find a marked need for stellar feedback data from stars of initial mass 8 to 12 M_{sun} and above 40 M_{sun}, and for initial stellar metallicities above and below solar (Z_{sun} = 0.02). We find that the largest discrepancies among nucleosynthetic yield calculations are due to the various groups' treatment of hot bottom burning, the formation of the 13C pocket in asymptotic giant-branch (AGB) stars, and the details of mass loss, rotation, and convection in all stars. Our GCE model is used to post-process simulations to explore in greater detail the nucleosynthetic evolution of stellar populations and the interstellar/intergalactic medium, and to compare directly with the chemical abundances of the Milky Way stellar halo and dwarf spheroidal galaxy stellar populations.
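The weight of the 8 to 12 M_{sun} window singled out above can be illustrated with a simple IMF integral. The sketch below is not the dissertation's IMF treatment: it assumes a single Salpeter-like power law with illustrative mass limits, purely to show how the mass fraction in a given initial-mass window is computed.

```python
def imf_mass_fraction(m_lo, m_hi, m_min=0.1, m_max=100.0, alpha=2.35):
    """Fraction of total stellar mass formed with initial mass in
    [m_lo, m_hi] (solar masses) for a power-law IMF xi(m) ~ m^-alpha.
    Slope and mass limits are assumed example values (alpha != 2)."""
    def mass_integral(a, b):
        # integral of m * m^-alpha dm = (b^(2-alpha) - a^(2-alpha))/(2-alpha)
        p = 2.0 - alpha
        return (b ** p - a ** p) / p
    return mass_integral(m_lo, m_hi) / mass_integral(m_min, m_max)
```

Under these assumptions a few percent of all stellar mass forms in the 8 to 12 M_{sun} window, so missing yields there propagate directly into the modeled enrichment history.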
 Title
 Effect of pavement structural response on rolling resistance and fuel economy
 Creator
 Balzarini, Danilo
 Date
 2019
 Collection
 Electronic Theses & Dissertations
 Description

The massive use of fuel required by road transportation is accountable for the exploitation of non-renewable energy sources, is a major source of pollutant emissions, and implies high economic costs. Rolling resistance is one factor affecting vehicle energy consumption; structural rolling resistance (SRR) is the component of rolling resistance that arises from the deformation of the pavement structure. This research investigates the SRR in order to identify its causes, characterize it, and develop the instruments to predict its impact on fuel consumption for different road and traffic conditions.

First, methods to calculate the SRR on asphalt and concrete pavements were developed. The SRR is calculated as the resistance to motion caused by the uphill slope seen by the tires due to the pavement deformation. The SRR can be converted into fuel consumption using the calorific value of the fuel and the engine efficiency, and the associated greenhouse gas emissions can then be calculated. Purely mechanistic models were used to determine the structural rolling resistance, and the fuel consumption associated with it, on 17 California pavement sections under different loading and environmental conditions. The results were used to develop simple, rapid-to-use mechanistic-empirical heuristic models to predict the energy dissipation associated with the SRR on any asphalt or concrete pavement.

The difference in fuel consumption and pollutant emissions between different pavement structures can be significant and could be included in economic evaluations and life-cycle assessment studies. For this purpose, a practical tool based on the heuristic models was developed that allows calculation of the fuel consumption associated with the SRR for any given traffic and pavement section. Example applications of the tool are presented and discussed.
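The conversion from SRR to fuel described above is an energy balance. The following back-of-envelope sketch illustrates it; the calorific value and engine efficiency are assumed example numbers, not values from this research.

```python
def srr_fuel_liters(srr_force_n, distance_km,
                    calorific_mj_per_l=34.2,   # gasoline, approximate
                    engine_efficiency=0.30):   # tank-to-wheel, assumed
    """Fuel (liters) consumed overcoming a constant SRR force (N) over
    a distance (km): energy dissipated divided by the usable energy
    per liter (calorific value times engine efficiency)."""
    energy_mj = srr_force_n * distance_km * 1000.0 / 1e6  # N*m -> MJ
    return energy_mj / (calorific_mj_per_l * engine_efficiency)
```

For example, a 50 N structural rolling resistance sustained over 100 km dissipates 5 MJ, or roughly half a liter of fuel under these assumptions, which is the scale of effect a pavement-comparison tool has to resolve.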
 Title
 Comparison of methods for detecting violations of measurement invariance with continuous construct indicators using latent variable modeling
 Creator
 Zhang, Mingcai (Graduate of Michigan State University)
 Date
 2020
 Collection
 Electronic Theses & Dissertations
 Description

Measurement invariance (MI) refers to a measurement instrument measuring the same concept in the same way in two or more groups. In educational and psychological testing practice, however, the assumption of MI is often violated due to contamination by possible noninvariance in the measurement models. In the framework of latent variable modeling (LVM), methodologists have developed different statistical methods to identify the noninvariant components. Among these methods, the free baseline method (FR) is popular, but it is limited by the need to choose a truly invariant reference indicator (RI). Two other methods, the Benjamini-Hochberg method (BH) and the alignment method (AM), are exempt from the RI setting. The BH method applies the false discovery rate (FDR) procedure; the AM method optimizes the model estimates under the assumption of approximate invariance. The purpose of the present study is to address the problem of RI setting by comparing the BH and AM methods with the traditional free baseline method through both a simulation study and an empirical data analysis. Specifically, the simulation study investigates the performance of the three methods while varying the sample sizes and the characteristics of noninvariance embedded in the measurement models. The characteristics of noninvariance are distinguished as the location of noninvariant parameters, the degree of noninvariant parameters, and the magnitude of model noninvariance. The three methods are also compared on an empirical dataset (the Openness for Problem Solving Scale in PISA 2012) obtained from three countries (Shanghai-China, Australia, and the United States).

The simulation study finds that a wrong RI choice heavily impacts the FR method, producing high type I error rates and low statistical power. Both the BH method and the AM method perform better than the FR method in this setting. Comparatively, the BH method performs best in achieving high power for detecting noninvariance; its power increases with decreasing magnitude of model noninvariance and with increasing sample size and degree of noninvariance. The AM method performs best with respect to type I errors, whose estimated rates are low under all simulation conditions. In the empirical study, the BH and AM methods perform similarly in estimating the invariance/noninvariance patterns among the three country pairs, whereas the FR method, for which the RI is the first item by default, recovers a different pattern. These results can help methodologists gain a better understanding of the potential advantages of the BH and AM methods over the traditional FR method. They also highlight the importance of correctly specifying the model noninvariance at the indicator level. Based on the characteristics of the noninvariant components, practitioners may consider deleting or modifying the noninvariant indicators, or freeing the noninvariant components while building partially invariant models, in order to improve the quality of cross-group comparisons.
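The FDR procedure underlying the BH method above is the standard Benjamini-Hochberg step-up rule. A minimal sketch (not the study's implementation, which applies the rule to invariance-test p-values within an LVM fit):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: return sorted indices of
    hypotheses rejected while controlling the FDR at level q.
    Rejects the k smallest p-values, where k is the largest rank with
    p_(k) <= k*q/m."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * q / m:
            k = rank                      # step-up: keep the largest such rank
    return sorted(order[:k])
```

In the MI setting, each candidate parameter contributes one p-value, and the rejected set is the collection of parameters flagged as noninvariant, with the FDR rather than the familywise error rate controlled.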
 Title
 Multiscale Gaussian-beam method for high-frequency wave propagation and inverse problems
 Creator
 Song, Chao
 Date
 2018
 Collection
 Electronic Theses & Dissertations
 Description

The existence of Gaussian beam solutions to hyperbolic PDEs has been known to the pure mathematics community since the 1960s [3]. The approach has since enjoyed popularity owing to its ability to resolve the caustics problem and its efficiency [49, 28, 31]. In this thesis, we focus on the extension of the multiscale Gaussian beam method and its application to seismic wave modeling and inversion.

In the first part of the thesis, we discuss the application of the multiscale Gaussian beam method to the inverse problem. A new multiscale Gaussian beam method is introduced for carrying out true-amplitude prestack migration of acoustic waves. After applying the Born approximation, the migration process is treated as shooting two beams simultaneously from the subsurface point to be imaged. The multiscale Gaussian wavepacket transform provides an efficient and accurate way both to decompose the perturbation field and to initialize the Gaussian beam solution. Moreover, we can prescribe both the region of imaging and the range of dipping angles by shooting beams from a subsurface point in the region of imaging. We prove the imaging condition equation rigorously and conduct an error analysis. Some numerical approximations are derived to improve the efficiency further. Numerical results in two-dimensional space demonstrate the performance of the proposed migration algorithm.

In the second part of the thesis, we propose a new multiscale Gaussian beam method with reinitialization to solve the elastic wave equation in the high-frequency regime with different boundary conditions. A novel multiscale transform is proposed to decompose an arbitrary vector-valued function into multiple Gaussian wavepackets at various resolutions. After the initialization step, we derive rules corresponding to the different types of reflection. To improve efficiency and accuracy, we develop a new reinitialization strategy based on the stationary phase approximation to sharpen each single beam ansatz; this is especially useful, and in some reflection cases necessary. Numerical examples with various parameters demonstrate the correctness and robustness of the method. Two boundary conditions are considered: periodic and Dirichlet. Finally, we show that the convergence rate of the proposed multiscale Gaussian beam method follows that of the classical Gaussian beam solution.
 Title
 Near duplicate image search
 Creator
 Li, Fengjie
 Date
 2014
 Collection
 Electronic Theses & Dissertations
 Description

Information retrieval addresses the fundamental problem of identifying the objects in a database that satisfy the information needs of users. Facing information overload, the major challenge in search algorithm design is to ensure that useful information can be found both accurately and efficiently in large databases. To address this challenge, different indexing and retrieval methods have been proposed for different types of data: sparse data (e.g., documents), dense data (e.g., dense feature vectors), and bags of features (e.g., images represented by local features). For sparse data, inverted indexes and document retrieval models have proved very effective for large-scale retrieval problems. For dense data and bag-of-features data, however, open problems remain. For example, locality sensitive hashing, a state-of-the-art method for searching high-dimensional vectors, often fails to make a good tradeoff between precision and recall: it tends to achieve high precision with low recall, or vice versa. The bag-of-words model, a popular approach for searching objects represented by bags of features, has limited performance because of the information lost during the quantization procedure. Since the general problem of searching objects represented as dense vectors and bags of features may be too challenging, this dissertation focuses on near duplicate search, in which the matched objects are almost identical to the query. By effectively exploiting the statistical properties of near duplicates, we can design more effective indexing schemes and search algorithms. Thus, the focus of this dissertation is to design new indexing methods and retrieval algorithms for near duplicate search in large-scale databases that accurately capture data similarity and deliver more accurate and efficient search.

Below, we summarize the main contributions of this dissertation. Our first contribution is a new algorithm for searching near duplicate bag-of-features data. The proposed algorithm, named random seeding quantization, is more efficient in generating bag-of-words representations for near duplicate images. The new scheme is motivated by approximating the optimal partial matching between bags of features, and thus produces a bag-of-words representation that captures the true similarities of the data, leading to more accurate and efficient retrieval of bag-of-features data. Our second contribution, termed random projection filtering (RPF), is a search algorithm designed for efficient near duplicate vector search. By explicitly exploiting the statistical properties of near duplicates, the algorithm projects high-dimensional vectors into a lower-dimensional space and filters out irrelevant items. This effective filtering procedure makes RPF more accurate and efficient at identifying near duplicate objects in databases. Our third contribution is to develop and evaluate a new randomized range search algorithm for near duplicate vectors in high-dimensional spaces, termed random projection search. Unlike RPF, this algorithm is suitable for a wider range of applications because it does not require sparsity constraints for high search accuracy. The key idea is to project both the data points and the query point into a one-dimensional space by a random projection, and then perform a one-dimensional range search, using binary search, to find the subset of data points within a given range of the query. We prove a theoretical guarantee for the proposed algorithm and evaluate its empirical performance on a dataset of 1.1 billion image features.
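The project-then-binary-search idea described above can be sketched in a few lines. This is an illustrative toy, not the dissertation's algorithm: names and parameters are invented, and a single random direction is used where a practical system would tune or repeat the projection.

```python
import bisect
import math
import random

def build_index(points, seed=0):
    """Project every point onto one random unit direction and sort the
    (projection, index) pairs so 1-D range queries become binary search."""
    rng = random.Random(seed)
    dim = len(points[0])
    d = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in d))
    d = [x / norm for x in d]
    proj = sorted((sum(p[i] * d[i] for i in range(dim)), idx)
                  for idx, p in enumerate(points))
    return d, proj

def range_search(points, d, proj, query, r):
    """Return indices of points within Euclidean distance r of query."""
    q = sum(query[i] * d[i] for i in range(len(d)))
    keys = [t[0] for t in proj]
    lo = bisect.bisect_left(keys, q - r)
    hi = bisect.bisect_right(keys, q + r)
    # A 1-D projection never increases distances, so every true neighbor
    # lands in [q-r, q+r]; verify exact distance only on this shortlist.
    return sorted(idx for _, idx in proj[lo:hi]
                  if math.dist(points[idx], query) <= r)
```

The filter is lossless in one direction (no true neighbor is missed) and the exact check removes false positives; the efficiency argument in the dissertation is about how small the shortlist stays for near duplicates.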
 Title
 Mathematical modeling and simulation of mechano-electrical transducers and nanofluidic channels
 Creator
 Park, Jin Kyoung
 Date
 2014
 Collection
 Electronic Theses & Dissertations
 Description

Remarkable advances in nanotechnology and computational approaches enable researchers to investigate physical and biological phenomena at the atomic or molecular scale. Smaller-scale approaches are important for studying the transport of ions and molecules through ion channels in living organisms as well as through exquisitely fabricated nanofluidic channels. The two subjects have similar physical properties and hence share common mathematical interests and challenges in modeling and simulating transport phenomena. In this work, we first propose and validate a molecular-level prototype for the mechano-electrical transducer (MET) channel in mammalian hair cells. Next, we design three ionic diffusive nanofluidic channels with different types of atomic surface charge distribution, and explore the current properties of each channel. The molecular-level prototype consists of a charged blocker, a realistic ion channel and its surrounding membrane. The Gramicidin A channel is employed as the realistic channel structure, and the blocker is a positively charged atom of radius 1.5 Å placed at the mouth region of the channel. Relocating this blocker along one direction just outside the channel mouth imitates the opening and closing behavior of the MET channel. In our atomic-scale design for an ionic diffusive nanofluidic channel, the atomic surface charge distribution is easy to modify by varying the quantities and signs of atomic charges placed at equal intervals slightly above the channel surface. The proposed nanofluidic system consists of a geometrically well-defined cylindrical channel and two reservoirs of KCl solution. For both the mammalian MET channel and the ion diffusive nanofluidic channel, we employ a well-established ion channel continuum theory, Poisson-Nernst-Planck (PNP) theory, for three-dimensional numerical simulations.
In particular, for the nanoscale channel descriptions, generalized PNP equations are derived using a variational formulation and by incorporating non-electrostatic interactions. We utilize several useful mathematical algorithms, such as Dirichlet-to-Neumann mapping and the matched interface and boundary method, to validate the proposed models with charge singularities and complex geometry. Moreover, the second-order accuracy of the proposed numerical methods is confirmed on nanofluidic systems affected by a single atomic charge and by eight atomic charges, and we further study channels with a unipolar charge distribution of negative ions and with a bipolar charge distribution. Finally, we analyze the electrostatic potential and ion conductance through each channel model under diverse physical conditions, including external applied voltage, bulk ion concentration and atomic charge. Our MET channel prototype shows outstanding agreement with experimental observations of rat cochlear outer hair cells in terms of open probability. This result also suggests that the tip link, a connector between adjacent stereocilia, gates the MET channel. Similarly, numerical findings for our proposed ion diffusive realistic nanochannels, such as ion selectivity, ion depletion and accumulation, and potential wells, are in remarkable accordance with experimental measurements and numerical simulations in the literature. In addition, the simulation results support the controllability of the current within a nanofluidic channel.
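For reference, the steady-state PNP system mentioned above couples, in standard textbook notation (the symbols here are the conventional ones, not necessarily the dissertation's), the Poisson equation for the potential to a Nernst-Planck continuity equation for each ionic species:

```latex
\nabla\cdot\bigl(\epsilon(\mathbf{r})\,\nabla\phi\bigr)
  = -\rho_{\text{fixed}} - \sum_i q_i\, c_i,
\qquad
\nabla\cdot\Bigl[D_i\Bigl(\nabla c_i + \frac{q_i}{k_B T}\,c_i\,\nabla\phi\Bigr)\Bigr] = 0,
```

where φ is the electrostatic potential, c_i, q_i and D_i are the concentration, charge and diffusion coefficient of species i, and ρ_fixed collects the fixed (singular) atomic charges. The generalized version derived in the dissertation adds non-electrostatic interaction potentials inside the flux term.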
 Title
 Brain connectivity analysis using information theory and statistical signal processing
 Creator
 Wang, Zhe (Software engineer)
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

Connectivity between different brain regions generates our minds. Existing work on brain network analysis has mainly focused on characterizing connections between regions in terms of connectivity and causality. Connectivity measures the dependence between regional brain activities; causality analysis aims to determine the directionality of information flow among functionally connected brain regions and to find the relationship between causes and effects. Traditionally, the study of connectivity and causality has largely been limited to linear relationships. In this dissertation, as an effort to achieve more accurate characterization of connections between brain regions, we aim to go beyond the linear model and develop innovative techniques for both non-directional and directional connectivity analysis. Because of individual variability in brain connectivity, the connectivity between two brain regions alone may not be sufficient for brain function analysis; in this research, we therefore also conduct network connectivity pattern analysis, so as to reveal more in-depth information. First, we characterize non-directional connectivity using mutual information (MI). In recent years, MI has gradually emerged as an alternative metric for brain connectivity, since it measures both linear and nonlinear dependence between two brain regions, while the traditional Pearson correlation only measures linear dependence. We develop an innovative approach to estimate the MI between two functionally connected brain regions and apply it to brain functional magnetic resonance imaging (fMRI) data. It is shown that, on average, cognitively normal subjects show larger mutual information between critical regions than Alzheimer's disease (AD) patients. Second, we develop new methodologies for brain causality analysis based on directed information (DI). Traditionally, brain causality analysis is based on the well-known Granger Causality (GC) framework.
The validity of GC has been widely recognized. However, it has also been noticed that GC relies heavily on linear prediction. When there exist strong nonlinear interactions between two regions, GC analysis may lead to invalid results. In this research, (i) we develop an innovative framework for causality analysis based on directed information (DI), which reflects the information flow from one region to another and imposes no modeling constraints on the data. It is shown that DI-based causality analysis is effective in capturing both linear and nonlinear causal relationships. (ii) We show the conditional equivalence between the DI framework and Friston's dynamic causal modeling (DCM), and reveal the relationship between directional information transfer and cognitive state change within the brain. Finally, based on brain network connectivity pattern analysis, we develop a robust method for classifying Alzheimer's disease (AD), mild cognitive impairment (MCI) and normal control (NC) subjects from size-limited fMRI data samples. First, we calculate the Pearson correlation coefficients between all possible ROI pairs in the selected subnetwork and use them to form a feature vector for each subject. Second, we develop a regularized linear discriminant analysis (LDA) approach to reduce the noise effect. The feature vectors are then projected onto a subspace using the proposed regularized LDA, where the differences between AD, MCI and NC subjects are maximized. Finally, a multi-class AdaBoost classifier is applied to carry out the classification task. Numerical analysis demonstrates that the combination of regularized LDA and the AdaBoost classifier increases classification accuracy significantly.
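The advantage MI has over Pearson correlation can be seen already in the simplest plug-in (histogram) estimator. The sketch below is only the textbook version of the idea, with an assumed bin count; the dissertation develops a more refined estimator for fMRI ROI time series.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram (plug-in) estimate of MI between two time series, in nats.

    Estimates the joint density on a bins x bins grid, derives the
    marginals from it, and sums p(x,y) * log(p(x,y) / (p(x) p(y))).
    """
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                          # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())
```

Because the marginals are computed from the joint, the estimate is a KL divergence and hence non-negative; a strongly dependent pair scores well above an independent pair.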
 Title
 Numerical methods for gravity inversion, synthetic aperture radar, and travel-time tomography
 Creator
 Gao, Qinfeng
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

"Inverse problems have many applications. In this thesis, we focus on designing and implementing numerical methods for three inverse problems: gravity inversion, synthetic aperture radar, and travel-time tomography. We present extensive numerical examples to demonstrate that these algorithms are stable and efficient. In Chapter 2, low-rank approximation is incorporated into a local level-set method for gravity inversion. This change helps to reduce the time spent computing the mismatch gravity force term on the boundary, reducing the computational complexity from O(N^3) to O(N^2) in 2D and from O(N^5) to O(N^4) in 3D. Numerical results show that the locations of unknown objects are accurately captured by this low-rank level-set method. In Chapter 3, both the wave equation and the Radon transform are used as approaches to the synthetic aperture radar problem. The wave-equation-based method includes harmonic extension at terminal time, solving the wave equation backward using a perfectly matched layer, and Neumann iteration. The two methods provide comparable results and help to show that a curved flight path is no better than a straight one. In Chapter 4, we implement the finite element method within a penalization-regularization-operator-splitting framework for travel-time tomography based on the eikonal equation. Both the travel time and the slowness are recovered with this algorithm in both 2D and 3D. Finally, Chapter 5 contains our conclusions." (Page ii)
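The complexity saving from low-rank approximation rests on a generic fact: once a kernel matrix K is factored as K ≈ U Vᵀ with rank k, applying K to a vector costs O((m + n)k) instead of O(mn). The sketch below illustrates only this generic idea via truncated SVD (Eckart-Young), not the chapter's specific boundary-kernel construction.

```python
import numpy as np

def low_rank(A, k):
    """Best rank-k factorization of A via truncated SVD.

    Returns factors (Uf, Vf) with A ~= Uf @ Vf, where Uf is m x k and
    Vf is k x n; products with these factors replace products with A.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :]
```

For a matrix whose true rank is at most k, the factorization is exact up to floating-point error; for smooth kernels it is a controlled approximation.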
 Title
 Fast solver for large scale eddy current nondestructive evaluation problems
 Creator
 Lei, Naiguang
 Date
 2014
 Collection
 Electronic Theses & Dissertations
 Description

Eddy current testing plays a very important role in the nondestructive evaluation of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material properties or defects in the test specimen, the induced eddy current paths are perturbed, and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magnetoresistance sensors. Due to the complexity of test specimens and inspection environments, theoretical simulation models are extremely valuable for studying basic field/flaw interactions in order to obtain a fuller understanding of nondestructive testing phenomena. Theoretical models of the forward problem are also useful for training and validating automated defect detection systems, since they generate defect signatures that would be expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, such solutions are generally unobtainable for complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, because large-scale problems are extremely time-consuming, accelerations and fast solvers are needed to make numerical models practical. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. The accuracy of this model is validated via comparison with experimental measurements of steam generator tube wall defects.
These simulations, which generate two-dimensional raster scan data, typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems, along with a GPU-based implementation, is also investigated in this research to reduce the computational time.
 Title
 Credit markets, financial crises, and the macroeconomy
 Creator
 Hyun, Junghwan
 Date
 2014
 Collection
 Electronic Theses & Dissertations
 Description

This study consists of three chapters, each of which is an individual paper. The first chapter investigates how the dynamic process of credit reallocation across firms behaves before and after financial crises. Applying the methodology proposed by Davis and Haltiwanger (1992) for measuring job reallocation, we construct measures of credit reallocation across Korean firms over the period 1981-2012. The credit boom preceding the 1997 financial crisis featured a modest intensity of credit reallocation. By contrast, after the crisis and the associated reforms, credit reallocation significantly intensified and started to comove with the business cycle, while credit growth slowed down (deleveraging). The higher dynamism of the credit sector in reallocating liquidity cannot be explained by “flight to quality” episodes but reflects a structural change in the credit reallocation process that has persisted since the end of the crisis. The intensification of credit reallocation appears to have been associated with enhanced allocative efficiency. The second chapter explores the evolution of credit reallocation across Korean non-financial firms over the period 1981-2012. I employ a dynamic latent factor model that decomposes regional credit reallocation rates into national, region-specific and idiosyncratic components. I find that the common factor explaining the common movement across 16 regional credit flows increased after the 1997 financial crisis. The common factor comoves with national excess reallocation: it is positively and strongly correlated with national excess reallocation, while it is negatively correlated with national net credit growth, and it exhibits mild countercyclicality. I examine to what extent the volatility of credit reallocation was driven by national, region-specific and idiosyncratic components.
This study uncovers evidence that the national factor accounts for a sizable fraction of regional reallocation rates of total credit and loans, while it plays only a minor role in explaining fluctuations in regional reallocation rates of bonds. The last chapter explores the relationship between religion and bank performance. The study uses data on credit unions in Korea for the period 2000 to 2007 to investigate the effects of religion on bank performance. The empirical results show that credit unions based on religious institutions not only suffer less from troubled loans but also enjoy higher profits relative to ordinary ones. I find that the religious credit unions' unique features, such as a non-random potential clientele, rich soft information and a reputational incentive to repay, are likely what enables them to outperform.
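The Davis-Haltiwanger-style flow measures adapted above from jobs to credit can be sketched as follows. This is a minimal illustration assuming firm-level credit stocks in two periods; the variable names and the dict-based interface are ours, and the actual study works with rates normalized by average firm-level credit.

```python
def credit_flows(prev, curr):
    """Davis-Haltiwanger-style gross credit flows (sketch).

    prev, curr: dicts mapping firm id -> credit stock in two periods.
    Returns (creation, destruction, net growth, excess reallocation),
    all as rates relative to average aggregate credit.
    """
    firms = set(prev) | set(curr)
    avg_total = (sum(prev.values()) + sum(curr.values())) / 2.0
    pos = sum(max(curr.get(f, 0.0) - prev.get(f, 0.0), 0.0) for f in firms)
    neg = sum(max(prev.get(f, 0.0) - curr.get(f, 0.0), 0.0) for f in firms)
    pos, neg = pos / avg_total, neg / avg_total
    net = pos - neg
    excess = pos + neg - abs(net)   # reallocation beyond what net growth requires
    return pos, neg, net, excess
```

Excess reallocation is the quantity whose comovement with the cycle the first two chapters study: credit churn over and above the amount implied by aggregate credit growth alone.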
 Title
 Kernel methods for biosensing applications
 Creator
 Khan, Hassan Aqeel
 Date
 2015
 Collection
 Electronic Theses & Dissertations
 Description

This thesis examines the design of noise-robust information retrieval techniques based on kernel methods. Algorithms are presented for two biosensing applications: (1) high-throughput protein arrays and (2) non-invasive respiratory signal estimation. Our primary objective in protein array design is to maximize throughput by enabling detection of an extremely large number of protein targets while using a minimal number of receptor spots. This is accomplished by viewing the protein array as a communication channel and evaluating its information transmission capacity as a function of its receptor probes. In this framework, the channel capacity can be used as a tool to optimize probe design, the optimal probes being the ones that maximize capacity. The information capacity is first evaluated for a small-scale protein array with only a few protein targets. We believe this is the first effort to evaluate the capacity of a protein array channel. For this purpose, models of the proteomic channel's noise characteristics and receptor non-idealities, based on experimental prototypes, are constructed. Kernel methods are employed to extend the capacity evaluation to larger protein arrays that can potentially have thousands of distinct protein targets. A specially designed kernel, which we call the Proteomic Kernel, is also proposed. This kernel incorporates knowledge about the biophysics of target and receptor interactions into the cost function employed for evaluating channel capacity. For respiratory estimation, this thesis investigates the estimation of breathing rate and lung volume using multiple non-invasive sensors under motion artifact and high-noise conditions. A spirometer signal is used as the gold standard for error evaluation. A novel algorithm called segregated envelope and carrier (SEC) estimation is proposed. This algorithm approximates the spirometer signal by an amplitude-modulated signal and segregates the estimation of the frequency and amplitude information.
Results demonstrate that this approach enables effective estimation of both breathing rate and lung volume. An adaptive algorithm based on a combination of Gini kernel machines and wavelet filtering is also proposed. This algorithm, titled the wavelet-adaptive Gini (or WAGini) algorithm, employs a novel wavelet-transform-based feature extraction front-end to classify the subject's underlying respiratory state. This information is then used to select the parameters of the adaptive kernel machine according to the subject's respiratory state. Results demonstrate significant improvement in breathing rate estimation compared to traditional respiratory estimation techniques.
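To make "information capacity of the array channel" concrete, the standard Blahut-Arimoto iteration for a discrete memoryless channel is sketched below. This is only the textbook routine on a finite transition matrix; the proteomic channel in the thesis is continuous and the capacity evaluation is kernelized, so treat this as background, not the thesis's method.

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Capacity (in nats) of a discrete memoryless channel.

    W[x, y] = P(y | x). Alternates between updating the output
    distribution q and reweighting the input distribution p by
    exp(D(W(.|x) || q)), the classic Blahut-Arimoto scheme.
    """
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)            # input distribution
    for _ in range(iters):
        q = p @ W                            # induced output distribution
        ratio = np.where(W > 0, W, 1.0) / np.where(q > 0, q, 1.0)
        d = (W * np.log(ratio)).sum(axis=1)  # D(W(.|x) || q) per input
        p = p * np.exp(d)
        p /= p.sum()
    q = p @ W
    ratio = np.where(W > 0, W, 1.0) / np.where(q > 0, q, 1.0)
    d = (W * np.log(ratio)).sum(axis=1)
    return float(p @ d)
```

A noiseless binary channel gives log 2 nats; a completely noisy one gives 0, matching the intuition that capacity measures how many distinguishable targets the array can report.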
 Title
 Statistical properties of some almost Anosov systems
 Creator
 Zhang, Xu
 Date
 2016
 Collection
 Electronic Theses & Dissertations
 Description

We investigate polynomial lower and upper bounds for the decay of correlations of a class of two-dimensional almost Anosov diffeomorphisms with respect to their Sinai-Ruelle-Bowen (SRB) measures, where an almost Anosov diffeomorphism is a system that is hyperbolic everywhere except at one point. At the indifferent fixed point, the Jacobian matrix is the identity. The degrees of the bounds are determined by the expansion and contraction rates as orbits approach the indifferent fixed point, and can be expressed using the coefficients of the third-order terms in the Taylor expansions of the diffeomorphisms at the indifferent fixed points. We discuss the relationship between the existence of SRB measures and the differentiability of some almost Anosov diffeomorphisms near the indifferent fixed points in dimensions greater than one. The eigenvalue of the Jacobian matrix at the indifferent fixed point along the one-dimensional contraction subspace is less than one, while the other eigenvalues, along the expansion subspaces, are equal to one. As a consequence, there are twice-differentiable almost Anosov diffeomorphisms that admit infinite SRB measures in two- or three-dimensional spaces, and there exist twice-differentiable almost Anosov diffeomorphisms with SRB measures in dimensions greater than three. Further, we obtain polynomial lower and upper bounds for the correlation functions of these almost Anosov maps that admit SRB measures.
 Title
 A container-attachable inertial sensor for real-time hydration tracking
 Creator
 Griffith, Henry
 Date
 2019
 Collection
 Electronic Theses & Dissertations
 Description

The underconsumption of fluid is associated with multiple adverse health outcomes, including reduced cognitive function, obesity, and cancer. To aid individuals in maintaining adequate hydration, numerous sensing architectures for tracking fluid intake have been proposed. Amongst the various approaches considered, container-attachable inertial sensors offer a non-wearable solution capable of estimating aggregate consumption across multiple drinking containers. The research described herein demonstrates techniques for improving the performance of these devices. A novel sip detection algorithm designed to accommodate the variable duration and sparse occurrence of drinking events is presented at the beginning of this dissertation. The proposed technique identifies drinks using a two-stage segmentation and classification framework. Segmentation is performed using a dynamic partitioning algorithm which spots the characteristic inclination pattern of the container during drinking. Candidate drinks are then distinguished from handling activities with similar motion patterns using a support vector machine classifier. The algorithm is demonstrated to improve the true positive detection rate from 75.1% to 98.8% versus a benchmark approach employing static segmentation. Multiple strategies for improving drink volume estimation performance are demonstrated in the latter portion of this dissertation. The proposed techniques are verified through a large-scale data collection consisting of 1,908 drinks consumed by 84 individuals over 159 trials. Support vector machine regression models are shown to improve per-drink estimation accuracy versus the prior state-of-the-art for a single inertial sensor, with mean absolute percentage error reduced by 11.1%. Aggregate consumption accuracy is also improved versus previously reported results for a container-attachable device. An approach for computing aggregate consumption using fill level estimates is also demonstrated.
Fill level estimates are shown to exhibit superior accuracy with reduced inter-subject variance versus volume models. A heuristic fusion technique for further improving these estimates is also introduced herein. Heuristic fusion is shown to reduce root mean square error versus direct estimates by over 30%. The dissertation concludes by demonstrating the ability of the sensor to operate across multiple containers.
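The segmentation stage of the two-stage detector described above can be caricatured as run-finding on the container's tilt signal. This is a deliberately simplified stand-in, with an assumed pitch signal, threshold and minimum length; the dissertation's dynamic partitioning is more sophisticated, and each candidate segment would then be passed to an SVM classifier to reject non-drinking handling motions.

```python
def segment_drinks(pitch, threshold=30.0, min_len=3):
    """Toy tilt-based drink segmentation (threshold and units assumed).

    Returns (start, end) index pairs for maximal runs where the container
    pitch (degrees) exceeds `threshold` for at least `min_len` samples.
    """
    segments, start = [], None
    for i, p in enumerate(pitch):
        if p > threshold and start is None:
            start = i                       # run begins
        elif p <= threshold and start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None                    # run ends
    if start is not None and len(pitch) - start >= min_len:
        segments.append((start, len(pitch)))
    return segments
```

The minimum-length filter is the crude analogue of accommodating "variable duration and sparse occurrence": brief tilts from handling are dropped before classification.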
 Title
 Optimal design problems in thin-film and diffractive optics
 Creator
 Wang, Yuliang
 Date
 2013
 Collection
 Electronic Theses & Dissertations
 Description

Optical components built from thin-film layered structures are technologically very important. Applications include, but are not limited to, energy conversion and conservation, data transmission and conversion, space technology, and imaging. In practice these structures are defined by various parameters such as the refractive-index profile, the layer thicknesses and the period. The problem of finding the combination of parameters that yields the spectral response closest to a given target function is referred to as optimal design. This dissertation considers several topics in the mathematical modeling and optimal design of these structures through numerical optimization. A key step in numerical optimization is to define an objective function that measures the discrepancy between the target performance and that of the current solution. Our first topic is the impact of the objective function, and in particular its metric, on the optimal solution and its performance. This is studied through numerical experiments with different types of antireflection coatings using two-material multilayers. The results confirm existing statements and provide a few new findings, e.g. that some specific metrics can yield markedly better solutions than others. Rugates are optical coatings with continuous refractive-index profiles. They have received much attention recently due to technological advances and their potentially better optical performance and environmental properties. The Fourier transform method is a widely used technique for the design of rugates; however, it is based on approximate expressions with strict assumptions and has many practical limitations. Our second topic is the optimal design of rugates through numerical optimization of objective functions with penalty terms. We found solutions with similar performance, as well as novel solutions, by using different metrics in the penalty term.
Existing methods used only local basis functions, such as piecewise constant or linear functions, for the discretization of the refractive-index profile. Our third topic is the use of global basis functions, such as sinusoidal functions, in the discretization. A simple transformation is used to overcome the difficulty of bound constraints, and the results are very promising. Both multilayer and rugate coatings can be obtained using this method. Diffraction gratings are thin-film structures whose optical properties vary periodically along one or two directions. Our final topic is the optimal design of such structures in the broadband case. The objective functions and their gradients are obtained by solving variational problems and their adjoints with the finite element method. Interesting phenomena are observed in the numerical experiments, and limitations and future work in this direction are pointed out.
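The spectral response being matched in these objective functions is typically a reflectance curve, which for a multilayer follows from the standard characteristic-matrix (transfer-matrix) computation. The sketch below is the textbook normal-incidence, non-absorbing version, included only to make the objective-function discussion concrete; a metric choice then compares, e.g., the sum of squared versus absolute deviations of R(λ) from a target.

```python
import numpy as np

def reflectance(layers, n_sub, wavelength, n_inc=1.0):
    """Normal-incidence reflectance of a thin-film stack.

    layers: list of (refractive index, physical thickness) pairs, incidence
    side first. Builds the product of per-layer characteristic matrices,
    converts it to an input admittance Y, and returns R = |r|^2.
    """
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2.0 * np.pi * n * d / wavelength      # phase thickness
        Mj = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                       [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ Mj
    B, C = M @ np.array([1.0, n_sub])
    Y = C / B                                         # input optical admittance
    r = (n_inc - Y) / (n_inc + Y)
    return float(abs(r) ** 2)
```

The classic sanity check is a quarter-wave coating with n = sqrt(n_inc * n_sub), which drives the reflectance of the design wavelength to zero, exactly the behavior an antireflection objective rewards.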
 Title
 Variable selection in varying multi-index coefficient models with applications to gene-environmental interactions
 Creator
 Guan, Shunjie
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

Variable selection is an important topic in the modern statistics literature, and the varying multi-index coefficient model (VMICM) is a promising tool for studying the synergistic interaction effects between genes and multiple environmental exposures. In this dissertation, we propose a variable selection approach for VMICM and generalize this approach to generalized-regression and quantile-regression settings. Their theoretical properties, simulation performance and application in genetic research are studied. Complicated diseases have both environmental and genetic risk factors, and a large amount of research has been devoted to identifying gene-environment (G×E) interactions. Defined as a different effect of a genotype on disease risk in persons with different environmental exposures (Ottman (1996)), a G×E interaction lets us view environmental exposures as modulating factors in the effect of a gene. Based on this idea, we derive a three-stage variable selection approach to estimate different effects of gene variables: varying, constant and zero, which respectively correspond to a nonlinear G×E effect, no G×E effect and no genetic effect. For multiple environmental exposure variables, we also select and estimate the important environmental variables that contribute to the synergistic interaction effect. We theoretically establish the oracle property of the three-stage estimation method. We conducted simulation studies to further evaluate the finite sample performance of the method, considering both continuous and discrete predictors. Application to a real data set demonstrated the utility of the method. In Chapter 3, we generalize this variable selection approach to the binary response setting. Instead of minimizing a penalized squared error loss, we maximize a penalized log-likelihood function. We also theoretically establish the oracle property of the proposed selection approach in the binary response setting, and demonstrate the performance of the model via simulation.
Finally, we applied our model to a Type II diabetes data set. Compared to conditional mean regression, conditional quantile regression provides a more comprehensive understanding of the distribution of the response variable at different quantiles. Even if the center of the distribution is our only interest, median regression (a special case of quantile regression) offers a more robust estimator. Hence, we extend our three-stage variable selection approach to the quantile regression setting in Chapter 4. We demonstrate the finite sample performance of the model via extensive simulation, and apply our model to a birth weight data set.
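The quantile-regression extension replaces the squared error loss with the standard check (pinball) loss ρ_τ(u) = u (τ − 1[u < 0]), whose minimizer over a constant is the τ-th quantile; τ = 0.5 recovers the robust median regression mentioned above. A one-line version for scalar residuals (the function name is ours):

```python
def pinball(u, tau):
    """Check (pinball) loss rho_tau(u) = u * (tau - 1[u < 0]).

    Positive residuals are weighted by tau, negative ones by (1 - tau),
    so minimizing its expectation targets the tau-th conditional quantile.
    """
    return u * (tau - (1.0 if u < 0 else 0.0))
```

For τ = 0.9, an under-prediction (u = 2) costs 1.8 while an over-prediction of the same size costs only 0.2, which is what pulls the fit toward the upper tail.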