Search results
(1–20 of 96)
 Title
 Model building incorporating discrimination between rival mathematical models in heat transfer
 Creator
 Van Fossen, Gerald James, 1942-
 Date
 1973
 Collection
 Electronic Theses & Dissertations
 Title
 Constitutive modeling of the thermal response of rubberlike materials
 Creator
 Wang, Yuhui
 Date
 2006
 Collection
 Electronic Theses & Dissertations
 Title
 Multiscale modeling and estimation of Poisson processes with applications to emission computed tomography
 Creator
 Timmermann, Klaus Edmond
 Date
 2000
 Collection
 Electronic Theses & Dissertations
 Title
 The magnetic susceptibility of oxygen and nitric oxide at low field strength
 Creator
 Burris, Albert
 Date
 1943
 Collection
 Electronic Theses & Dissertations
 Title
 Modeling spatial and temporal variations in tourism-related employment in Michigan
 Creator
 Chen, Sz-Reng
 Date
 1988
 Collection
 Electronic Theses & Dissertations
 Title
 Modeling municipal expenditures in Michigan
 Creator
 Selim, Jahangir
 Date
 1988
 Collection
 Electronic Theses & Dissertations
 Title
 A comparison of forecasting accuracy of several quantitative forecasting methods : application to lodging sales tax and use tax collections in Michigan
 Creator
 Kim, Jong Ho
 Date
 1994
 Collection
 Electronic Theses & Dissertations
 Title
 Field machinery system modeling and requirements for selected Michigan cash crop production systems
 Creator
 Singh, Devindar
 Date
 1978
 Collection
 Electronic Theses & Dissertations
 Title
 A computerized sensitivity analysis of selected assumptions in an annual econometric model of the U.S. economy : the electric Klein-Goldberger
 Creator
 Havenner, Arthur Melvin, 1943-
 Date
 1973
 Collection
 Electronic Theses & Dissertations
 Title
 Ill-posed problems in optimal control systems and a method to solve them
 Creator
 Hedayatolah-Tabrizi, Lili
 Date
 1983
 Collection
 Electronic Theses & Dissertations
 Title
 Multichix, a computer model that projects receipts and expenses for egg production enterprises
 Creator
 Jacobs, Roger Dean
 Date
 1978
 Collection
 Electronic Theses & Dissertations
 Title
 Mathematical modeling and computation of the optical response from nanostructures
 Creator
 Sun, Yuanchang
 Date
 2009
 Collection
 Electronic Theses & Dissertations
 Title
 High-order computer-assisted estimates of topological entropy
 Creator
 Grote, Johannes
 Date
 2009
 Collection
 Electronic Theses & Dissertations
 Title
 Transport properties of random Schrödinger operators on correlated environments
 Creator
 Bezerra de Matos, Rodrigo
 Date
 2020
 Collection
 Electronic Theses & Dissertations
 Description

This Ph.D. thesis presents recent developments in the theory of random Schrödinger operators. Unlike much of the literature on the subject, our main results concern potentials which are not independent at distinct sites but, rather, display some form of long-range correlation. These are natural objects to investigate if one wishes to understand the long-term behavior of a single particle which evolves in a disordered environment but also interacts with different members of this environment (other particles, spins, etc.).

In Chapter 2 it is shown that, within the Hartree-Fock approximation for the disordered Hubbard Hamiltonian, weakly interacting fermions at positive temperature exhibit localization, suitably defined as exponential decay of eigenfunction correlators. Our result holds in any dimension in the regime of large disorder, and at any disorder in the one-dimensional case. As a consequence of our methods, we are able to show Hölder continuity of the integrated density of states with respect to energy, disorder, and interaction using known techniques. This is based on joint work with Jeffrey Schenker.

Chapter 3 is based on joint work with Jeffrey Schenker and Rajinder Mavi. There we present simple, physically motivated examples where small geometric changes on a two-dimensional graph G, combined with high disorder, have a significant impact on the spectral and dynamical properties of the random Schrödinger operator A_G + V_ω obtained by adding a random potential to the graph's adjacency operator. Unlike the standard Anderson model, the random potential is constant along vertical lines, hence the models exhibit long-range correlations. Moreover, one of the models presented here is a natural example where the transient and recurrent components of the absolutely continuous spectrum, introduced by Avron and Simon in [9], coexist and allow us to capture a sharp phase transition present in the system.
 Title
 Comparison of methods for detecting violations of measurement invariance with continuous construct indicators using latent variable modeling
 Creator
 Zhang, Mingcai (Graduate of Michigan State University)
 Date
 2020
 Collection
 Electronic Theses & Dissertations
 Description

Measurement invariance (MI) refers to the property that a measurement instrument measures the same concept in the same way across two or more groups. In educational and psychological testing practice, however, the assumption of MI is often violated due to contamination by possible non-invariance in the measurement models. Within the framework of Latent Variable Modeling (LVM), methodologists have developed different statistical methods to identify the non-invariant components. Among these methods, the free baseline method (FR) is popularly employed, but it is limited by the necessity of choosing a truly invariant reference indicator (RI). Two other methods, the Benjamini-Hochberg method (BH) and the alignment method (AM), are exempt from the RI setting: the BH method applies the false discovery rate (FDR) procedure, while the AM method optimizes the model estimates under the assumption of approximate invariance.

The purpose of the present study is to address the problem of RI setting by comparing the BH method and the AM method with the traditional free baseline method through both a simulation study and an empirical data analysis. More specifically, the simulation study investigates the performance of the three methods by varying the sample sizes and the characteristics of non-invariance embedded in the measurement models; the characteristics of non-invariance are distinguished as the location of non-invariant parameters, the degree of non-invariant parameters, and the magnitude of model non-invariance. The performance of the three methods is also compared on an empirical dataset (the Openness for Problem Solving Scale in PISA 2012) obtained from three countries (Shanghai-China, Australia, and the United States).

The simulation study finds that a wrong RI choice heavily impacts the FR method, which then produces high type I error rates and low statistical power. Both the BH method and the AM method perform better than the FR method in this setting. Comparatively speaking, the benefit of the BH method is that it achieves the highest power for detecting non-invariance; its power increases with a lower magnitude of model non-invariance and with increasing sample size and degree of non-invariance. The AM method performs best with respect to type I errors: its estimated type I error rates are low under all simulation conditions. In the empirical study, the BH method and the AM method perform similarly in estimating the invariance/non-invariance patterns among the three country pairs, whereas the FR method, for which the RI is the first item by default, recovers a different invariance/non-invariance pattern. The results can help methodologists gain a better understanding of the potential advantages of the BH method and the AM method over the traditional FR method. They also highlight the importance of correctly specifying the model non-invariance at the indicator level. Based on the characteristics of the non-invariant components, practitioners may consider deleting or modifying non-invariant indicators, or freeing the non-invariant components while building partially invariant models, in order to improve the quality of cross-group comparisons.
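The Benjamini-Hochberg step-up procedure that the BH method applies can be sketched as follows. This is a minimal illustration with made-up p-values, not the author's implementation:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a list of booleans: True where the null hypothesis is
    rejected while controlling the false discovery rate at level alpha."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-based) with p_(k) <= (k/m) * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    # Reject the k_max smallest p-values (even if some individually
    # exceed their own threshold -- this is the "step-up" part).
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Hypothetical p-values from tests on several model parameters.
pvals = [0.001, 0.008, 0.039, 0.041, 0.60]
print(benjamini_hochberg(pvals, alpha=0.05))  # → [True, True, False, False, False]
```

In the invariance-testing context of the abstract, each p-value would come from a test of one measurement parameter's equality across groups, and the rejected hypotheses would flag the candidate non-invariant components.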
 Title
 Modeling galactic chemical evolution in cosmological simulations
 Creator
 Peruta, Carolyn Cynthia
 Date
 2013
 Collection
 Electronic Theses & Dissertations
 Description

The most fundamental challenges to models of galactic chemical evolution (GCE) are uncertainties in the basic inputs, including the properties of the stellar initial mass function (IMF), stellar nucleosynthetic yields, and the rate of return of mass and energy to the interstellar and intergalactic medium by Type Ia and II supernovae and stellar winds. In this dissertation, we provide a critical examination of widely available stellar nucleosynthetic yield data, with an eye toward modeling GCE in the broad scope of cosmological hydrodynamical simulations. We examine the implications of uncertain inputs for the Galactic stellar IMF, and of nucleosynthetic yields from stellar-evolution calculations, for our ability to ask detailed questions regarding the observed Galactic chemical-abundance patterns. We find a marked need for stellar feedback data from stars of initial mass 8 to 12 M_sun and above 40 M_sun, and for initial stellar metallicities above and below solar, Z_sun = 0.02. We find that the largest discrepancies among nucleosynthetic yield calculations are due to the various groups' treatment of hot bottom burning, the formation of the 13C pocket in asymptotic giant-branch (AGB) stars, and details of mass loss, rotation, and convection in all stars. Our model of GCE is used to post-process simulations to explore in greater detail the nucleosynthetic evolution of the stellar populations and the interstellar/intergalactic medium, and to compare directly to the chemical abundances of the Milky Way stellar halo and dwarf spheroidal galaxy stellar populations.
 Title
 Optimal design problems in thin-film and diffractive optics
 Creator
 Wang, Yuliang
 Date
 2013
 Collection
 Electronic Theses & Dissertations
 Description

Optical components built from thin-film layered structures are technologically very important. Applications include, but are not limited to, energy conversion and conservation, data transmission and conversion, space technology, and imaging. In practice these structures are defined by various parameters such as the refractive-index profile, the layer thicknesses, and the period. The problem of finding the combination of parameters which yields the spectral response closest to a given target function is referred to as optimal design. This dissertation considers several topics in the mathematical modeling and optimal design of these structures through numerical optimization.

A key step in numerical optimization is to define an objective function that measures the discrepancy between the target performance and that of the current solution. Our first topic is the impact of the objective function, its metric in particular, on the optimal solution and its performance. This is studied through numerical experiments with different types of antireflection coatings using two-material multilayers. The results confirm existing statements and provide a few new findings, e.g. that some specific metrics can yield markedly better solutions than others.

Rugates are optical coatings with continuous refractive-index profiles. They have received much attention recently due to technological advances and their potentially better optical performance and environmental properties. The Fourier transform method is a widely used technique for the design of rugates; however, it is based on approximate expressions with strict assumptions and has many practical limitations. Our second topic is the optimal design of rugates through numerical optimization of objective functions with penalty terms. We found solutions with similar performance, as well as novel solutions, by using different metrics in the penalty term.

Existing methods used only local basis functions, such as piecewise constant or linear functions, for the discretization of the refractive-index profile. Our third topic is the use of global basis functions, such as sinusoidal functions, in the discretization. A simple transformation is used to overcome the difficulty of bound constraints, and the results are very promising. Both multilayer and rugate coatings can be obtained using this method.

Diffraction gratings are thin-film structures whose optical properties vary periodically along one or two directions. Our final topic is the optimal design of such structures in the broadband case. The objective functions and their gradients are obtained by solving variational problems and their adjoints with the finite element method. Interesting phenomena are observed in the numerical experiments, and limitations and future work in this direction are pointed out.
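The abstract does not spell out the "simple transformation" used to handle bound constraints. One common trick of this kind, sketched below purely as an assumption, maps an unconstrained optimization variable into a bounded interval with a sinusoid, so a generic unconstrained optimizer never violates the physical bounds on the refractive index:

```python
import math

def to_bounded(t, lo, hi):
    """Map an unconstrained variable t to the interval [lo, hi]
    via a sinusoidal change of variables (illustrative, not
    necessarily the dissertation's exact transformation)."""
    return lo + (hi - lo) * (math.sin(t) + 1.0) / 2.0

# An optimizer can vary t freely; the refractive index
# n = to_bounded(t, n_min, n_max) always stays within its bounds.
# Hypothetical bounds for a two-material coating (MgF2 / TiO2):
n = to_bounded(math.pi / 2, 1.38, 2.35)  # sin = 1, so n sits at the upper bound
```

The cost is that the mapping is many-to-one and its derivative vanishes at the bounds, which an optimizer's convergence behavior may reflect.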
 Title
 Fast solver for large scale eddy current nondestructive evaluation problems
 Creator
 Lei, Naiguang
 Date
 2014
 Collection
 Electronic Theses & Dissertations
 Description

Eddy current testing plays a very important role in the nondestructive evaluation of conducting test samples. In accordance with Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents in turn generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material properties or defects in the test specimen, the induced eddy current paths are perturbed, and the associated magnetic fields can be detected by coils or magnetic field sensors such as Hall elements or magnetoresistance sensors. Due to the complexity of test specimens and inspection environments, theoretical simulation models are extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of nondestructive testing phenomena. Theoretical models of the forward problem are also useful for training and validation of automated defect detection systems, since they generate defect signatures that would be expensive to replicate experimentally.

In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, such solutions are generally not obtainable in practice, largely due to complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, because large-scale problems are very time-consuming, accelerations and fast solvers are needed to make numerical models practical.

This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. The accuracy of this model is validated via comparison with experimental measurements of steam generator tube wall defects. Simulations generating two-dimensional raster scan data typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems, together with a GPU-based implementation, is therefore also investigated in this research to reduce the computational time.
 Title
 Near duplicate image search
 Creator
 Li, Fengjie
 Date
 2014
 Collection
 Electronic Theses & Dissertations
 Description

Information retrieval addresses the fundamental problem of identifying the objects in a database that satisfy the information needs of users. In the face of information overload, the major challenge in search algorithm design is to ensure that useful information can be found both accurately and efficiently in large databases. To address this challenge, different indexing and retrieval methods have been proposed for different types of data, namely sparse data (e.g. documents), dense data (e.g. dense feature vectors), and bags-of-features (e.g. images represented by local features). For sparse data, inverted indexes and document retrieval models have proved very effective for large-scale retrieval problems. For dense data and bag-of-features data, however, there are still open problems. For example, Locality Sensitive Hashing, a state-of-the-art method for searching high-dimensional vectors, often fails to make a good trade-off between precision and recall: it tends to achieve high precision with low recall, or vice versa. The bag-of-words model, a popular approach for searching objects represented as bags-of-features, has limited performance because of the information loss incurred during the quantization procedure.

Since the general problem of searching objects represented as dense vectors and bags-of-features may be too challenging, this dissertation focuses on near duplicate search, in which the matched objects are almost identical to the query. By effectively exploiting the statistical properties of near duplicates, we can design more effective indexing schemes and search algorithms. The focus of this dissertation is thus to design new indexing methods and retrieval algorithms for near duplicate search in large-scale databases that accurately capture data similarity and deliver more accurate and efficient search. Below, we summarize the main contributions of this dissertation.

Our first contribution is a new algorithm for searching near duplicate bag-of-features data. The proposed algorithm, named random seeding quantization, is more efficient in generating bag-of-words representations for near duplicate images. The new scheme is motivated by approximating the optimal partial matching between bags-of-features, and thus produces a bag-of-words representation that captures the true similarities of the data, leading to more accurate and efficient retrieval of bag-of-features data.

Our second contribution, termed Random Projection Filtering (RPF), is a search algorithm designed for efficient near duplicate vector search. By explicitly exploiting the statistical properties of near duplicates, the algorithm projects high-dimensional vectors into a lower-dimensional space and filters out irrelevant items. This effective filtering procedure makes RPF more accurate and efficient at identifying near duplicate objects in databases.

Our third contribution is to develop and evaluate a new randomized range search algorithm for near duplicate vectors in high-dimensional spaces, termed Random Projection Search. Unlike RPF, this algorithm is suitable for a wider range of applications because it does not require sparsity constraints for high search accuracy. The key idea is to project both the data points and the query point into a one-dimensional space by a random projection, and then perform a one-dimensional range search, using binary search, to find the subset of data points that lie within a given range of the query. We prove a theoretical guarantee for the proposed algorithm and evaluate its empirical performance on a dataset of 1.1 billion image features.
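The key idea behind the third contribution, projecting data and query onto one dimension and binary-searching a range, can be sketched as follows. This is a simplified single-projection illustration; the function names and parameters are ours, not the dissertation's:

```python
import bisect
import math
import random

def build_index(points, dim, seed=0):
    """Project each vector onto a random unit direction and sort by the projection."""
    rng = random.Random(seed)
    raw = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in raw))
    direction = [x / norm for x in raw]  # unit norm: projection is 1-Lipschitz
    proj = sorted((sum(p[i] * direction[i] for i in range(dim)), p) for p in points)
    return direction, proj

def range_search(index, query, radius):
    """Return candidate points whose 1-D projection lies within `radius`
    of the query's projection. Because projecting onto a unit vector can
    only shrink distances, this is a superset of the true neighbors."""
    direction, proj = index
    q = sum(query[i] * direction[i] for i in range(len(query)))
    keys = [k for k, _ in proj]
    lo = bisect.bisect_left(keys, q - radius)
    hi = bisect.bisect_right(keys, q + radius)
    return [p for _, p in proj[lo:hi]]

# Usage: every point within distance 1.5 of the query is guaranteed
# to appear among the candidates (which may also contain false positives).
index = build_index([(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)], dim=2)
candidates = range_search(index, (0.0, 0.0), 1.5)
```

A practical implementation would repeat this with several independent projections and intersect or verify the candidate sets to prune false positives; the sketch above only shows the core projection-plus-binary-search step.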
 Title
 Credit markets, financial crises, and the macroeconomy
 Creator
 Hyun, Junghwan
 Date
 2014
 Collection
 Electronic Theses & Dissertations
 Description

This study consists of three chapters, each of which is an individual paper. The first chapter investigates how the dynamic process of credit reallocation across firms behaves before and after financial crises. Applying the methodology proposed by Davis and Haltiwanger (1992) for measuring job reallocation, we construct measures of credit reallocation across Korean firms over the period 1981–2012. The credit boom preceding the 1997 financial crisis featured a modest intensity of credit reallocation. By contrast, after the crisis and the associated reforms, credit reallocation significantly intensified and started to comove with the business cycle, while credit growth slowed down (deleveraging). The higher dynamism of the credit sector in reallocating liquidity cannot be explained by "flight to quality" episodes but reflects a structural change in the credit reallocation process that has persisted since the end of the crisis. The intensification of credit reallocation appears to have been associated with enhanced allocative efficiency.

The second chapter explores the evolution of credit reallocation across Korean nonfinancial firms over the period 1981–2012. I employ a dynamic latent factor model that decomposes regional credit reallocation rates into national, region-specific, and idiosyncratic components. I find that the common factor explaining comovement across 16 regional credit flows increased after the 1997 financial crisis. The common factor is positively and strongly correlated with national excess reallocation, is negatively correlated with national net credit growth, and exhibits mild countercyclicality. I also examine to what extent the volatility of credit reallocation was driven by the national, region-specific, and idiosyncratic components. This study uncovers evidence that the national factor accounts for a sizable fraction of regional reallocation rates of total credit and loans, while it plays only a minor role in explaining fluctuations in regional reallocation rates of bonds.

The last chapter explores the relationship between religion and bank performance, using data on credit unions in Korea for the period 2000 to 2007. The empirical results show that credit unions based at religious institutions not only suffer less from troubled loans but also enjoy higher profits relative to ordinary ones. I find that the religious credit unions' unique features, such as a non-random potential clientele, rich soft information, and a reputational incentive to repay, are likely what enables them to outperform.