Search results
(21 - 40 of 1,890)
- Title
- Dynamical Systems Analysis Using Topological Signal Processing
- Creator
- Myers, Audun
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Topological Signal Processing (TSP) is the study of time series data through the lens of Topological Data Analysis (TDA)—a process of analyzing data through its shape. This work focuses on developing novel TSP tools for the analysis of dynamical systems. A dynamical system is a term used broadly for a system whose state changes in time. These systems are formally assumed to be a continuum of states whose values are real numbers. However, real-life measurements of these systems provide only finite information from which the underlying dynamics must be gleaned. This necessitates drawing conclusions about the continuous structure of a dynamical system from noisy, finite samples or time series. The interest often lies in capturing qualitative changes in the system's behavior, known as bifurcations, through changes in the shape of the state space as one or more of the system parameters vary. Current literature on time series analysis aims to study this structure by searching for a lower-dimensional representation; however, the need for user-defined inputs, the sensitivity of those inputs to noise, and the expensive computational effort limit the usability of available methods, especially for in-situ signal processing. This research aims to use and develop TSP tools to extract useful information about the underlying dynamical system's structure. The first research direction investigates the use of sublevel set persistence—a form of persistent homology from TDA—for signal processing, with applications including parameter estimation of a damped oscillator and signal complexity measures to detect bifurcations. The second research direction applies TDA to complex networks to investigate how the topology of such networks corresponds to the state space structure. We show how TSP applied to complex networks can be used to detect changes in signal complexity, including chaotic versus periodic dynamics in a noise-contaminated signal.
The last research direction focuses on the topological analysis of dynamical networks. A dynamical network is a graph whose vertices and edges have state values driven by a highly interconnected dynamical system. We show how zigzag persistence—a modification of persistent homology—can be used to understand the changing structure of such dynamical networks.
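The idea of sublevel set persistence for a one-dimensional signal can be illustrated with a short, self-contained sketch (this is not the dissertation's implementation): 0-dimensional sublevel-set persistence pairs each local minimum, which births a connected component of the sublevel set, with the merge value at which that component is absorbed into an older one.

```python
def sublevel_persistence(signal):
    """0-dimensional sublevel-set persistence of a sampled 1D signal.

    Each local minimum births a connected component of the sublevel set
    {t : f(t) <= v}; when two components merge at a local maximum, the
    younger one (larger birth value) dies, yielding a (birth, death) pair.
    """
    n = len(signal)
    order = sorted(range(n), key=lambda i: signal[i])
    parent = [None] * n          # union-find forest; None = not yet entered
    birth = {}                   # component root -> birth value

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    pairs = []
    for i in order:              # sweep samples by increasing value
        parent[i] = i
        birth[i] = signal[i]
        for j in (i - 1, i + 1): # attach to already-entered neighbors
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                if birth[young] < signal[i]:      # skip zero-persistence pairs
                    pairs.append((birth[young], signal[i]))
                parent[young] = old
    root = find(order[0])
    pairs.append((birth[root], max(signal)))  # oldest component: pair with global max
    return sorted(pairs)
```

For the signal `[0, 2, 1, 3]` this yields `[(0, 3), (1, 2)]`: the global minimum persists to the global maximum, while the shallow dip at value 1 dies at the local maximum 2. Short-lived pairs like the latter are exactly the low-persistence features a complexity measure would treat as noise.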
- Title
- ASSESSING DISASTER MANAGEMENT EFFECTS ON RECOVERY OUTCOMES IN RURAL POST-DISASTER JAPAN
- Creator
- Ward, Kayleigh
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
As a country frequented by natural disasters, Japan has robust disaster management systems that can be deployed quickly to mitigate human, environmental, and economic losses. However, these systems tend to be most effective when handling small-scale, localized disasters. In the face of the 2011 Great East Japan Earthquake, which decimated the northeastern communities of the Tohoku region, Japan's disaster management system collapsed, unable to handle such large-scale and widespread damage. In the ten years since the disaster, many rural communities have contended with a variety of social and economic problems, often left unremedied despite ongoing government intervention. In this context, this dissertation explores the complex problems of Minamisanriku, Miyagi—a rural coastal community decimated by the earthquake. By engaging and collaborating with organizations in this community, I assess the connections between disaster management and post-disaster recovery outcomes through various applications of social capital and power. I first investigate how historical legacies of national government policies influenced recovery outcomes in the Tohoku region and how these processes shaped economic restructuring and social development in Minamisanriku during reconstruction. Next, I consider how governance structures within Miyagi prefecture influenced the social and economic development of Minamisanriku during reconstruction. Lastly, I examine how disaster management affects residents' ability to handle locally identified problems and, in turn, how residents utilize their social capital to drive social and economic recovery. I assess several key ideas on the connections between forms and theories of social capital and how they affect long-term disaster recovery outcomes through the disaster management process.
The dissertation is situated to improve our understanding of how social capital affects rural communities' ability to respond to these troubles and to craft context-specific solutions to them. It also offers a variety of policy recommendations on improving community-centered recovery within disaster management frameworks.
- Title
- Supervised Dimension Reduction Techniques for High-Dimensional Data
- Creator
- Molho, Dylan
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
The data sets arising in modern science and engineering are often extremely large, befitting the era of big data. These data sets are not only large in the number of samples; they may also have a large number of features, placing each data point in a high-dimensional space. Unique problems arise when the dimension of the data is of the same or even greater order than the sample size. This scenario is known in statistics as the High Dimension, Low Sample Size (HDLSS) problem. In this paradigm, many standard statistical estimators perform sub-optimally and in some cases cannot be computed at all. To overcome the barriers found in HDLSS scenarios, one must make additional assumptions on the data, either with explicit formulations or with implicit beliefs about the behavior of the data. The first type of research leads to structural assumptions placed on the probability model that generates the data, which allow alterations to classical methods that yield theoretically optimal estimators for well-defined tasks. The second type of research, in contrast, makes general assumptions usually based on the causal nature of the chosen real-world application, where the data is assumed to have dependencies between the parameters. This dissertation develops two novel algorithms that operate successfully in the HDLSS paradigm. We first propose the Generalized Eigenvalue (GEV) estimator, a unified sparse projection regression framework for estimating generalized eigenvector problems. Unlike existing work, we reformulate a sequence of computationally intractable non-convex generalized Rayleigh quotient optimization problems into a computationally efficient simultaneous linear regression problem, augmented with a sparse penalty to handle high-dimensional predictors.
We showcase the applications of our method on three iconic problems in statistics: sliced inverse regression (SIR), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). We show that the reformulated linear regression problem recovers the same projection space obtained by the original generalized eigenvalue problem. Statistically, we establish nonasymptotic error bounds for the proposed estimator in the applications of SIR and LDA, and prove these rates are minimax optimal. We present how the GEV estimator is applied to the CCA problem and adapt the method to a robust Huber-loss-based formulation for noisy data. We test our framework on both synthetic and real datasets and demonstrate its superior performance compared with other state-of-the-art methods in high-dimensional statistics. The second algorithm is scJEGNN, a graph neural network (GNN) tailored to the task of data integration for HDLSS single-cell sequencing data. We show that, with its unique model, the GNN is able to leverage structural information about the biological data relations to perform a joint embedding of multiple modalities of single-cell gene expression data. The model is applied to data from the NeurIPS 2021 competition for Open Problems in Single-Cell Analysis, and we demonstrate that it outperforms the top teams on the joint embedding task.
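The generalized eigenvector problems behind SIR, LDA, and CCA share the form A v = λ B v with B positive definite. The dense baseline that a sparse regression reformulation is measured against can be sketched by whitening with B^{-1/2} (a generic numpy illustration with random matrices, not the dissertation's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5
M = rng.standard_normal((p, p))
A = (M + M.T) / 2                      # symmetric "signal" matrix
N = rng.standard_normal((p, p))
B = N @ N.T + p * np.eye(p)            # symmetric positive-definite

# Whitening: A v = lam B v  <=>  (B^-1/2 A B^-1/2) u = lam u, with v = B^-1/2 u,
# which reduces the generalized problem to a standard symmetric eigenproblem.
w, U = np.linalg.eigh(B)
B_ih = U @ np.diag(w ** -0.5) @ U.T    # B^{-1/2}
vals, vecs = np.linalg.eigh(B_ih @ A @ B_ih)
lam, v = vals[-1], B_ih @ vecs[:, -1]  # top generalized eigenpair
```

The recovered `v` maximizes the generalized Rayleigh quotient v'Av / v'Bv, which is exactly the quantity the non-convex optimization in the abstract targets; in HDLSS settings B becomes singular, which is why the regression reformulation is needed.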
- Title
- PRENATAL CANNABIS EXPOSURE AMONG PREGNANT PEOPLE IN TWO MICHIGAN SAMPLES
- Creator
- Vanderziel, Alyssa
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
This dissertation addresses three study aims. Aim 1 estimates the size of a suspected causal influence of prenatal cannabis exposure on a set of inter-related birth outcomes: birth size, gestational age, 5-minute Apgar score, and neonatal intensive care unit admission. Aim 2 investigates the degree to which morning sickness might be associated with higher odds of cannabis use. Aim 3 is a feasibility study assessing the recruitment and retention of pregnant people who regularly use cannabis, measured by willingness to participate in and complete the study survey; willingness to provide urine samples; the percentage of participants who are cannabis-only users; and the percentage of pregnant people retained for the three follow-up assessments. Aims 1 and 2 use data from the Michigan Archive for Research on Child Health, a prospective cohort of pregnant people recruited from 11 sites across Michigan between 2017 and 2021. The Aim 1 and Aim 2 analytic sample sizes are n = 584 and n = 826, respectively. Results of Aim 1 suggest a modest but statistically significant association between prenatal cannabis exposure and birth size z-score after model adjustment for potential confounding variables (β(Model 4) = -0.3; 95% CI: -0.5, -0.003). Results of Aim 2 suggest higher odds of prenatal cannabis use with increasing morning sickness severity (OR(Model 4) = 1.2; 95% CI: 1.1, 1.2). Sensitivity analyses indicate higher odds of using cannabis during the first trimester with increasing morning sickness severity (OR(Model 4) = 1.1; 95% CI: 1.01, 1.2). Similarly, findings indicate higher odds of cannabis use in the second or third trimester of pregnancy with increasing morning sickness severity (OR(Model 4) = 1.2; 95% CI: 1.1, 1.4). Sensitivity analyses also suggest an association between pre-pregnancy and prenatal cannabis use and morning sickness severity (β(Model 4) = 0.1; 95% CI: 0.003, 0.2 and β(Model 4) = 0.2; 95% CI: 0.1, 0.2, respectively).
For Aim 3, Cannabis Legalization in Michigan-Maternal & Infant Health, a prospective feasibility study, was designed to better understand the recruitment and retention of pregnant people who regularly use cannabis. The study recruited n = 77 baseline participants, of which n = 15 were prospectively followed and assessed during each trimester of pregnancy and once post-delivery. Of the participants recruited at baseline, 42% reported using cannabis during pregnancy, of whom 87% were cannabis-only users (i.e., no reported polysubstance use). All prospective participants were willing to provide urine samples; the concordance between self-reported cannabis use and urinalysis was 100% in the first and second trimesters and 92% in the third trimester of pregnancy. Study retention of the prospective sample was 80%; of the n = 15 first-trimester participants, n = 3 were lost to follow-up. Of the remaining 12 participants, 83% had complete data across all four timepoints. Findings from this dissertation reveal that pregnant people are willing to participate in a study that explores the health effects of prenatal cannabis use on birth outcomes and maternal health. Larger studies are warranted to assess the association between prenatal cannabis exposure and fetal growth and development, as well as the relationship between morning sickness and cannabis use. This dissertation also detected an association between prenatal cannabis exposure and lower birth size, suggesting that pregnant people, or people contemplating pregnancy, should be cautioned against using cannabis until more studies establish causality between prenatal cannabis use and neonatal health.
- Title
- SCATTERING AMPLITUDES FOR ZZ PRODUCTION AT THE LHC AND TOP-QUARK MASS EFFECTS
- Creator
- Agarwal, Bakul
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
With the Large Hadron Collider providing experimental data of unprecedented precision, theoretical predictions must improve correspondingly to keep up. Among the plethora of processes studied at the LHC, the production of a pair of vector bosons is of particular importance, so precise theoretical predictions for these processes are necessary. This thesis primarily discusses the calculation of ZZ production through gluon fusion at 2 loops with full top-quark mass dependence, as well as the technological improvements required to perform the calculation successfully. Also discussed briefly is the quark-initiated production of $\gamma\gamma + \text{jet}$ at 2 loops, where some of these technologies made it possible to overcome prior bottlenecks in the calculation of the helicity amplitudes. The 2-loop corrections for ZZ production through massless quarks had been known; in this work, the 2-loop corrections through the massive top quark are calculated. To achieve this, a new algorithm is developed to systematically construct linear combinations of integrals with a convergent parametric integral representation. This algorithm finds linear combinations of general integrals with numerators, dots, and dimension shifts, as well as integrals from subsectors. To express the amplitudes in terms of these integrals, Integration-By-Parts (IBP) reduction is performed making use of syzygies and finite-field-based methods. A new algorithm is employed to construct these syzygies using linear algebra. The IBP reductions for $gg\rightarrow ZZ$ are successfully performed using these techniques. Further improvements, including predetermining the structure of the coefficients in IBP reductions, are used to perform the reductions for $\gamma\gamma + \text{jet}$.
Multivariate partial fractioning is used to simplify the final expressions into more manageable forms and render them suitable for fast numerical evaluation. In the case of $gg\rightarrow ZZ$, due to the presence of structures beyond polylogarithms, sector decomposition is employed to numerically evaluate the finite master integrals. Evaluating the amplitudes, agreement is found with previously calculated expansions in the limits of large and small top mass. Improved results are presented for scattering at intermediate energies and/or non-central scattering angles. With this calculation, the last building block required for the full NLO cross-section for $gg\rightarrow ZZ$ is known.
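Multivariate partial fractioning generalizes the familiar univariate decomposition that splits a rational function into a sum of terms with simpler denominators. The univariate case can be checked with SymPy (a toy illustration, far simpler than the amplitude coefficients in question):

```python
from sympy import symbols, apart, simplify

x = symbols('x')

# a rational coefficient with several denominator factors
expr = (3 * x + 1) / ((x - 1) * (x + 2))

# partial-fraction decomposition: a sum of terms, each with a single
# denominator factor, which is cheaper to evaluate repeatedly
decomp = apart(expr, x)

# the decomposition is an exact rewriting of the original expression
assert simplify(decomp - expr) == 0
```

The payoff in amplitude computations is that each decomposed term has a smaller, structured denominator, so the expressions both shrink and evaluate numerically much faster.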
- Title
- THE EFFECT OF CLAW REMOVAL METHODS AND TEMPERATURE ON THE POST-RELEASE SURVIVAL AND CRITICAL THERMAL MAXIMUM OF STONE CRAB (MENIPPE MERCENARIA)
- Creator
- Walus, Alexandria Marie
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Florida stone crab (Menippe mercenaria) is an emerging commercial fishery in the Bahamas, with its main export to the United States of America. The fishery capitalizes on the oversized claws of the crab, which are harvested before the crab is returned to the sea, where it can potentially regrow its claws. While it is often assumed that the crab will regrow its claws and re-enter the fishery, only 13% of harvested crabs in the fishery have regrown claws, and an estimated 2-81% of crabs survive post-claw removal and release. In addition, the Caribbean region is considered one of the areas most vulnerable to climate change. Because most aquatic organisms cannot regulate their body temperature, they are directly affected by environmental temperature stress, which, combined with the stress of claw removal, may further decrease the capacity of the crab to survive warming temperatures. The purpose of my thesis was to: 1) determine a method of claw removal that maximizes survival for stone crab, 2) determine the effect of rapidly warming water temperatures on the reflex behavior of crabs post-release, and 3) determine the effect of claw removal on the critical thermal maximum (CTMax) of stone crab. In Chapter 1, I used a controlled laboratory experiment to compare a new autotomy-inducing technique to the typical method of claw removal. For the two claw removal methods, I compared survival and the start time for claw regeneration as a function of harvester experience and whether one or both claws were removed. Finally, the removed claws were inspected by independent observers to determine whether any differences between the two methods could be identified. Overall, I found that crabs with claws removed using the proposed induced-autotomy method had significantly higher survival than crabs with claws removed using the typical method, while crabs with claws removed by inexperienced harvesters had the lowest survival.
In Chapter 2, I conducted a series of laboratory experiments to measure nine reflex action mortality predictor reflexes of crabs and to determine the critical thermal maximum (CTMax) for stone crab that had one or two claws removed using induced autotomy. Of the nine reflexes used to develop an endpoint for calculating CTMax, three were determined to be suitable (equilibrium, mouth closure, and appendage turgor); CTMax was 37.6 °C and was independent of the number of claws removed. Overall, the tool required for the induced-autotomy method of claw removal is simple, easily purchased or constructed, and can readily be taught to recreational and commercial harvesters as a way to improve survival and thus the sustainability of this important fishery.
- Title
- PRETERM DELIVERY AND ITS ASSOCIATION WITH FALSE POSITIVE, AUDITORY BRAINSTEM RESPONSE (ABR)-BASED NEWBORN HEARING SCREENING FINDINGS
- Creator
- Rathore, Mandavni
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Newborn hearing screening failure can occur in infants without hearing loss; these false-positive (FP) results have been speculated to reflect neurodevelopmental disorder risk. Preterm birth (PTB), a known neurodevelopmental risk factor, has been associated with FP at initial screening. We aim to further characterize this association by stratifying PTB by gestational age and delivery circumstance. To do this, we analyzed birth certificate and Early Hearing Detection & Intervention data from the Michigan Dept. of Health & Human Services (2007–2015; n = 919,363). We restricted our analysis to singleton live births with available ABR-based hearing screening data and obstetric estimates of gestational age (n = 655,079). We then used logistic regression to evaluate the association of PTB defined by gestational age (extreme: < 28 weeks; moderate: 28–34 weeks; late: 34–36 weeks) and delivery circumstance (spontaneous, medically indicated) with FP, using full-term birth (≥ 37 weeks) as the referent group. Approximately 4% of infants had FP findings. All gestational age categories were associated with this phenomenon (extreme: OR = 4.2, 95% CI 3.7, 4.7; moderate: OR = 1.2, 95% CI 1.1, 1.3; late: OR = 1.6, 95% CI 1.5, 1.7). Spontaneous and medically indicated PTB were also associated with FP (OR = 1.7, 95% CI 1.6, 1.8 and OR = 1.4, 95% CI 1.3, 1.5, respectively). All results persisted after adjustment for socio-demographic and antepartum factors except for moderate PTB (OR = 1.0, 95% CI 0.9, 1.1), though sensitivity analyses suggested marked heterogeneity within this group. Further research is needed to investigate the factors underlying these differences and whether they correlate with neurodevelopmental disorder diagnoses.
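Odds ratios and confidence intervals of the kind reported here come from exponentiating logistic-regression coefficients and their Wald intervals. A minimal sketch with hypothetical numbers (not the study's actual estimates):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient.

    The OR is exp(beta); the CI exponentiates the Wald interval
    beta +/- z * SE computed on the log-odds scale.
    """
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# hypothetical coefficient for one PTB category (illustrative only)
or_, lo, hi = odds_ratio_ci(beta=0.53, se=0.03)
# or_ ~ 1.7 with CI roughly (1.6, 1.8)
```

Because the interval is symmetric only on the log-odds scale, the exponentiated CI is asymmetric around the OR, which is why published bounds like (1.5, 1.7) need not center exactly on the point estimate.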
- Title
- DESIGN AND ENGINEERING OF STARCH-BASED POLYMER MATERIALS AS SUBSTITUTES FOR PERSISTENT NON-BIODEGRADABLE PLASTICS
- Creator
- Kulkarni, Apoorva Chandrakant
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Replacing persistent, carbon-carbon-backbone hydrocarbon plastics with biobased and biodegradable plastics offers the value proposition of a reduced carbon footprint and an environmentally responsible end-of-life. This work focuses on the design and engineering of starch-based polymeric materials as substitutes for non-biodegradable plastics, and evaluates the biodegradability of these bioplastics in an aqueous environment. Starch foams are being used as replacements for petroleum-based foams in insulation and cushion protection applications. However, moisture sensitivity remains a problem, resulting in collapse of the cell structure and loss of mechanical integrity. The first section of the thesis focuses on engineering high-performance starch foams with enhanced moisture resistance using reactive extrusion processing technology. Chitosan, polyvinyl butyral (PVB), and sodium trimetaphosphate (STMP) were used with water as a plasticizer and blowing agent to make foams with the desired physico-mechanical properties. The resulting foams were hydrophobic, insoluble in water, and showed improved moisture resistance. The biodegradability of the foams was not impacted: they were completely biodegradable as established by ASTM/ISO standards. Crosslinking of starch with STMP increased the compressive strength of the foams threefold compared to control foams. Optimization of process parameters ensured an efficient, cost-effective route toward commercialization. In the second section, our group's chemically modified thermoplastic starch (MTPS), prepared by reactive extrusion technology, was explored in three different applications. First, MTPS was melt-blended with glycol-modified polyethylene terephthalate (PETG) using transesterification chemistry to synthesize an MTPS-g-PETG in situ graft copolymer with 33% grafting. Mechanical and thermal properties of the blend were evaluated and compared with neat PETG.
The addition of starch onto the PETG molecular backbone did not result in PETG biodegradability. This finding refutes many marketplace claims that non-biodegradable polymers become biodegradable through the addition of starch and similar additives. Second, the use of MTPS as a biobased and biodegradable nucleating agent and barrier-property enhancer in polylactide (PLA) was explored. Our findings establish that MTPS accelerates the rate of crystallization of PLA by up to 98 times at 100 °C, reducing the crystallization half-time from 20 minutes to less than 1 minute. The oxygen barrier properties of PLA were improved by 127% without detrimental impact on mechanical properties or biodegradability. The third application focuses on using MTPS as a carrier for iodine, a very effective and strong antimicrobial agent. The antimicrobial starch-iodine complex was manufactured in pellet form by extrusion processing. The new MTPS-iodine complex was incorporated in various proportions into commercial, fully biodegradable-compostable polyester films. The morphological, mechanical, and antibacterial properties of these films were evaluated and compared with current commercial antibacterial additives. The last section focuses on end-of-life evaluations for biobased and biodegradable plastics using a kinetics approach. The effect of temperature on the biodegradation of cellulose and poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) in an aqueous environment seeded with a biologically aggressive microbial inoculum was studied. A global equation was derived from the reparametrized Arrhenius equation and the kinetic rate law to estimate the time required for 90% removal of polymer carbon from the low-temperature ocean environment. The t90 (time required to remove 90% of the polymer carbon from the environment) for PHBV at 10 °C ranged from 6.2 to 6.9 years; the t90 of cellulose at 10 °C was 1.1 to 1.2 years.
ASTM/ISO standards for measuring and reporting ocean biodegradability are static and conducted at a single temperature (30 °C), whereas ocean temperatures vary from −1.8 °C to 33.4 °C. The kinetic analysis and model developed here provide a method to estimate the time for complete removal of biodegradable polymer carbon in ocean environments.
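The structure of such a temperature extrapolation can be sketched directly: for first-order loss of polymer carbon, t90 = ln(10)/k, and the Arrhenius equation rescales a rate constant measured at one temperature to another. The activation energy and reference t90 below are assumed, illustrative values, not numbers from the thesis:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def t90_at(T_c, Tref_c, t90_ref_days, Ea):
    """Rescale a measured t90 from Tref to T via the Arrhenius equation.

    Assumes first-order loss of polymer carbon, so t90 = ln(10) / k and
    t90 scales inversely with the rate constant k(T).
    """
    T, Tref = T_c + 273.15, Tref_c + 273.15
    ratio = math.exp(-Ea / R * (1.0 / T - 1.0 / Tref))  # k(T) / k(Tref)
    return t90_ref_days / ratio

# hypothetical: t90 of 200 days measured at 30 degC, Ea = 60 kJ/mol
t90_cold = t90_at(10, 30, 200, 60e3)  # roughly 1075 days at 10 degC
```

Even this toy calculation shows why a single 30 °C test temperature understates persistence: dropping 20 °C stretches the removal time by a factor of about five under these assumptions.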
- Title
- Improving the Efficiency of Residential Buildings in Rural Alaska : An Analysis of Existing Infrastructure and Its Importance in Creating Energy-Efficient Homes
- Creator
- Milan, Maria
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Rural communities in Alaska, many of which have a high Alaska Native population, currently face significant housing challenges. In a climate that can become extremely cold, houses generally lack energy efficiency. Residents pay high rates for both oil and electricity to heat and operate their homes, which are much smaller and more densely occupied than typical U.S. homes. In addition, as the winter season brings sub-zero temperatures, windows remain shut and mechanical or natural ventilation is limited, creating indoor air quality concerns. To date, there have been limited studies of housing in rural Alaskan communities that could quantify such challenges. This research focuses on a detailed study of the rural Alaskan community of Unalakleet. Located on the western shore of Alaska, this coastal community of approximately 765 people faces many housing challenges similar to those of other rural Alaskan communities. Through collaboration with the housing authority in Unalakleet, this research conducted 27 home energy assessments and 22 resident interviews in the summer of 2021. The purpose of the energy assessments was to collect information on typical building features and to identify housing challenges. The interviews, generally completed for homes that also had an energy assessment, were used to better understand residents' perspectives on energy and housing challenges beyond what was observed in the assessments. Blower door tests suggest that homes are usually small and tight, with leakiness around areas such as the windows, where mold was frequently observed. Short-term indoor air quality monitoring suggests that some homes, especially those smaller than 46 m², had high CO2 concentrations relative to others. Some homes had significant mold growth, and others had many areas of damage.
Still, interviews with community members suggest that they were grateful for their housing and the ability to live in Unalakleet. The overall purpose of this research is to provide evidence quantifying the typical housing characteristics present in rural Alaskan communities, as well as results that motivate and support opportunities for new, more efficient housing. The introduction discusses major housing challenges, from high energy bills to the history of inefficient infrastructure in rural Alaska. Physical characteristics of assessed homes, followed by indoor air quality and air flow, are discussed in Chapters 2 and 3, respectively. Finally, Chapter 4 uses data collected from the 27 housing assessments to create a building energy model in which energy usage in existing housing is modeled to represent a 'typical' rural Alaskan home. The results of this model show where the largest improvements in efficiency can be made, especially by adding higher R-value insulation. The conclusion provides a brief overview of research contributions, limitations, and future work.
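The leverage of higher R-value insulation can be sketched with the steady-state conduction relation Q = A·ΔT/R. The wall areas, R-values, and temperature difference below are hypothetical, not inputs from the Unalakleet model:

```python
def heat_loss_btu_per_hr(area_ft2, r_value, delta_t_f):
    """Steady-state conductive heat loss through one assembly: Q = A * dT / R.

    R-value is in ft^2 * degF * hr / BTU, so Q comes out in BTU/hr.
    """
    return area_ft2 * delta_t_f / r_value

# hypothetical wall: 1000 ft^2, 70 degF indoor-outdoor difference
q_old = heat_loss_btu_per_hr(1000, 11, 70)   # R-11 walls -> ~6364 BTU/hr
q_new = heat_loss_btu_per_hr(1000, 30, 70)   # R-30 walls -> ~2333 BTU/hr
```

Because heat loss scales as 1/R, nearly tripling the R-value cuts the conductive loss through that wall by almost two-thirds, which is why insulation upgrades dominate the modeled savings.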
- Title
- Optimizing and Improving the Fidelity of Reactive, Polarizable Molecular Dynamics Simulations on Modern High Performance Computing Architectures
- Creator
- O'Hearn, Kurt A.
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Reactive, polarizable molecular dynamics simulations are a crucial tool for the high-fidelity study of large systems with chemical reactions. In support of this, several approaches have been employed with varying degrees of computational cost and physical accuracy. One of the more successful approaches in recent years, the reactive force field (ReaxFF) model, was designed to fill the gap between traditional classical models and quantum mechanical models by incorporating a dynamic bond order potential term. When coupling ReaxFF with dynamic global charge models for electrostatics, special considerations are necessary for obtaining highly performant implementations, especially on modern high-performance computing architectures. In this work, we detail the performance optimization of the PuReMD (PuReMD Reactive Molecular Dynamics) software package, an open-source, GPLv3-licensed implementation of ReaxFF coupled with dynamic charge models. We begin by exploring the tuning of the iterative Krylov linear solvers underpinning the global charge models in a shared-memory parallel context using OpenMP, with the explicit goal of minimizing the mean combined preconditioner and solver time. We found that with appropriate solver tuning, significant speedups and scalability improvements were observed. Following these successes, we extend these approaches to the solvers in the distributed-memory MPI implementation of PuReMD, as well as broaden the scope of optimization to other portions of the ReaxFF potential such as the bond order computations. Here again we find that sizable performance gains were achieved for large simulations numbering in the hundreds of thousands of atoms. With these performance improvements in hand, we next change focus to another important use of PuReMD -- the development of ReaxFF force fields for new materials.
The high fidelity inherent in ReaxFF simulations for different chemistries oftentimes comes at the expense of a steep learning curve for parameter optimization, due in part to complexities in the high-dimensional parameter space and due in part to the necessity of deep domain knowledge of how to adequately control the ReaxFF functional forms. To diagnose and combat these issues, a study was undertaken to optimize parameters for Li-O systems using the OGOLEM genetic algorithms framework coupled with a modified shared-memory version of PuReMD. We found that with careful training set design, sufficient optimization control with tuned genetic algorithms, and improved polarizability through enhanced charge model use, higher accuracy was achieved in simulations involving ductile fracture behavior, a phenomenon heretofore difficult to model correctly. Finally, we return to performance optimization for the GPU-accelerated distributed-memory PuReMD codebase. Modern supercomputers have recently achieved exascale levels of peak arithmetic rates due in large part to the design decision to incorporate massive numbers of GPUs. In order to take advantage of such computing systems, the MPI+CUDA version of PuReMD was re-designed and benchmarked on modern NVIDIA Tesla GPUs. Performance was on par with or exceeded that of LAMMPS Kokkos, a ReaxFF implementation developed at Sandia National Laboratories, with PuReMD typically out-performing LAMMPS Kokkos at larger scales.
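The dynamic charge models mentioned above reduce, at each step, to solving a large symmetric linear system with a preconditioned Krylov method. A minimal sketch of that idea (a Jacobi-preconditioned conjugate gradient on a small dense stand-in matrix, not PuReMD's actual solver or data structures):

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for a symmetric positive-definite A."""
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: inverse of the diagonal
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # update search direction
        rz = rz_new
    return x

# Small SPD test system standing in for a charge-equilibration matrix.
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)         # shifted to guarantee positive definiteness
b = rng.standard_normal(50)
x = pcg(A, b)
print(np.linalg.norm(A @ x - b))      # residual norm, should be tiny
```

In production codes the matrix is sparse and distributed, and the tuning discussed above concerns the choice and refresh frequency of the preconditioner together with the solver tolerance.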
- Title
- VISIONING THE AGRICULTURE BLOCKCHAIN : THE ROLE AND RISE OF BLOCKCHAIN IN THE COMMERCIAL POULTRY INDUSTRY
- Creator
- Fennell, Chris
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Blockchain is an emerging technology that is being explored by technologists and industry leaders as a way to revolutionize the agriculture supply chain. The problem is that human and ecological insights are needed to understand the complexities of how blockchain could fulfill these visions. In this work, I assert how the blockchain's promising vision of traceability, immutability and distributed properties presents advancements and challenges to rural farming. This work wrestles with the more subtle ways the blockchain technology would be integrated into the existing infrastructure. Through interviews and participatory design workshops, I talked with an expansive set of stakeholders including Amish farmers, contract growers, senior leadership and field supervisors. This research illuminates that commercial poultry farming is such a complex and diffuse system that any overhaul of its core infrastructure will be difficult to ``roll back'' once blockchain is ``rolled out.'' Through an HCI and sociotechnical system perspective, drawing particular insights from Science and Technology Studies theories of infrastructure and breakdown, this dissertation raises three main concerns. First, this dissertation uncovers the dominant narratives on the farm around revision and ``roll back'' of blockchain, connecting to theories of version control from computer science. Second, this work uncovers that a core concern of the poultry supply chain is death and I reveal the sociotechnical and material implications for the integration of blockchain. Finally, this dissertation discusses the meaning of ``security'' for the poultry supply chain in which biosecurity is prioritized over cybersecurity and how blockchain impacts these concerns. Together these findings point to significant implications for designers of blockchain infrastructure and how rural workers will integrate the technology into the supply chain.
- Title
- EXPLORING STUDENTS’ UNDERSTANDING OF INTERACTIONS AND ENERGY ACROSS CHEMISTRY AND BIOLOGY
- Creator
- Noyes, Keenan Chun Hong Lee
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
One of the goals of science education is to help students make sense of the world around them. To that end, it is critical that students understand the central ideas in each discipline, such as, in chemistry, energy and interactions. These ideas are of particular importance because they are directly related to one another and are relevant across other science disciplines. Unfortunately, researchers have found that students often struggle to develop a deep understanding of these ideas. To uncover better ways to support students’ learning, I explored how students understand interactions and energy in both chemistry and biology. In this dissertation, I focused on London dispersion forces (LDFs), a type of intermolecular force (IMF) which occurs between all atoms and molecules. Specifically, I used the lens of causal mechanistic reasoning to think about students’ knowledge: that is, how students connect the properties and behaviors of the underlying entities to the overall phenomenon. If we can help students develop this type of understanding, they may be able to make powerful predictions about new, unfamiliar phenomena in which IMFs play an important role. Additionally, I explored how students thought about the energy changes which result from the formation of LDFs. Lastly, I designed assessments to elicit and characterize explanations of protein-ligand binding, a biological phenomenon governed by IMFs. To explore these questions, I used a mix of qualitative and quantitative techniques. I designed tasks to elicit causal mechanistic responses from students, using students’ responses to refine the task design. I also developed coding schemes to characterize students’ engagement in causal mechanistic reasoning. Furthermore, I developed and used automated resources to analyze thousands of responses in a matter of minutes.
In these studies, I focused primarily on undergraduate students enrolled in Chemistry, Life, the Universe, and Everything (CLUE), a transformed, core-idea centered general chemistry curriculum. From these studies, I found that the majority of CLUE students could leverage electrostatic ideas to explain LDFs, and that a meaningful proportion of those students could provide a full causal mechanistic account. This highlights the importance of emphasizing these interactions, and the mechanism by which they form, throughout the general chemistry course sequence. Additionally, students who used causal mechanistic reasoning to discuss LDFs were more likely to use that same reasoning in the context of the associated changes in potential energy. However, this relationship was weaker among those providing a partially causal mechanistic response. This suggests that more work needs to be done to find ways of supporting students to connect the ideas of interactions and energy. Additionally, in this thesis, I describe the process by which I used iterative design to develop a task eliciting causal mechanistic explanations of a biological phenomenon. In future work, these materials can be used to explore how broader groups of students engage with this task in an effort to foster interdisciplinary coherence.
- Title
- Creating Opportunities for Learning for English Learners with Disabilities Through Quality Individualized Education Programs
- Creator
- Paul, Jennifer Maria
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Understanding the needs of students who are English learners (ELs) and are also students with disabilities has become an area of policy and research in recent years. Uncovering the needs of this student group requires a closer examination and understanding of the Individualized Education Program (IEP). The IEP is a foundational process and document used by all educators of students identified as students with disabilities to help inform those students’ education. The IEP defines everything from a student’s current abilities to their goals and even identifies educators’ plans for instruction and support. The IEP is the ‘road map’ for students’ classroom experiences. But what should be included in the ‘road map’ for students with disabilities who are also ELs? Answering this question is of great importance, as there is currently no guidance from the Michigan Department of Education for Michigan educators on this topic. Students are likely to bear the brunt of an absence of this magnitude, in the form of a potential lack of learning opportunities specific to their needs as English learners. Using the opportunity to learn (OTL) framework developed by Kurtz and Elliott (2011), my research investigates how educators can improve a student’s OTL within the IEPs they conduct and write. My dissertation explores the barriers educators experience as they develop IEPs for this group of students. It will also consider when educators should include ELs’ specific needs within the IEP. The study will also recommend that educators use a tool created through my research to improve OTL for ELs with disabilities within the IEPs on which they work.
- Title
- THE ORIGINATION AND IMPLEMENTATION OF THE NATIONAL WETLANDS POLICY OF UGANDA : ENVIRONMENT, KNOWLEDGE, AND POWER FROM THE LATE NINETEENTH CENTURY TO PRESENT
- Creator
- Doyle-Raso, John
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
In the 1980s, following widespread environmental and intellectual changes associated with “swamp reclamation” that in Uganda had started in the early twentieth century, proponents of the emerging science of “wetland conservation” sought to influence the practices and thinking of people across the country. To do so, they created a national wetlands policy based on decentralized “community-based” projects. Yet, farmers’ and investors’ engagements with reclamation have continued. Meanwhile, the Ugandan wetlands policy became internationally influential for its groundbreaking approach to interdisciplinary questions about governance, emphasizing economic analyses based on concepts such as “ecosystem services” and “Environmental Economic Valuation.” Ugandan wetland conservationists have had more influence abroad than domestically, as in Uganda neoliberalization and recentralization have limited the power of the community-based groups who have worked through the national policy. Using a range of sources including but not limited to archives and interviews with conservationists, this dissertation historicizes the Ugandan wetlands conservation policy. It comprises two parts addressing overlapping time periods. The first three chapters consider the origination of this policy by analyzing environmental and intellectual changes in southeastern and southwestern Uganda, leading to the creation in the late-twentieth century of environmental regulations. The latter three chapters examine how conservationists have tried implementing the policy in rural and urban places, and in relation to the national emblem of Uganda – the Grey Crowned Crane. They have focused their efforts on community-based projects outside Protected Areas, promoting indigenous knowledges and practices to obtain economic benefits from wetlands. This approach was an early manifestation of connected trends in international developmentalist networks.
Furthermore, the limitations on its implementation have become pivotal in the global histories of neoliberalization, decentralization, and recentralization. Historicizing Ugandan wetland conservationism contributes to four scholarly literatures. 1) Analyzing community-based projects outside “Protected Areas” advances the historiographies of conservation and watershed management in Africa by considering the significance of neoliberalization, decentralization, and recentralization beyond extraordinary legal cases. 2) Examining intellectual changes in this history – including an emphasis on community-based projects, use of the concept of ecosystem services, and the promotion of indigenous knowledges and sciences – reveals connections between changes in environmental science and global trends in developmentalism. 3) Focusing on these changes in Uganda builds on analyses of environmental management in political power there by identifying the importance of an underexamined resource in entrenched land conflicts, and by uncovering early institutional bases of recentralization. 4) Because Ugandan wetland conservationists were global leaders in policy creation, citizen science, and other changes in scientific thinking, researching their work reveals how African scientists have navigated tensions between their local, national, and international interlocutors to become internationally influential. Studying the history of Ugandan wetland conservationism reveals how different people’s engagements with changes in environmental thinking have reshaped environments and livelihoods, as well as influenced international scientific networks.
- Title
- USE OF LAGRANGIAN METHODS TO SIMULATE HEAVY STORM-INDUCED RIVER PLUME DYNAMICS AND RECREATIONAL WATER QUALITY IMPACTS IN THE NEARSHORE REGION OF SOUTHWESTERN LAKE MICHIGAN
- Creator
- Weiskerger, Chelsea
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
The Great Lakes are the primary source of drinking water for nearly 30 million people in the region. During storm events, runoff from upstream watersheds and (combined) sewer overflows delivers pathogens to the Lakes. The pathogens are then transported to beaches and water intakes by the lake circulation, posing risks to human health. Fecal indicator organisms such as Escherichia coli are used to track pollution levels and to take proactive measures to manage coastal resources and to safeguard public health by closing beaches to the public, issuing swimming advisories, etc. Predictive modeling of coastal water quality continues to be an attractive approach to generate water quality forecasts and to gain insights into key processes. Although progress has been made in understanding and quantifying the impacts of tributary loading and river plumes on microbial pollution at beaches, the impacts of extreme storm events on coastal water quality are not well understood. As the frequency and intensity of storm events increase, the pollution footprint of extreme storm events has not been quantified in a way that can be used to inform policy. Complex nearshore features, including irregular coastlines and coastal structures, call for high-resolution modeling that is computationally demanding. While traditional Eulerian approaches to plume modeling have been previously used, comparisons with available observed plume data indicated that Lagrangian particle tracking improves prediction of plume dimensions (and hence risks) in southwestern Lake Michigan. Therefore, coupled hydrodynamic and reactive particle tracking models were developed and tested to simulate the complex dynamics of multiple river plumes induced by extreme storm events in the Chicago area in southwestern Lake Michigan. The present-day Chicago River normally flows to the Mississippi River and discharges into Lake Michigan only during “backflow” events triggered by these storms.
Simulations of extreme storm-induced river plumes during years 2008, 2010, 2011, 2013 and 2017 were reported, and models were tested using available data on currents, water temperatures, concentrations of indicator bacteria (E. coli) and the spatial extent of turbidity plumes derived from MODIS Terra satellite imagery. Results suggest that plumes associated with the extreme storms persist along the Chicago shoreline for up to 24 days after the commencement of backflow release and that plume areas of influence range from 7.9 to 291 km² in the nearshore. Plume spatiotemporal dynamics were largely related to the volume of water released via backflow events and the duration of the backflow releases. Empirical relations were proposed to allow beach and stormwater managers to predict plume spatiotemporal dynamics in real time. Model results from a Lagrangian E. coli fate and transport model were compared against monitoring data collected at 16-18 beaches during and after backflow events in 2010 and 2011. Results indicate that all Chicago Park District beaches are susceptible to E. coli concentrations that exceed USEPA thresholds for safe recreation after extreme storms. Therefore, the current approach to beach management, which involves closing all beaches during and immediately after backflow events, is likely prudent. However, results also suggest that beaches are probably being reopened prematurely after storm events, as beaches may be at risk for degraded water quality for multiple days post-backflow event. To address data gaps, we recommend that future research focus on the collection of additional in situ hydrometeorological and water quality data during and after extreme storms and backflow events. These data may be collected using unmanned aerial vehicles or autonomous sensor systems.
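The Lagrangian idea behind the plume model can be sketched in a few lines: passive particles are advected by a current field with a random-walk model of turbulent diffusion, each carrying an E. coli load that decays at a first-order rate. All velocities, rates, and particle counts below are hypothetical placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def advect(positions, velocity, dt, diffusivity):
    """One explicit step: deterministic drift plus random-walk turbulent diffusion."""
    drift = velocity * dt
    noise = rng.normal(0.0, np.sqrt(2 * diffusivity * dt), positions.shape)
    return positions + drift + noise

n = 1000
pos = np.zeros((n, 2))        # particles released at the river mouth (origin), km
conc = np.ones(n)             # relative E. coli load carried by each particle
k = 1.0                       # first-order die-off rate, 1/day (hypothetical)
dt = 0.1                      # time step, days
u = np.array([0.5, 0.1])      # mean alongshore current, km/day (hypothetical)

for _ in range(100):          # simulate 10 days of transport and die-off
    pos = advect(pos, u, dt, diffusivity=0.05)
    conc *= np.exp(-k * dt)   # exact first-order decay over one step

print(pos.mean(axis=0), conc.mean())
```

In the real coupled system the velocity field comes from the hydrodynamic model at each particle's location, and exceedance of a recreation threshold is estimated from particle density near each beach.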
- Title
- Design and Analysis of Sculpted Rotor Interior Permanent Magnet Machines
- Creator
- Hayslett, Steven Lee
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Design of interior permanent magnet electrical machines is complex. Interior permanent magnet machines offer a good balance of cost, efficiency, and torque/power density. Maximum torque and power production of an interior permanent magnet machine is achieved through balancing design choices related to the permanent magnet and salient features. The embedded magnet within the salient structure of the rotor lamination results in an increase in harmonic content. In addition, interaction of the armature, control angle, and rotor reluctance structure creates additional harmonic content. These harmonics result in increased torque ripple, radial forces, losses, and other unwanted phenomena. Further improvements in torque and power density, and techniques to minimize harmonics, are necessary. A typical interior permanent magnet machine design operating at the maximum torque-per-amp condition produces neither the maximum magnet torque nor the maximum saliency torque, but the best combination of the two. The use of rotor surface features to align the magnet and reluctance axes allows for improvement of torque and power density. Reduction of flux and torque harmonics is also possible through careful design of rotor sculpt features included at or near the surface of the rotor. Finite element models provide high-fidelity, accurate results for machine performance but do not give insight into the relationship between design parameters and performance. Winding factor models describe the machine with a set of Fourier series equations, providing access to the harmonic information of both parameters and performance. Direct knowledge of this information provides better insight, a clear understanding of interactions, and the ability to develop a more efficient design process.
A new analytical winding function model of the single-V IPM machine is introduced that accounts for the sculpted rotor, and its use in the machine design process is demonstrated. Rotor feature trends are established and utilized to increase design intuition and reduce dependency upon lengthy design-of-experiments optimization processes. The shape and placement of the rotor features, derived from the optimization process, show the improvement in average torque and torque ripple of the IPM machine.
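The Fourier-series view of torque that winding factor models provide can be illustrated on a synthetic waveform: decompose torque over one electrical cycle and read off the mean torque and the ripple harmonics. The harmonic orders and amplitudes below are invented for illustration, not results from the dissertation:

```python
import numpy as np

# Synthetic torque waveform over one electrical cycle: a mean value plus
# 6th- and 12th-order ripple, common orders for a three-phase machine.
theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
torque = 100.0 + 5.0 * np.cos(6 * theta) + 2.0 * np.cos(12 * theta)

spectrum = np.fft.rfft(torque) / len(theta)   # one-sided DFT, normalized
mean_torque = spectrum[0].real                # DC term = average torque
amplitudes = 2.0 * np.abs(spectrum[1:])       # amplitude of each harmonic order

ripple_pp = torque.max() - torque.min()       # peak-to-peak torque ripple
print(f"mean = {mean_torque:.1f}, 6th = {amplitudes[5]:.1f}, 12th = {amplitudes[11]:.1f}")
print(f"peak-to-peak ripple = {ripple_pp:.2f}")
```

A sculpt feature that suppresses a given flux harmonic shows up directly as a reduced amplitude at the corresponding torque-ripple order, which is the kind of parameter-to-performance link the analytical model exposes.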
- Title
- “BLACK, SET, SPIKE : ” AN ANALYSIS OF THE RACIAL EXPERIENCES OF BLACK FEMALE VOLLEYBALL PLAYERS IN EUROPE
- Creator
- Fry, Jen
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Sports and geography each profoundly impact the lived and professional experiences of Black female athletes. These experiences also significantly shape their personal and professional identities, as both deal with the occupation of space and the way people move and interact in geographic spaces. Little attention has been paid by the academic and athletic communities to the lived experiences of professional athletes who play abroad. Currently, minimal research has been conducted on the experiences of Black female volleyball players (BFVPs) who have played in Europe and how race, gender identity, space, and sports affected their lived experiences abroad based on their identities. This dissertation utilized qualitative methods to analyze the racial experiences of Black women who have played professional volleyball in Europe and whose experiences have not been documented within studies of geography—or, more specifically, within perspectives of Black feminist thought, Black geographies, and theory of racial space. The goal of this dissertation was twofold: (a) explore how intersecting racial and gendered identities, place, and space influenced the racism encountered by U.S. BFVPs in Europe; and (b) provide a source of information for future Black female college athletes who want to play professionally but do not know what they do not know. By developing a body of literature within sports geography on the overlooked and unresearched experiences of professional Black female athletes (BFAs), I contributed to the ever-increasing body of literature on BFAs across various disciplines. Some of the discoveries from my research were that BFVPs experienced racism in ways similar to what they experienced within the United States, such as being oversexualized, being expected to play up racially stereotypical views of Black women, and having their hair touched without their consent.
They also experienced racism in wildly different ways, such as being spit on, teammates withholding English skills, and accusations of prostitution. To conduct my research, I sent a brief 15-question demographic survey to over 100 current and former BFVPs and used these data to narrow down participants. The response rate was more than 50%, with 60 women filling out the survey; of that population, 51 indicated interest in being interviewed and nine indicated no interest. Based on criteria of the number of years played, countries played in, and teams played for, I narrowed the sample to 18 participants willing to participate in qualitative interviews. The theoretical frameworks of Black feminist thought, Black geographies, and theory of racial space were used to understand the experiences of the participants and helped me create a new conceptual framework called critical Black feminist sports geographies.
- Title
- GOOD AT THIS BUT NOT AT THAT : MULTIDIMENSIONAL SELF-EVALUATIONS AND DIMENSIONAL COMPARISONS AT WORK
- Creator
- Mitchell, Rebecca
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Social comparison theory (Festinger, 1954) underlies findings and theory in many organizational behavior literatures, such as identity, justice, and compensation. Yet, the field has neglected to incorporate comparison theories introduced in other psychology literatures. Dimensional comparison theory (DCT; Möller & Marsh, 2013) argues that, in addition to external comparisons to a referent other, individuals also make internal comparisons across different dimensions of the self, defined within a multidimensional self-evaluation. This dissertation argues that DCT is related to, but distinct from, existing concepts within organizational behavior and is thus critical to integrate into our understanding of work. In three studies (a vignette study, an experiment, and a field study), I propose examining the effect that dimensional comparisons along these abilities have on individuals’ psychological investment as well as the resulting achievement and satisfaction in these dimensions. Further, I build upon existing DCT research in educational psychology by explicitly hypothesizing the interactive effect of dimensional and social comparisons, considering the role that the importance of the dimension to the group and the individual plays in these relationships, and examining dimensional comparisons using polynomial regression techniques.
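The polynomial regression technique mentioned above fits a full second-order surface over two self-evaluation dimensions rather than collapsing them into a difference score. A minimal sketch on simulated data (variable names and coefficients are illustrative, not from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: self-evaluations on two ability dimensions (x, y) and an
# outcome such as satisfaction (z), generated from a known quadratic surface.
n = 500
x = rng.normal(size=n)
y = rng.normal(size=n)
z = (1.0 + 0.8 * x - 0.5 * y
     + 0.3 * x**2 - 0.4 * x * y + 0.1 * y**2
     + rng.normal(0.0, 0.1, n))

# Design matrix for the full second-order polynomial used in response-surface
# analyses of congruence: intercept, x, y, x^2, x*y, y^2.
X = np.column_stack([np.ones(n), x, y, x**2, x * y, y**2])
coef, *_ = np.linalg.lstsq(X, z, rcond=None)
print(np.round(coef, 2))  # should recover roughly [1.0, 0.8, -0.5, 0.3, -0.4, 0.1]
```

Keeping all six coefficients lets the researcher test how the outcome behaves along the line of congruence (x = y) and the line of incongruence (x = -y), which is why this approach is preferred over simple difference scores.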
- Title
- TENSOR LEARNING WITH STRUCTURE, GEOMETRY AND MULTI-MODALITY
- Creator
- Sofuoglu, Seyyid Emre
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
With the advances in sensing and data acquisition technology, it is now possible to collect data from different modalities and sources simultaneously. Most of these data are multi-dimensional in nature and can be represented by multiway arrays known as tensors. For instance, a color image is a third-order tensor defined by two indices for spatial variables and one index for color mode. Some other examples include color video, medical imaging such as EEG and fMRI, spatiotemporal data encountered in urban traffic monitoring, etc. In the past two decades, tensors have become ubiquitous in signal processing, statistics and computer science. Traditional unsupervised and supervised learning methods developed for one-dimensional signals do not translate well to higher order data structures as they get computationally prohibitive with increasing dimensionalities. Vectorizing high dimensional inputs creates problems in nearly all machine learning tasks due to exponentially increasing dimensionality, distortion of data structure and the difficulty of obtaining a sufficiently large training sample size. In this thesis, we develop tensor-based approaches to various machine learning tasks. Existing tensor-based unsupervised and supervised learning algorithms extend many well-known algorithms, e.g. 2-D component analysis, support vector machines and linear discriminant analysis, with better performance and lower computational and memory costs. Most of these methods rely on Tucker decomposition, which has exponential storage complexity requirements; CANDECOMP-PARAFAC (CP) based methods, which might not have a solution; or Tensor Train (TT) based solutions, which suffer from exponentially increasing ranks. Many tensor-based methods have quadratic (w.r.t. the size of data) or higher computational complexity, and similarly high memory complexity. Moreover, existing tensor-based methods are not always designed with the particular structure of the data in mind.
Many of the existing methods use purely algebraic measures as their objective, which might not capture the local relations within the data. Thus, there is a need to develop new models with better computational and memory efficiency, designed with the particular structure of the data and problem in mind. Finally, as tensors represent the data more faithfully to the original structure than vectorization does, they also allow coupling of heterogeneous data sources where the underlying physical relationship is known. Still, most of the current work on coupled tensor decompositions does not explore supervised problems.
To address the issues around the computational and storage complexity of tensor-based machine learning, in Chapter 2 we propose a new tensor train decomposition structure, a hybrid between the Tucker and Tensor Train decompositions. The proposed structure is used to implement Tensor Train based supervised and unsupervised learning frameworks: linear discriminant analysis (LDA) and graph-regularized subspace learning. The algorithm is designed to solve extremal eigenvalue-eigenvector pair computation problems and can be generalized to many other methods. The supervised framework, Tensor Train Discriminant Analysis (TTDA), is evaluated on a classification task with respect to classification accuracy and training time at varying storage complexities on four different datasets. The unsupervised approach, Graph Regularized TT, is evaluated on a clustering task with respect to clustering quality and training time at various storage complexities. Both frameworks are compared to discriminant analysis algorithms with similar objectives based on the Tucker and TT decompositions.
In Chapter 3, we present an unsupervised anomaly detection algorithm for spatiotemporal tensor data.
The algorithm models the anomaly detection problem as a low-rank plus sparse tensor decomposition, where the normal activity is assumed to be low-rank and the anomalies are assumed to be sparse and temporally continuous. We present an extension of this algorithm that utilizes a graph regularization term in the objective function to preserve the underlying geometry of the original data. Finally, we propose a computationally efficient implementation of this framework by approximating the nuclear norm using graph total variation minimization. The proposed approach is evaluated both on simulated data, with varying levels of anomaly strength, anomaly length, and number of missing entries in the observed tensor, and on urban traffic data.
In Chapter 4, we propose a geometric tensor learning framework using product graph structures for the tensor completion problem. Instead of purely algebraic measures such as rank, we use graph smoothness constraints that utilize geometric or topological relations within the data. We prove the equivalence of a Cartesian graph structure to a TT-based graph structure under certain conditions, and we show empirically that introducing the relaxations these conditions entail does not deteriorate the recovery performance. We also outline a fully geometric learning method on product graphs for data completion.
In Chapter 5, we introduce a supervised learning method for heterogeneous data sources such as simultaneous EEG and fMRI. The proposed two-stage method first extracts features taking the coupling across modalities into account and then introduces kernelized support tensor machines for classification. We illustrate the advantages of the proposed method on simulated and real classification tasks with a small number of high-dimensional training samples.
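The low-rank-plus-sparse idea summarized above can be sketched in the matrix (second-order) case; the thesis works with tensors, and the rank-1 background, spike values, threshold, and SVD-based recovery below are all illustrative assumptions, not the thesis's algorithm:

```python
import numpy as np

# "Normal activity": a rank-1 background, L = a b^T (illustrative).
a = 0.1 * np.arange(1, 21)        # 20 "sensors"
b = np.ones(30)                   # 30 time steps
L = np.outer(a, b)

# "Anomalies": a few large, sparse spikes (illustrative positions/values).
S = np.zeros_like(L)
S[3, 7] = 6.0
S[15, 22] = 5.0

X = L + S                         # observed data

# Recover the low-rank part with a rank-1 truncated SVD of X, then flag
# entries whose residual is large as anomalies.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
L_hat = s[0] * np.outer(U[:, 0], Vt[0, :])
residual = np.abs(X - L_hat)
anomalies = np.argwhere(residual > 2.0)
print(anomalies)                  # the two planted spike locations
```

A convex formulation would replace the hard rank-1 truncation with nuclear-norm minimization, which is the term the chapter then approximates via graph total variation.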
- Title
- Greatest common divisors near S-units, applications, and conjectures on arithmetic abelian surfaces
- Creator
- Xiao, Zheng
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
We bound the greatest common divisor of two coprime multivariable polynomials evaluated at algebraic numbers, generalizing the work of Levin by thickening the finitely generated group to allow non-finitely generated elements. Going towards conjectured inequalities of Silverman and Vojta, an immediate corollary shows a similar inequality without a normal crossings assumption. The proofs rely on Schmidt's Subspace Theorem.
As an application, we prove results on the greatest common divisors of terms from two general linear recurrence sequences, extending the results of Levin, who considered the case where the linear recurrences are simple. The exceptional set is not as good as the finitely many linear relations of the simple case, but it is contained within a logarithmic region, improving recent results of Grieve and Wang. An example shows that the logarithmic region is necessary.
On abelian surfaces arising as Jacobians of hyperelliptic curves, we establish a connection between the GCD conjecture and the conjecture on the arithmetic discriminant. In particular situations, it predicts an inequality stronger than Vojta's theorem on the arithmetic discriminant. We give some examples of extreme values of the arithmetic discriminant.
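The simple-recurrence setting that the abstract says is being extended can be computed directly. A minimal numeric sketch, assuming the classical example sequences 2^n − 1 and 3^n − 1 (illustrative choices, not drawn from the thesis):

```python
from math import gcd

# u_n = 2**n - 1 and v_n = 3**n - 1 are terms of two simple linear
# recurrences (u_n = 3*u_{n-1} - 2*u_{n-2}, v_n = 4*v_{n-1} - 3*v_{n-2}).
# Bounds of the kind discussed above say gcd(u_n, v_n) stays tiny next
# to the exponentially growing terms themselves.
for n in range(1, 11):
    u, v = 2**n - 1, 3**n - 1
    print(f"n={n:2d}  gcd={gcd(u, v):3d}  u={u:5d}  v={v:6d}")
```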