Search results
(21 - 40 of 79)
- Title
- EMERGENT COORDINATION : ADAPTATION, OPEN-ENDEDNESS, AND COLLECTIVE INTELLIGENCE
- Creator
- Bao, Honglin
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- Agent-based modeling is a widely used computational method for studying the micro-macro bridge issue by simulating microscopic interactions and observing macroscopic emergence. This thesis begins with the fundamental methodology of agent-based models: how agents are represented, how agents interact, and how the agent population is structured. Two vital topics, the evolution of cooperation and opinion dynamics, are used to illustrate methodological innovation. For the first topic, we study equilibrium selection in a coordination game in multi-agent systems. In particular, we focus on the characteristics of agents (supervisors and subordinates versus representative agents), the interactions of agents (reinforcement learning in games with fixed versus supervision-adaptive learning rates, and time-varying versus supervision-guided exploration rates), the network of agents (single-layer versus multi-layer networks), and their impact on the emergent behaviors. Regarding the second topic, we examine how opinions evolve and spread in a cognitively heterogeneous agent population with sparse interactions, and how the opinion dynamics co-evolve with the structural change of an open-ended society. We then discuss the rich insights into collective intelligence offered by the two proposed models, viewed through interaction-based adaptation and open-ended network structure. Finally, we link collective emergent intelligence to diverse applications in computing and other scientific fields in a multidisciplinary manner.
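The supervision-guided exploration described above is easiest to see against a tabular reinforcement-learning baseline. Below is a minimal sketch, not code from the thesis: a standard Q-learning update plus an epsilon-greedy policy whose exploration rate decays over time; all names and constants are illustrative.

```python
import random

def q_update(q, state, action, reward, next_state, alpha, gamma=0.95):
    """One tabular Q-learning step with learning rate alpha."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

def choose_action(q, state, step, eps0=1.0, decay=0.995):
    """Epsilon-greedy choice with a time-varying exploration rate; a
    supervision-guided variant would set eps from a supervisor's signal
    instead of a fixed decay schedule."""
    eps = eps0 * decay ** step          # exploration shrinks as learning proceeds
    if random.random() < eps:
        return random.choice(list(q[state]))
    return max(q[state], key=q[state].get)
```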
- Title
- EXTENDED REALITY (XR) & GAMIFICATION IN THE CONTEXT OF THE INTERNET OF THINGS (IOT) AND ARTIFICIAL INTELLIGENCE (AI)
- Creator
- Pappas, Georgios
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- The present research develops a holistic framework for, and way of thinking about, Deep Technologies related to Gamification, eXtended Reality (XR), the Internet of Things (IoT), and Artificial Intelligence (AI). Starting with the concept of gamification and the immersive technology of XR, we create interconnections with IoT and AI implementations. While each constituent technology has its own unique impact, our approach uniquely addresses the combinational potential of these technologies, which may have greater impact than any technology on its own. To approach the research problem more efficiently, the methodology divides it into smaller parts; for each part, novel applications were designed and developed, including gamified tools, serious games, and AR/VR implementations. We apply the proposed framework in two different domains: autonomous vehicles (AVs) and distance learning. Specifically, in Chapter 2, an innovative hybrid tool for distance learning is showcased where, among others, the fusion with IoT provides a novel pseudo-multiplayer mode. This mode may transform advanced asynchronous gamified tools into synchronous ones by enabling or disabling virtual events and phenomena, enhancing the student experience. Next, in Chapter 3, along with gamification, the combination of XR with IoT data streams is presented, this time in an automotive context. We showcase how this fusion of technologies provides low-latency monitoring of vehicle characteristics, and how these can be visualized in augmented and virtual reality using low-cost hardware and services. This part of the proposed framework provides a methodology for creating any type of Digital Twin with near real-time data visualization. Following that, in Chapter 4 we establish the second part of the suggested holistic framework, where Virtual Environments (VEs) in general can work as synthetic data generators and thus be a great source of artificial data suitable for training AI models. This part of the research includes two novel implementations: the Gamified Digital Simulator (GDS) and the Virtual LiDAR Simulator. Having established the holistic framework, in Chapter 5 we “zoom in” on gamification, exploring deeper aspects of virtual environments and discussing how serious games can be combined with other facets of virtual layers (cyber ranges, virtual learning environments) to provide enhanced training and advanced learning experiences. Lastly, in Chapter 6, “zooming out” from gamification, an additional enhancement layer is presented. We showcase the importance of human-centered design via an implementation that simulates AV-pedestrian interactions in a virtual and safe environment.
- Title
- Efficient Distributed Algorithms : Better Theory and Communication Compression
- Creator
- LI, YAO
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- Large-scale machine learning models are often trained by distributed algorithms over either centralized or decentralized networks. The former uses a central server to aggregate the information of local computing agents and broadcast the averaged parameters in a master-slave architecture. The latter considers a connected network formed by all agents, where information can only be exchanged with accessible neighbors via a mixing matrix of communication weights encoding the network's topology. Compared with centralized optimization, decentralization facilitates data privacy and reduces the communication burden that model synchronization places on the single central agent, but the connectivity of the communication network weakens the theoretical convergence complexity of decentralized algorithms. Therefore, there are still gaps between decentralized and centralized algorithms in terms of convergence conditions and rates. In the first part of this dissertation, we consider two decentralized algorithms, EXTRA and NIDS, both of which converge linearly with strongly convex objective functions, and answer two questions regarding them: What are the optimal upper bounds for their stepsizes? Do decentralized algorithms require more properties on the functions for linear convergence than centralized ones? More specifically, we relax the required conditions for linear convergence of both algorithms. For EXTRA, we show that the stepsize is comparable to that of centralized algorithms. For NIDS, the upper bound of the stepsize is shown to be exactly the same as the centralized ones. In addition, we relax the requirements on the objective functions and the mixing matrices, and provide linear convergence results for both algorithms under the weakest conditions. As the number of computing agents and the dimension of the model increase, the communication cost of parameter synchronization becomes the major obstacle to efficient learning. Communication compression techniques have exhibited great potential as an antidote, accelerating distributed machine learning by mitigating the communication bottleneck. In the rest of the dissertation, we propose compressed residual communication frameworks for both centralized and decentralized optimization and design different algorithms to achieve efficient communication. For centralized optimization, we propose DORE, a modified parallel stochastic gradient descent method with bidirectional residual compression, which reduces over 95% of the overall communication. Our theoretical analysis demonstrates that the proposed strategy has superior convergence properties for both strongly convex and nonconvex objective functions. For decentralized optimization, existing works mainly focus on smooth problems and on compressing DGD-type algorithms; the restriction to smooth objective functions and the sublinear convergence rate under relatively strong assumptions limit these algorithms' application and practical performance. Motivated by primal-dual algorithms, we propose Prox-LEAD, a linearly convergent decentralized algorithm with compression, to tackle strongly convex problems with a nonsmooth regularizer. Our theory describes the coupled dynamics of the inexact primal and dual updates as well as the compression error without assuming bounded gradients. The superiority of the proposed algorithm is demonstrated through comparison with state-of-the-art algorithms in terms of convergence complexities and through numerical experiments. Our algorithmic framework also sheds light on compressed communication for other primal-dual algorithms by reducing the impact of inexact iterations.
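To make the decentralized setting concrete, here is a minimal sketch of a plain DGD-type step (not EXTRA, NIDS, or Prox-LEAD themselves): each agent mixes neighbors' iterates through a doubly stochastic matrix W and then follows its own local gradient. The residual O(alpha) consensus bias this baseline exhibits is the kind of gap between decentralized and centralized methods that the dissertation studies.

```python
import numpy as np

def dgd_step(X, W, grads, alpha):
    """One DGD-type decentralized step: mix neighbor iterates via the
    mixing matrix W (symmetric, doubly stochastic, encoding topology),
    then descend along each agent's local gradient. X is (agents, dim)."""
    return W @ X - alpha * grads

# toy run: 3 agents on a ring with local losses f_i(x) = (x - c_i)^2 / 2
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
C = np.array([[1.0], [2.0], [3.0]])          # local minimizers c_i
X = np.zeros((3, 1))
for _ in range(300):
    X = dgd_step(X, W, X - C, alpha=0.1)     # grad f_i(x_i) = x_i - c_i
print(X.ravel())  # near the global minimizer 2.0, up to an O(alpha) bias
```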
- Title
- Efficient Transfer Learning for Heterogeneous Machine Learning Domains
- Creator
- Zhu, Zhuangdi
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- Recent advances in deep machine learning hinge on large amounts of labeled data. Such heavy dependence on supervision data impedes the broader application of deep learning in more practical scenarios, where data annotation and labeling can be expensive (e.g. high-frequency trading) or even dangerous (e.g. training autonomous-driving models). Transfer Learning (TL), equivalently referred to as knowledge transfer, is an effective strategy for confronting such challenges. TL, by its definition, distills external knowledge from relevant domains into the target learning domain, hence requiring fewer supervision resources than learning from scratch. TL is beneficial for learning tasks in which supervision data is limited or even unavailable, and it is an essential property for realizing Generalized Artificial Intelligence. In this thesis, we propose sample-efficient TL approaches using limited, sometimes unreliable, resources. We take a deep look into the settings of Reinforcement Learning (RL) and Supervised Learning, and derive solutions for the two domains respectively. In particular, for RL we focus on a problem setting called imitation learning, where supervision from the environment is either unavailable or scarcely provided, and the learning agent must transfer knowledge from exterior resources, such as demonstration examples of a previously trained expert, to learn a good policy. For supervised learning, we consider a distributed machine learning scheme called Federated Learning (FL), which is a more challenging scenario than traditional machine learning, since the training data is distributed and non-sharable during the learning process. Under this distributed setting, it is imperative to enable TL among distributed learning clients to reach a satisfactory generalization performance. We prove through both theoretical support and extensive experiments that our proposed algorithms can facilitate the machine learning process with knowledge transfer to achieve higher asymptotic performance, in a principled and more efficient manner than prior art.
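For readers unfamiliar with the FL setting mentioned above, the canonical aggregation rule is FedAvg. A minimal sketch follows (standard FedAvg, not the dissertation's own transfer method): only model parameters, never raw data, leave the clients.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Standard FedAvg aggregation: average client parameters weighted
    by local dataset size. Raw training data stays on the clients."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# toy usage: three clients with 1-D "models"
params = [np.array([1.0]), np.array([2.0]), np.array([4.0])]
sizes = [10, 30, 60]
print(fedavg(params, sizes))   # 0.1*1 + 0.3*2 + 0.6*4 = 3.1
```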
- Title
- Efficient and Secure Message Passing for Machine Learning
- Creator
- Liu, Xiaorui
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- Machine learning (ML) techniques have brought revolutionary impact to human society, and they will continue to act as technological innovators in the future. To broaden this impact, it is urgent to solve the emerging and critical challenges in machine learning, such as efficiency and security issues. On the one hand, ML models have become increasingly powerful due to big data and big models, but this brings tremendous challenges in designing efficient optimization algorithms to train big ML models from big data. The most effective approach to large-scale ML is to parallelize the computation tasks on distributed systems composed of many computational devices. However, in practice, the scalability and efficiency of such systems are greatly limited by information synchronization, since the message passing between devices dominates the total running time. In other words, the major bottleneck lies in the high communication cost between devices, especially as the scale of the system and the models grows while the communication bandwidth remains relatively limited. This communication bottleneck often limits the practical speedup of distributed ML systems. On the other hand, recent research has revealed that many ML models suffer from security vulnerabilities; in particular, deep learning models can be easily deceived by unnoticeable perturbations in data. Meanwhile, graphs are a prevalent data structure for many real-world data that encode pairwise relations between entities, such as social networks, transportation networks, and chemical molecules. Graph neural networks (GNNs) generalize and extend the representation learning power of traditional deep neural networks (DNNs) from regular grids, such as image, video, and text, to irregular graph-structured data through message passing frameworks. Therefore, many important applications on these data can be treated as computational tasks on graphs, such as recommender systems, social network analysis, and traffic prediction. Unfortunately, the vulnerability of deep learning models also translates to GNNs, which raises significant concerns about their applications, especially in safety-critical areas. Therefore, it is critical to design intrinsically secure ML models for graph-structured data. The primary objective of this dissertation is to develop solutions to these challenges via innovative research and principled methods. In particular, we propose multiple distributed optimization algorithms with efficient message passing to mitigate the communication bottleneck and speed up ML model training in distributed ML systems. We also propose multiple secure message passing schemes as the building blocks of graph neural networks, aiming to significantly improve the security and robustness of ML models.
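As context for the message passing frameworks mentioned above, here is a minimal mean-aggregation GNN layer — a sketch of the generic scheme, not the secure variants the dissertation proposes.

```python
import numpy as np

def message_passing(A, H, W):
    """One generic GNN message passing step: each node averages its
    neighbors' features (A is the adjacency matrix with self-loops),
    applies a learned linear map W, then a ReLU nonlinearity."""
    deg = A.sum(axis=1, keepdims=True)        # node degrees
    return np.maximum(((A @ H) / deg) @ W, 0.0)

# toy graph: a 3-node path with 2-d features
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])                  # adjacency + self-loops
H = np.array([[1., 0.], [0., 1.], [1., 1.]])
print(message_passing(A, H, np.eye(2)))
```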
- Title
- Energy Conservation in Heterogeneous Smartphone Ad Hoc Networks
- Creator
- Mariani, James
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
- In recent years mobile computing has been rapidly expanding, to the point that there are now more devices than people. While it was once common for every household to have one PC, it is now common for every person to have a mobile device. With the increased use of smartphones, there has also been an increase in the need for mobile ad hoc networks, in which phones connect directly to each other without an intermediate router. Most modern smartphones are equipped with both Bluetooth and Wifi Direct, where Wifi Direct has the better transmission range and rate and Bluetooth is more energy efficient; however, only one or the other is typically used in a smartphone ad hoc network. We propose HSNet, a Heterogeneous Smartphone Ad Hoc Network framework enabling automatic switching between Wifi Direct and Bluetooth so as to minimize energy consumption while still maintaining an efficient network. We develop an application to evaluate the HSNet framework, which shows significant energy savings when our switching algorithm sends messages over the less energy-intensive technology in situations where energy conservation is desired. We discuss additional features of HSNet, such as load balancing, which helps increase the lifetime of the network by more evenly distributing slave nodes among connected master nodes. Finally, we show that the throughput of our system is not affected by technology switching for most scenarios. Future work includes exploring energy-efficient routing as well as simulation/scale testing for larger and more diverse smartphone ad hoc networks.
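The abstract does not reproduce HSNet's actual switching algorithm; the hypothetical sketch below only illustrates the shape of such a decision, trading Bluetooth's energy efficiency against Wifi Direct's range and rate. All thresholds and data rates are invented for illustration.

```python
def pick_radio(msg_bytes, battery_pct, bt_rate=2e6):
    """Hypothetical HSNet-style radio choice: prefer Bluetooth (lower
    power) when the battery is low or the message is small enough to
    send quickly; otherwise use Wifi Direct for its range and rate.
    Thresholds are illustrative, not taken from the thesis."""
    if battery_pct < 20:
        return "bluetooth"                    # conserve remaining energy
    if msg_bytes * 8 / bt_rate < 0.5:         # under ~0.5 s on Bluetooth
        return "bluetooth"
    return "wifi_direct"                      # large payload: favor throughput

print(pick_radio(msg_bytes=2_000, battery_pct=80))       # bluetooth
print(pick_radio(msg_bytes=5_000_000, battery_pct=80))   # wifi_direct
```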
- Title
- Evolving Phenotypically Plastic Digital Organisms
- Creator
- Lalejini, Alexander
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- The ability to dynamically respond to cues from the environment is a fundamental feature of most adaptive systems. In biological systems, a change to an organism in response to environmental cues is called phenotypic plasticity. Indeed, phenotypic plasticity underlies many of the adaptive traits and developmental patterns found in nature and serves as a key mechanism for responding to spatially or temporally variable environments. Most computer programs require phenotypic plasticity, as they must respond dynamically to stimuli such as user input, sensor data, et cetera. As such, phenotypic plasticity also has practical applications in genetic programming, wherein we apply the natural principles of evolution to automatically synthesize computer programs rather than writing them by hand. In this dissertation, I achieve two synergistic aims: (1) I use populations of self-replicating computer programs (digital organisms) to empirically study the conditions under which adaptive phenotypic plasticity evolves and how its evolution shapes subsequent evolutionary outcomes; and (2) I transfer insights from biology to develop novel genetic programming techniques in order to evolve more responsive (i.e., phenotypically plastic) computer programs. First, I illustrate the importance of mutation rate, environmental change, and partially plastic building blocks for the evolution of adaptive plasticity. Next, I show that adaptive phenotypic plasticity stabilizes populations against environmental change, allowing them to more easily retain novel adaptive traits. Finally, I improve our ability to evolve phenotypically plastic computer programs with three novel genetic programming techniques: (1) SignalGP, which provides mechanisms to control code expression based on environmental cues; (2) tag-based genetic regulation, which adjusts code expression based on current context; and (3) tag-accessed memory, which provides more dynamic mechanisms for storing data.
- Title
- Example-Based Parameterization of Linear Blend Skinning for Skinning Decomposition (EP-LBS)
- Creator
- Hopkins, Kayra M.
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
- This thesis presents Example-based Parameterization of Linear Blend Skinning for Skinning Decomposition (EP-LBS), a unified and robust method for using example data to simplify and improve the development and parameterization of high-quality 3D models for animation. Animation and three-dimensional (3D) computer graphics have quickly become a popular medium for education, entertainment, and scientific simulation. In addition to film, gaming, and research applications, recent advancements in augmented reality (AR) and virtual reality (VR) are driving additional demand for 3D content. However, the success of graphics in these arenas depends greatly on the efficiency of model creation and the realism of the animation or 3D image. A common method for figure animation is skeletal animation using linear blend skinning (LBS), in which vertices are deformed by a weighted sum of displacements due to an embedded skeleton. This research addresses the problem that computing LBS animation parameters, including determining the rig (the skeletal structure), identifying influence bones (which bones influence which vertices), and assigning skinning weights (the amount of influence a bone has on a vertex), is a tedious process that is difficult to get right. Even the most skilled animators must work tirelessly to design an effective character model, and they often find themselves repeatedly correcting flaws in the parameterization. Significant research, including the use of example data, has focused on simplifying and automating individual components of the LBS deformation process and increasing the quality of the resulting animations. However, constraints on LBS animation parameters make automated analytic computation of the values as challenging as traditional 3D animation methods. Skinning decomposition is one such method of computing LBS animation parameters from example data; its challenges include constraint adherence and computationally efficient determination of LBS parameters. The EP-LBS method presented in this thesis utilizes example data as input to a least-squares non-linear optimization process. Given a model as a set of example poses, captured from scan data or manually created, EP-LBS formulates a single optimization equation that allows simultaneous computation of all animation parameters for the model. An iterative clustering methodology is used to construct an initial parameterization estimate, which is then subjected to non-linear optimization to improve the fit to the example data. Simultaneous optimization of weights and joint transformations is complicated by a wide range of differing constraints and parameter interdependencies. To address interdependent and conflicting constraints, parameter mapping solutions are presented that map the constraints to an alternative domain more suitable for nonlinear minimization. The presented research is a comprehensive, data-driven solution for automatically determining skeletal structure, influence bones, and skinning weights from a set of example data. Results are presented for a range of models that demonstrate the effectiveness of the method.
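The LBS deformation the abstract builds on has a standard closed form: each deformed vertex is a skinning-weighted sum of bone transforms applied to the rest-pose vertex. A minimal numpy sketch of that textbook equation follows (the forward model only, not EP-LBS's optimization); EP-LBS effectively inverts this map, solving for rig, weights, and transforms from example poses.

```python
import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """Standard LBS: v' = sum_b w[v,b] * (T_b @ v).
    vertices: (V, 3) rest pose; weights: (V, B) with rows summing to 1;
    bone_transforms: (B, 4, 4) homogeneous bone matrices."""
    Vh = np.hstack([vertices, np.ones((len(vertices), 1))])   # homogeneous coords
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, Vh)  # T_b applied per bone
    blended = np.einsum('vb,vbi->vi', weights, per_bone)      # weight and sum
    return blended[:, :3]
```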
- Title
- Face Anti-Spoofing : Detection, Generalization, and Visualization
- Creator
- Liu, Yaojie
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- Face anti-spoofing is the process of distinguishing genuine faces from face presentation attacks, in which attackers present spoof faces (e.g. a photograph, digital screen, or mask) to the face recognition system and attempt to be authenticated as the genuine user. In recent years, face anti-spoofing has drawn increasing attention from the vision community, as it is a crucial step in protecting face recognition systems from security breaches. Previous approaches formulate face anti-spoofing as a binary classification problem, and many of them struggle to generalize to different conditions (such as pose, lighting, expressions, camera sensors, and unknown spoof types). Moreover, those methods work as a black box and cannot provide interpretation or visualization for their decisions. To address those challenges, we investigate face anti-spoofing in three stages: detection, generalization, and visualization. In the detection stage, we learn a CNN-RNN model to estimate the auxiliary tasks of face depth and rPPG signal estimation, which can bring additional knowledge for spoof detection. In the generalization stage, we investigate the detection of unknown spoof attacks and propose a novel Deep Tree Network (DTN) to represent them well. In the visualization stage, we find the “spoof trace,” the subtle image pattern in spoof faces (e.g., color distortion, 3D mask edge, and Moiré pattern), is effective in explaining why a spoof is a spoof. We provide proper physical modeling of the spoof traces and design a generative model to disentangle them from input faces. In addition, we show that proper physical modeling can benefit other face problems, such as face shadow detection and removal: a proper shadow model can not only detect the shadow region effectively, but also remove the shadow in a visually plausible manner.
- Title
- Face Recognition : Representation, Intrinsic Dimensionality, Capacity, and Demographic Bias
- Creator
- Gong, Sixue
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- Face recognition is a widely adopted technology with numerous applications, such as mobile phone unlock, mobile payment, surveillance, social media, and law enforcement. There has been tremendous progress in enhancing the accuracy of face recognition systems over the past few decades, much of which can be attributed to deep learning. Despite this progress, several fundamental problems in face recognition remain unsolved, including finding a salient representation, estimating intrinsic dimensionality and representation capacity, and addressing demographic bias. With growing applications of face recognition, the need for an accurate, robust, compact, and fair representation is evident. In this thesis, we first develop algorithms to obtain practical estimates of the intrinsic dimensionality of face representations, and propose a new dimensionality reduction method to project feature vectors from ambient space to intrinsic space. Building on the intrinsic dimensionality study, we then estimate the capacity of face representations, casting the face capacity estimation problem in the information-theoretic framework of the capacity of a Gaussian noise channel. Numerical experiments on unconstrained faces (IJB-C) provide a capacity upper bound of 27,000 for the FaceNet representation and 84,000 for the SphereFace representation at 1% FAR. In the second part of the thesis, we address the demographic bias problem in face recognition systems, where errors are lower on certain cohorts belonging to specific demographic groups. We propose two de-biasing frameworks that extract feature representations to improve fairness in face recognition. Experiments on benchmark face datasets (RFW, LFW, IJB-A, and IJB-C) show that our approaches are able to mitigate face recognition bias across various demographic groups (biasness drops from 6.83 to 5.07) while maintaining competitive performance (i.e., 99.75% on LFW, and 93.70% TAR @ 0.1% FAR on IJB-C). Lastly, we exploit the global distribution of deep face representations, derived from correlations between within-class and cross-class image samples, to enhance the discriminativeness of each identity's face representation in the embedding space. Our new approach to face representation achieves state-of-the-art performance for both verification and identification tasks on benchmark datasets (99.78% on LFW, 93.40% on CPLFW, 98.41% on CFP-FP, 96.2% TAR @ 0.01% FAR and 95.3% Rank-1 accuracy on IJB-C). Since the primary techniques we employ in this dissertation are not specific to faces, we believe our research can be extended to other problems in computer vision, such as general image classification and representation learning.
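For context on the information-theoretic framing above, the classical quantity being adapted is the capacity of an additive white Gaussian noise channel; how the thesis maps face embeddings onto signal and noise powers is not detailed in this abstract.

```latex
% Capacity of an additive white Gaussian noise channel with signal
% power P and noise power N, in bits per channel use:
C = \frac{1}{2}\log_2\!\left(1 + \frac{P}{N}\right)
```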
- Title
- Fast edit distance calculation methods for NGS sequence similarity
- Creator
- Islam, A. K. M. Tauhidul
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- Sequence fragments generated from targeted regions of phylogenetic marker genes provide valuable insight into identifying and classifying organisms and inferring taxonomic hierarchies. In recent years, significant development in targeted gene fragment sequencing through Next Generation Sequencing (NGS) technologies has increased the need for efficient sequence similarity computation methods for very large numbers of pairs of NGS sequences. The edit distance has been widely used to determine the dissimilarity between pairs of strings. All known methods for edit distance calculation run in near-quadratic time with respect to string length, and it may take days or weeks to compute distances between such large numbers of pairs of NGS sequences. To address this performance bottleneck, faster edit distance approximation and bounded edit distance calculation methods have been proposed. Despite these efforts, the existing edit distance calculation methods are not fast enough when computing large numbers of pairs of NGS sequences. To further reduce computation time, many NGS sequence similarity methods have been proposed using matching k-mers. These methods extract all possible k-mers from the NGS sequences and compare the similarity of sequence pairs based on their shared k-mers; however, they reduce computation time at the cost of accuracy. In this dissertation, our goal is to compute NGS sequence similarity using edit distance based methods while reducing computation time. We propose several edit distance prediction methods using dataset-independent reference sequences that are distant from each other. These reference sequences convert the sequences in a dataset into feature vectors by computing the edit distance between each sequence and each reference sequence. Given sequences A and B and a reference sequence r, the edit distance satisfies ed(A,B) ≥ |ed(A,r) − ed(B,r)|. Since the reference sequences are significantly different from each other, with a sufficiently large number of reference sequences and a high similarity threshold, the differences of the edit distances of A and B with respect to the reference sequences are close to ed(A,B). Using this property, we predict edit distances in the vector space based on Euclidean and Chebyshev distances. Further, we develop a small set of deterministically generated reference sequences with maximum distance between each of them to predict higher edit distances more efficiently. This method predicts edit distances between corresponding sub-sequences separately and then merges the partial distances to predict the edit distance between the entire sequences; its computational complexity is linear with respect to sequence length. The proposed edit distance prediction methods are significantly fast while achieving very good accuracy for high similarity thresholds. We also show their effectiveness for agglomerative hierarchical clustering. Finally, we propose an efficient bounded exact edit distance calculation method using the trace [1]. For a given edit distance threshold d, only letters up to d positions apart can be part of an edit operation; hence, we generate pairs of sub-sequences with length difference up to d so that no edit operation spills over to the adjacent pairs of sub-sequences. We then compute the trace cost in such a way that the number of matching letters between the sub-sequences is maximized. This technique does not guarantee locally optimal edit distances; however, it guarantees globally optimal edit distance between the entire sequences for distances up to d. The bounded exact edit distance calculation method is an order of magnitude faster than dynamic programming edit distance calculation.
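A minimal sketch of the lower-bound property exploited above, paired with the textbook dynamic-programming edit distance; the reference sequences here are illustrative placeholders, not the deterministically generated set the dissertation constructs.

```python
def edit_distance(a, b):
    """Classic Levenshtein edit distance via dynamic programming,
    O(len(a) * len(b)) time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

def ed_lower_bound(a, b, refs):
    """ed(A,B) >= |ed(A,r) - ed(B,r)| holds for every reference r
    (reverse triangle inequality), so the max over references is a
    valid lower bound on the true edit distance."""
    return max(abs(edit_distance(a, r) - edit_distance(b, r)) for r in refs)

refs = ["ACGTACGT", "TTTTCCCC"]                  # illustrative references
print(edit_distance("ACGTT", "ACGAT"))           # 1 (one substitution)
print(ed_lower_bound("ACGTT", "ACGAT", refs))    # <= the true distance
```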
- Title
- Finding optimized bounding boxes of polytopes in d-dimensional space and their properties in k-dimensional projections
- Creator
- Shahid, Salman (Of Michigan State University)
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
- Using minimal bounding boxes to encapsulate or approximate a set of points in d-dimensional space is a non-trivial problem with applications in a variety of fields, including collision detection, object rendering, high-dimensional databases, and statistical analysis, to name a few. While a significant amount of work has been done on the three-dimensional variant of the problem (i.e. finding the minimum-volume bounding box of a set of points in three dimensions), it is difficult to find a simple method that does the same for higher dimensions. Even in three dimensions, existing methods suffer either from high time complexity or from suboptimal results traded for faster execution. In this thesis we present a new approach to finding the optimized minimum bounding boxes of a set of points defining convex polytopes in d-dimensional space. The solution also gives the optimal bounding box in three dimensions with a much simpler implementation, while significantly speeding up execution for large numbers of vertices. The basis of the proposed approach is a series of unique properties of the k-dimensional projections that are leveraged into an algorithm. The algorithm works by constructing the convex hulls of a given set of points and optimizing the projections of those hulls in two-dimensional space using the new concept of Simultaneous Local Optimal. We show that the proposed algorithm performs significantly better than the current state-of-the-art approach in terms of both time and accuracy. To illustrate the importance of the result in a real-world application, the optimized bounding box algorithm is used to develop a method for carrying out range queries in high-dimensional databases. This method uses data transformation techniques in conjunction with a set of heuristics to provide significant performance improvement.
- Title
- GENERATIVE SIGNAL PROCESSING THROUGH MULTILAYER MULTISCALE WAVELET MODELS
- Creator
- He, Jieqian
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- Wavelet analysis and deep learning are two popular fields in signal processing. The scattering transform from wavelet analysis is a recently proposed mathematical model of convolutional neural networks. Signals with repeated patterns can be analyzed using the statistics of such models; specifically, signals from certain classes can be recovered from the related statistics. We first focus on recovering 1D deterministic Dirac signals from multiscale statistics, proving that a Dirac signal can be recovered from multiscale statistics up to translation and reflection. We then switch to a stochastic version, modeled using Poisson point processes, and prove that wavelet statistics at small scales capture the intensity parameter of Poisson point processes. We also design a scattering generative adversarial network (GAN) to generate new Poisson point samples from the statistics of multiple given samples. Next we consider texture images, successfully synthesizing new textures given one sample from the texture class through multiscale, multilayer wavelet models. Finally, we analyze and prove why the multiscale multilayer model is essential for signal recovery, especially for natural texture images.
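For reference, the windowed scattering coefficients that serve as such multiscale, multilayer statistics are, in Mallat's standard formulation (the abstract does not spell out its exact variant):

```latex
% First- and second-order windowed scattering coefficients, built from
% wavelets \psi_\lambda and a low-pass window \phi:
S_1 x(t, \lambda_1) = \bigl(\lvert x \ast \psi_{\lambda_1} \rvert \ast \phi\bigr)(t),
\qquad
S_2 x(t, \lambda_1, \lambda_2) =
\bigl(\bigl\lvert \lvert x \ast \psi_{\lambda_1} \rvert \ast \psi_{\lambda_2} \bigr\rvert \ast \phi\bigr)(t)
```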
- Title
- Gender-related effects of advanced placement computer science courses on self-efficacy, belongingness, and persistence
- Creator
- Good, Jonathon Andrew
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
- The underrepresentation of women in computer science has been a concern of educators for multiple decades. The low representation of women in computer science is a pattern from K-12 schools through the university level and the profession. One purpose of the introduction of the Advanced Placement Computer Science Principles (APCS-P) course in 2016 was to help broaden participation in computer science at the high school level. The design of APCS-P allowed teachers to present computer science from a broad perspective, allowing students to pursue problems of personal significance and allowing computing projects to take a variety of forms. The nationwide enrollment statistics for Advanced Placement Computer Science Principles in 2017 showed a higher proportion of female students (30.7%) than Advanced Placement Computer Science A (23.6%). However, it was unknown to what degree enrollment in these courses was related to students' plans to enroll in future computer science courses. This correlational study examined how students' enrollment in Advanced Placement Computer Science courses, along with student gender, predicted students' sense of computing self-efficacy, belongingness, and expected persistence in computer science. A nationwide sample of 263 students from 10 APCS-P and 10 APCS-A courses participated in the study. Students completed pre- and post-surveys at the beginning and end of their Fall 2017 semester regarding their computing self-efficacy, belongingness, and plans to continue computer science studies. Using hierarchical linear modeling analysis, due to the nested nature of the data within class sections, the researcher found that APCS course type was not predictive of self-efficacy, belongingness, or expectation to persist in computer science. The results suggested that female students' self-efficacy declined over the course of the study; however, gender was not predictive of belongingness or expectation to persist. Students were found to have entered both courses with a high sense of self-efficacy, belongingness, and expectation to persist in computer science. These results suggest that students enrolled in both Advanced Placement Computer Science courses are already likely to pursue computer science, and that the type of APCS course in which students enroll does not relate to students' interest in computer science. This suggests that educators should look beyond AP courses as a method of exposing students to computer science, possibly through efforts such as computational thinking and cross-curricular uses of computer science concepts and practices. Educators and administrators should also continue to examine whether there are structural biases in how students are directed to computer science courses. As for the gender-related drop in self-efficacy, this is in alignment with previous research suggesting that educators should carefully scaffold students' initial experiences in the course so as not to negatively influence their self-efficacy. Further research should examine how specific pedagogical practices could influence students' persistence, as the designation and curriculum of APCS-A or APCS-P alone may not capture the myriad ways in which teachers may be addressing gender inequity in their classrooms. Research should also examine how student interest in computer science is affected at an earlier age, as the APCS courses may be reaching students after they have already formed their opinions about computer science as a field.
- Title
- High-precision and Personalized Wearable Sensing Systems for Healthcare Applications
- Creator
- Tu, Linlin
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- The cyber-physical system (CPS) has been discussed and studied extensively since 2010. It provides various solutions for monitoring users' physical and psychological health states, enhancing the user experience, and improving lifestyle. A variety of mobile internet devices with built-in sensors, such as accelerometers, cameras, PPG sensors, pressure sensors, and microphones, can be leveraged to build mobile cyber-physical applications that collect sensing data from the real world, process the data, communicate with internet services, and transform the data into behavioral and physiological models. The detected results can be used as feedback to help users understand their behavior, improve their lifestyle, or avoid danger; they can also be delivered to therapists to facilitate diagnosis. Designing CPS for health monitoring is challenging due to multiple factors. First, high estimation accuracy is necessary for health monitoring, yet some systems suffer from irregular noise; for example, PPG sensors for cardiac health state monitoring are extremely vulnerable to motion noise. Second, to include the human in the loop, health monitoring systems must be user-friendly, yet some systems involve cumbersome equipment and long periods of data collection, which is not feasible for daily monitoring. Most importantly, large-scale, high-level health-related monitoring systems, such as systems for human activity recognition, require both high accuracy and communication efficiency; however, with users' raw data uploaded to the server, centralized learning fails to protect users' private information and is communication-inefficient. The research introduced in this dissertation addresses these three significant challenges in developing health-related monitoring systems: we build a lightweight system for accurate heart rate measurement during exercise, design a smart in-home breathing training system with biofeedback via a virtual reality (VR) game, and propose federated learning via dynamic layer sharing for human activity recognition.
- Title
- I AM DOING MORE THAN CODING : A QUALITATIVE STUDY OF BLACK WOMEN HBCU UNDERGRADUATES’ PERSISTENCE IN COMPUTING
- Creator
- Benton, Amber V.
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- The purpose of my study is to explore why and how Black women undergraduates at historically Black colleges and universities (HBCUs) persist in computing. By centering the experiences and stories of Black women undergraduates, this dissertation expands traditional, dominant ways of understanding student persistence in higher education. Critical Race Feminism (CRF) was applied as a conceptual framework to the stories of 11 Black women undergraduates in computing, drawing on the small stories qualitative approach to examine their day-to-day experiences at HBCUs as they persisted in their computing degree programs. The findings suggest that: (a) gender underrepresentation in computing affects Black women's experiences, (b) the computing culture at HBCUs directly affects Black women in computing, (c) Black women need access to resources and opportunities to persist in computing, (d) computing-related internships are beneficial professional opportunities but are also sites of gendered racism for Black women, (e) connectedness between Black people is innate but also needs to be fostered, (f) Black women want to engage in computing that contributes to social impact and community uplift, and (g) science identity is not a primary identity for Black women in computing. This dissertation also argues that discipline-focused efforts contribute to the persistence of Black women in computing.
- Title
- IMPROVED DETECTION AND MANAGEMENT OF PHYTOPHTHORA SOJAE
- Creator
- McCoy, Austin Glenn
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- Phytophthora spp. cause root and stem rots, leaf blights, and fruit rots on agricultural and economically important plant species. Symptoms of Phytophthora-infected plants, particularly root rots, can be difficult to distinguish from those caused by other oomycete and fungal pathogens and often result in devastating losses. Phytophthora spp. can lie dormant for many years in the oospore stage, making long-term management of these diseases difficult. Phytophthora sojae is an important and prevalent pathogen of soybean (Glycine max L.) worldwide, causing Phytophthora stem and root rot (PRR). PRR disease management during the growing season relies on an integrated pest management approach using a combination of host resistance, chemical compounds (fungicides; oomicides), and cultural practices. This dissertation research therefore focuses on improving detection and management recommendations for Phytophthora sojae. In Chapter 1, I provide background and a review of the current literature on P. sojae management, including genetic resistance, chemical control compounds (fungicides; oomicides), and cultural practices used to mitigate losses to PRR. In my second chapter, I validate the sensitivity and specificity of a preformulated Recombinase Polymerase Amplification assay for Phytophthora spp. This assay needs no refrigeration, does not require extensive DNA isolation, can be used in the field, and on different qPCR platforms could reliably detect 3.3-330.0 pg of Phytophthora spp. DNA within plant tissue in under 30 minutes. Based on the limited reagents needed, ease of use, and reliability, this assay would benefit diagnostic labs and inspectors monitoring regulated and non-regulated Phytophthora spp. Next, I transitioned the Habgood-Gilmour Spreadsheet (‘HaGiS’) from Microsoft Excel format to the R package ‘hagis’ and improved upon the analyses readily available for comparing pathotypes from different populations of P. sojae (Chapter 3; ‘hagis’ beta-diversity). I then implemented the R package ‘hagis’ in my own P. sojae pathotype and fungicide sensitivity survey in the state of Michigan, identifying effective resistance genes and seed treatment compounds for the management of PRR. This study identified a loss of Rps1c and Rps1k, the two most widely planted Phytophthora sojae resistance genes, as viable management tools in Michigan, and an increase in pathotype complexity compared to a survey conducted twenty years earlier in Michigan (Chapter 4). In Chapter 5, I led a multi-state integrated pest management field trial conducted in Michigan, Indiana, and Minnesota to study the effects of partial resistance and seed treatments, with or without ethaboxam and metalaxyl, on soybean stand, plant dry weights, and final yields under P. sojae pressure. This study found that oomicide-treated seed protects stand across three locations in the Midwest, but the response of soybean varieties to seed treatment was variety- and year-specific; significant yield benefits from oomicide-treated seed were observed in only one location and year. The effects of partial resistance were inconclusive and highlighted the need for a more informative and reliable rating system for soybean varieties' partial resistance to P. sojae. Finally, in Chapter 6, I present conclusions and impacts of the studies in this dissertation. Overall, these studies improve the detection, virulence data analysis, and integrated pest management recommendations for Phytophthora sojae.
- Title
- IMPROVING THE PREDICTABILITY OF HYDROLOGIC INDICES IN ECOHYDROLOGICAL APPLICATIONS
- Creator
- Hernandez Suarez, Juan Sebastian
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- Monitoring freshwater ecosystems allows us to better understand their overall ecohydrological condition within large and diverse watersheds. Due to the significant costs associated with biological monitoring, hydrological modeling is widely used to calculate ecologically relevant hydrologic indices (ERHIs) for stream health characterization in locations lacking data. However, the reliability and applicability of these models within ecohydrological frameworks are major concerns. In particular, hydrologic modeling's ability to predict ERHIs is limited, especially when models are calibrated by optimizing a single objective function or selecting a single optimal solution. The goal of this research was to develop model calibration strategies based on multi-objective optimization and Bayesian parameter estimation to improve the predictability of ERHIs and the overall representation of the streamflow regime. The research objectives were to (1) evaluate predictions of ERHIs using different calibration techniques based on widely used performance metrics, (2) develop performance- and signature-based calibration strategies explicitly constraining or targeting ERHIs, and (3) quantify the modeling uncertainty of ERHIs using the results from multi-objective model calibration and Bayesian inference. The developed strategies were tested in an agriculture-dominated watershed in Michigan, US, using the Unified Non-dominated Sorting Algorithm III (U-NSGA-III) for multi-objective calibration and the Soil and Water Assessment Tool (SWAT) for hydrological modeling. Performance-based calibration used objective functions based on metrics calculated from streamflow time series, whereas signature-based calibration used ERHI values to formulate the objective functions. For uncertainty quantification, a lumped error model accounting for heteroscedasticity and autocorrelation was considered, and the multiple-try Differential Evolution Adaptive Metropolis (MT-DREAM(ZS)) algorithm was implemented for Markov Chain Monte Carlo (MCMC) sampling. Regarding the first objective, the results showed that using different sets of solutions instead of a single optimal solution introduces more flexibility in the predictability of various ERHIs. Regarding the second objective, both performance-based and signature-based calibration strategies represented most of the selected ERHIs within a +/-30% relative error acceptability threshold while yielding consistent runoff predictions; the performance-based strategy was preferred, since it showed lower dispersion of near-optimal Pareto solutions when representing the selected indices and other hydrologic signatures based on water balance and flow duration curve characteristics. Finally, regarding the third objective, using near-optimal Pareto parameter distributions as prior knowledge in Bayesian calibration generally reduced both the bias and the variability ranges in ERHI prediction. In addition, there was no significant loss in the reliability of streamflow predictions when targeting ERHIs, while precision improved and bias was reduced; moreover, parametric uncertainty shrank drastically when linking multi-objective calibration and Bayesian parameter estimation. Still, the representation of low-flow magnitude and timing, rate of change, and duration and frequency of extreme flows remained limited. These limitations, expressed in terms of bias and interannual variability, were mainly attributed to the hydrological model's structural inadequacies. Future research should therefore involve revising hydrological models to better describe the ecohydrological characteristics of riverine systems.
- Title
- INTERPRETABLE ARTIFICIAL INTELLIGENCE USING NONLINEAR DECISION TREES
- Creator
- Dhebar, Yashesh Deepakkumar
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- Recent years have seen massive application of artificial intelligence (AI) to automate tasks across various domains. The back-end mechanism through which this automation occurs is generally a black box. Popular black-box AI methods used to solve automation tasks include decision trees (DT), support vector machines (SVM), and artificial neural networks (ANN). In the past several years, these black-box AI methods have shown promising performance and have been widely applied and researched across industry and academia. While black-box AI models have been shown to achieve high performance, the inherent mechanism by which a decision is made is hard to comprehend. This lack of interpretability and transparency makes black-box AI methods less trustworthy; in addition, black-box AI models lack the ability to provide valuable insights regarding the task at hand. Following these limitations, a natural research direction of developing interpretable and explainable AI models has emerged and has gained active attention in the machine learning and AI community over the past three years. In this dissertation, we focus on interpretable AI solutions currently being developed at the Computational Optimization and Innovation Laboratory (COIN Lab) at Michigan State University. We propose a nonlinear decision tree (NLDT) based framework to produce transparent AI solutions for automation tasks related to classification and control. Recent advancements in nonlinear optimization enable us to efficiently derive interpretable AI solutions for various automation tasks. The interpretable and transparent AI models induced using customized optimization techniques show similar or better performance compared to complex black-box AI models across most benchmarks. The results are promising and provide directions for future studies in developing efficient transparent AI models.
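The abstract does not give NLDT's exact split functional, but the idea of a nonlinear decision tree node can be made concrete with a hypothetical example: instead of the axis-parallel threshold of a classical decision tree, each node evaluates a short, human-readable nonlinear expression over the features.

```python
import numpy as np

def nonlinear_split(x, w, p, bias):
    """Hypothetical nonlinear split rule for a single tree node: route
    the sample by the sign of a weighted power-law combination of its
    features. Illustrative only; not NLDT's actual functional form."""
    score = float(np.dot(w, np.sign(x) * np.abs(x) ** p) + bias)
    return "left" if score <= 0 else "right"

# one interpretable rule:  1.5*x0^2 - x1 - 0.5 <= 0  -> left branch
print(nonlinear_split(np.array([0.4, 1.0]),
                      w=np.array([1.5, -1.0]),
                      p=np.array([2.0, 1.0]),
                      bias=-0.5))   # -> "left"
```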
- Title
- Investigating the Role of Sensor Based Technologies to Support Domestic Activities in Sub-Saharan Africa
- Creator
- Chidziwisano, George Hope
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- In sub-Saharan Africa (SSA), homes face various challenges, including insecurity, unreliable power supply, and extreme weather conditions. While the use of sensor-based technologies is increasing in industrialized countries, it is unclear how they can be used to support domestic activities in SSA. The availability of low-cost sensors and the widespread adoption of mobile phones present an opportunity to collect real-time data and utilize proactive methods to monitor these challenges. This dissertation presents three studies that build upon each other to explore the role of sensor-based technologies in SSA. Using a technology probes method, I developed three sensor-based systems that support domestic security (M-Kulinda), power blackout monitoring (GridAlert), and poultry farming (NkhukuApp). I deployed M-Kulinda in 20 Kenyan homes, GridAlert in 18 Kenyan homes, and NkhukuProbe in 15 Malawian home-based chicken coops, each for one month, and used interview, observation, diary, and data logging methods to understand participants' experiences with the probes. Findings from these studies suggest that people in Kenya and Malawi want to incorporate sensor-based technologies into their everyday activities, and that they quickly find unexpected ways to use them. Participants' interactions with the probes prompted detailed reflections about how they would integrate sensor-based technologies into their homes (e.g., monitoring non-digital tools). These reflections are useful for motivating new design concepts in HCI, and I use them to motivate a discussion of unexplored areas that could benefit from sensor-based technologies, along with recommendations for designing sensor-based technologies that support activities in some Kenyan and Malawian homes. This research contributes to HCI by providing design implications for sensor-based applications in Kenyan and Malawian homes, employing a technology probes method in a non-traditional context, and developing prototypes of three novel systems.