Search results
(1 - 20 of 46)
- Title
- Variational Bayes inference of Ising models and their applications
- Creator
- Kim, Minwoo
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Ising models, which originated in statistical physics, have been widely used in modeling spatial data and computer vision problems. However, statistical inference for this model and its application to many practical fields remain challenging due to the intractable nature of the normalizing constant in the likelihood. This dissertation consists of two main themes: (1) parameter estimation of the Ising model and (2) structured variable selection based on the Ising model using variational Bayes (VB).

In Chapter 1, we review the background, research questions, and development of the Ising model, variational Bayes, and other statistical concepts. An Ising model deals with a binary random vector in which each component depends on its neighbors. Various versions of the Ising model exist, depending on the parameterization and neighboring structure. In Chapter 2, for a two-parameter Ising model, we describe a novel procedure for parameter estimation based on VB that is computationally efficient and accurate compared to existing methods. The traditional pseudo maximum likelihood estimate (PMLE) provides accurate results only for a small number of neighbors. A Bayesian approach based on Markov chain Monte Carlo (MCMC) performs better even with a large number of neighbors, but its computational cost in terms of time is high. Accordingly, we propose a VB method with two variational families, the mean-field (MF) Gaussian family and the bivariate normal (BN) family. Extensive simulation studies validate the efficacy of both families. Using our VB methods, computing times are remarkably decreased without deterioration in accuracy, and in some scenarios the output is even more accurate. In addition, we demonstrate theoretical properties of the proposed VB method under the MF family. The main theoretical contribution of our work lies in establishing the consistency of the variational posterior for the Ising model with the true likelihood replaced by the pseudo-likelihood. Under certain conditions, we first derive the rates at which the true posterior based on the pseudo-likelihood concentrates around εn-shrinking neighborhoods of the true parameters. With a suitable bound on the Kullback-Leibler distance between the true and the variational posterior, we next establish the rate of contraction for the variational posterior and demonstrate that it also concentrates around εn-shrinking neighborhoods of the true parameter.

In Chapter 3, we propose a Bayesian variable selection technique for a regression setup in which the regression coefficients exhibit structural dependency. We employ spike and slab priors on the regression coefficients as follows: (i) to capture the intrinsic structure, we first place an Ising prior on latent binary variables; if a latent variable takes the value one, the corresponding regression coefficient is active, otherwise it is inactive. (ii) Following the spike and slab construction, we place Gaussian (slab) priors on the active coefficients, while inactive coefficients are zero with probability one (spike).
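The Chapter 3 prior construction can be summarized in a short schematic. The notation below (inclusion indicators γ, slab variance σ², Ising parameters a and b) is illustrative and may differ from the dissertation's exact parameterization:

```latex
% Latent inclusion indicators follow an Ising prior; each regression
% coefficient gets a spike-and-slab prior conditional on its indicator.
\begin{align*}
p(\gamma) &\propto \exp\Big( a \sum_{j} \gamma_j + b \sum_{j \sim k} \gamma_j \gamma_k \Big),
  \qquad \gamma_j \in \{0,1\}, \\
\beta_j \mid \gamma_j &\sim \gamma_j \, \mathcal{N}(0, \sigma^2) + (1 - \gamma_j) \, \delta_0 ,
\end{align*}
```

where j ∼ k ranges over neighboring pairs of coefficients and δ0 denotes a point mass at zero.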
- Title
- Towards Robust and Secure Face Recognition : Defense Against Physical and Digital Attacks
- Creator
- Deb, Debayan
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
The accuracy, usability, and touchless acquisition of state-of-the-art automated face recognition (AFR) systems have led to their ubiquitous adoption in a plethora of domains, including mobile phone unlock, access control systems, and payment services. Despite impressive recognition performance, prevailing AFR systems remain vulnerable to the growing threat of face attacks, which can be launched in both physical and digital domains. Face attacks can be broadly classified into three attack categories: (i) spoof attacks: artifacts in the physical domain (e.g., 3D masks, eye glasses, replayed videos); (ii) adversarial attacks: imperceptible noise added to probes for evading AFR systems; and (iii) digital manipulation attacks: entirely or partially modified photo-realistic faces created using generative models. Each of these categories is composed of different attack types. For example, each spoof medium, e.g., 3D mask and makeup, constitutes one attack type. Likewise, in adversarial and digital manipulation attacks, each attack model, designed with unique objectives and losses, may be considered one attack type. Thus, the attack categories and types form a 2-layer tree structure encompassing the diverse attacks, and such a tree will inevitably grow in the future. Given the growing dissemination of "fake news" and "deepfakes", the research community and social media platforms alike are pushing towards generalizable defenses against continuously evolving and sophisticated face attacks. In this dissertation, we first propose a set of defense methods that achieve state-of-the-art performance in detecting attack types within individual attack categories, both physical (e.g., face spoofs) and digital (e.g., adversarial faces and digital manipulation), and then introduce a method for simultaneously safeguarding against each attack.

First, in an effort to impart generalizability and interpretability to face spoof detection systems, we propose a new face anti-spoofing framework specifically designed to detect unknown spoof types, namely the Self-Supervised Regional Fully Convolutional Network (SSR-FCN), which is trained to learn local discriminative cues from a face image in a self-supervised manner. The proposed framework improves generalizability while maintaining the computational efficiency of holistic face anti-spoofing approaches (< 4 ms on an NVIDIA GTX 1080Ti GPU). The proposed method is also interpretable, since it localizes which parts of the face are labeled as spoofs. Experimental results show that SSR-FCN achieves a True Detection Rate (TDR) of 65% @ 2.0% False Detection Rate (FDR) when evaluated on a dataset comprising 13 different spoof types under unknown attacks, while achieving competitive performance on standard benchmark face anti-spoofing datasets (Oulu-NPU, CASIA-MFSD, and Replay-Attack).

Next, we address the problem of defending against adversarial attacks. We first propose AdvFaces, an automated adversarial face synthesis method that learns to generate minimal perturbations in the salient facial regions. Once AdvFaces is trained, it can automatically evade state-of-the-art face matchers with attack success rates as high as 97.22% and 24.30% at 0.1% FAR for obfuscation and impersonation attacks, respectively. We then propose a new self-supervised adversarial defense framework, namely FaceGuard, that can automatically detect, localize, and purify a wide variety of adversarial faces without utilizing pre-computed adversarial training samples. FaceGuard automatically synthesizes diverse adversarial faces, enabling a classifier to learn to distinguish them from bona fide faces. Concurrently, a purifier attempts to remove the adversarial perturbations in the image space. FaceGuard achieves 99.81%, 98.73%, and 99.35% detection accuracy on LFW, CelebA, and FFHQ, respectively, on six unseen adversarial attack types.

Finally, we take the first steps towards safeguarding AFR systems against face attacks in both physical and digital domains. We propose a new unified face attack detection framework, namely UniFAD, which automatically clusters similar attacks and employs a multi-task learning framework to learn salient features to distinguish between bona fides and coherent attack types. The proposed UniFAD can detect face attacks from 25 attack types across all 3 attack categories with TDR = 94.73% @ 0.2% FDR on a large fake face dataset, namely GrandFake. Further, UniFAD can identify whether attacks are adversarial, digitally manipulated, or contain spoof artifacts, with 97.37% classification accuracy.
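The 2-layer category/type taxonomy described above can be pictured as a small data structure. The type names below are illustrative examples only, not the exact 25-type list used with GrandFake:

```python
# Illustrative 2-layer attack taxonomy (category -> attack types).
ATTACK_TAXONOMY = {
    "spoof": ["3d_mask", "makeup", "replay_video", "printed_photo"],
    "adversarial": ["obfuscation_model_a", "impersonation_model_b"],
    "digital_manipulation": ["face_swap", "attribute_edit", "full_synthesis"],
}

def category_of(attack_type: str) -> str:
    """Return the top-level category for a given attack type."""
    for category, types in ATTACK_TAXONOMY.items():
        if attack_type in types:
            return category
    raise KeyError(f"unknown attack type: {attack_type}")

print(category_of("replay_video"))  # -> spoof
```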
- Title
- Towards Robust and Reliable Communication for Millimeter Wave Networks
- Creator
- Zarifneshat, Masoud
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Future generations of wireless networks benefit significantly from millimeter wave (mmW) technology, with frequencies ranging from about 30 GHz to 300 GHz. Specifically, the fifth generation of wireless networks has already adopted mmW technology, and the capacity requirements defined for 6G will also benefit from the mmW spectrum. Despite its attractions, the mmW spectrum has inherent propagation properties that introduce challenges. The first is that free-space path loss in mmW is more severe than in the sub-6 GHz band. To make the mmW signal travel farther, communication systems need phased array antennas to concentrate the signal power in a limited direction in space at any given time. Directional communication can incur high overhead because the system must probe the space to find signal paths: for efficient communication in the mmW spectrum, the transmitter and the receiver should align their beams on strong signal paths, which is a high-overhead task. The second is the low diffraction of the mmW signal. Low diffraction allows almost any object, including the human body, to block the mmW signal and degrade link quality. Avoiding and recovering from blockage in mmW communications, especially in dynamic environments, is particularly challenging because the mmW channel changes quickly.

Due to the unique characteristics of mmW propagation, traditional user association methods perform poorly in the mmW spectrum. Therefore, we propose user association methods that consider the inherent propagation characteristics of the mmW signal. We first propose a method that collects the history of blockage incidents throughout the network and exploits these historical incidents to associate user equipment with the base station that has a lower blockage possibility. Simulation results show that our proposed algorithm performs better in terms of link quality and blockage rate in the network. User association based on only one objective may deteriorate other objectives, so we formulate a biobjective optimization problem that considers both load balance and blockage possibility in the network, and we conduct a Lagrangian dual analysis to decrease time complexity. The results show that our solution to the biobjective optimization problem yields a better outcome than optimizing each objective alone.

After investigating the user association problem, we further look into the problem of maintaining a robust link between a transmitter and a receiver. The directional propagation of the mmW signal creates the opportunity to exploit multipath for a robust link. The main reasons for link quality degradation are blockage and link movement. We devise a learning-based prediction framework that uses diffraction values to classify link blockage and link movement efficiently and quickly so that appropriate mitigating actions can be taken. Simulations show that the prediction framework can predict blockage with close to 90% accuracy, eliminating the need for time-consuming methods to discriminate between link movement and link blockage. After detecting the reason for the link degradation, the system needs to perform beam alignment on the updated mmW signal paths, which is itself a high-overhead task. We propose using signaling in another frequency band to discover the paths surrounding a receiver operating in the mmW spectrum. In this way, the receiver does not have to perform an expensive beam scan in the mmW band. Our experiments with off-the-shelf devices show that we can use the paths observed in a non-mmW frequency band to align the beams in the mmW band.

In this dissertation, we provide solutions to fundamental problems in mmW communication. We propose a user association method designed for mmW networks that accounts for the challenges of the mmW signal. A closed-form solution for a biobjective optimization problem that optimizes both blockage and load balance in the network is also provided. Moreover, we show that we can efficiently use out-of-band signals to exploit the multipath created in mmW communication. Future research directions include applying the methods proposed in this dissertation to classic wireless networking problems that persist in the mmW spectrum.
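As a rough illustration of the biobjective user-association problem described above, one plausible formulation is sketched below. The notation is illustrative only (x_ub is a binary association variable, p_ub an estimated blockage possibility for the link, and L̄ the average per-station load); the dissertation's actual model and its Lagrangian treatment may differ:

```latex
\begin{align*}
\min_{x} \quad & \Big( \underbrace{\sum_{b} \big( \textstyle\sum_{u} x_{ub} - \bar{L} \big)^2}_{\text{load imbalance}} ,\;\;
                 \underbrace{\sum_{u,b} x_{ub}\, p_{ub}}_{\text{blockage possibility}} \Big) \\
\text{s.t.} \quad & \sum_{b} x_{ub} = 1 \quad \forall u, \qquad x_{ub} \in \{0,1\}.
\end{align*}
```

A weighted-sum or Lagrangian relaxation of such a formulation is one standard way to trade the two objectives off against each other.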
- Title
- Towards Accurate Ranging and Versatile Authentication for Smart Mobile Devices
- Creator
- Li, Lingkun
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
The Internet of Things (IoT) has developed rapidly in recent years. Smart devices such as smartphones, smartwatches, and smart assistants, which are equipped with smart chips as well as sensors, provide users with many easy-to-use functions and a more convenient life. In this dissertation, we carefully studied the birefringence of transparent tape, the nonlinear effects of microphones, and the phase characteristics of reflected ultrasound, and made use of these effects to design three systems, RainbowLight, Patronus, and BreathPass, which provide users with accurate localization, privacy protection, and authentication, respectively.

RainbowLight leverages the direction-dependent spectrum generated by polarized light passing through a birefringent material, i.e., transparent tape, to provide a localization service. We characterize the relationship between the observation direction, light interference, and the resulting spectrum, and use it to calculate the direction to a chip from a photo containing that chip. With multiple chips, RainbowLight uses a direction-intersection-based method to derive the location. In this dissertation, we build the theoretical basis for using polarized light and the birefringence phenomenon to perform localization. Based on the theoretical model, we design and implement RainbowLight on a mobile device and evaluate its performance. The evaluation results show that RainbowLight achieves a median error of 1.68 cm on the X-axis, 2 cm on the Y-axis, 5.74 cm on the Z-axis, and 7.04 cm overall. It is the first system that can perform visible light positioning using only reflected light in the space.

Patronus prevents unauthorized speech recording by leveraging the nonlinear effects of commercial off-the-shelf microphones. The inaudible ultrasound scramble interferes with recordings made by unauthorized devices and can be canceled on authorized devices through an adaptive filter. In this dissertation, we carefully studied the nonlinear effects of ultrasound on commercial microphones. Based on this study, we proposed an optimized configuration for generating the scramble. It provides privacy protection against unauthorized recordings without disturbing normal conversations. We designed and implemented a system including hardware and software components. Experimental results show that only 19.7% of the words protected by Patronus' scramble can be recognized by unauthorized devices. Furthermore, authorized recordings have a 1.6x higher Perceptual Evaluation of Speech Quality (PESQ) score and, on average, 50% lower speech recognition error rates than unauthorized recordings.

BreathPass uses speakers to emit ultrasound signals. The signals are reflected off the chest wall and abdomen and travel back to the microphone, which records them. The system then extracts fingerprints from the breathing pattern and uses these fingerprints to perform authentication. In this dissertation, we characterized the challenges of conducting authentication with breathing patterns. After addressing these challenges, we designed such a system and implemented a proof-of-concept application on the Android platform. We also conducted comprehensive experiments to evaluate the performance under different scenarios. BreathPass achieves an overall accuracy of 83%, a true positive rate of 73%, and a false positive rate of 5%, according to the performance evaluation results.

In general, this dissertation provides enhanced ranging and versatile authentication systems for the Internet of Things.
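The microphone nonlinearity that Patronus exploits can be seen from a standard second-order model of a microphone's response (a textbook illustration, not the dissertation's exact derivation). Two ultrasound tones at f1 and f2, both inaudible, leave an audible trace at their difference frequency:

```latex
\begin{align*}
x(t) &= \cos(2\pi f_1 t) + \cos(2\pi f_2 t), \\
y(t) &= A_1\, x(t) + A_2\, x(t)^2 + \cdots, \\
A_2\, x(t)^2 &\;\supset\; A_2 \cos\big(2\pi (f_1 - f_2)\, t\big).
\end{align*}
```

For example, tones near 40 kHz and 38 kHz would leave an interfering component at 2 kHz inside the recorded audio band, even though neither tone is audible on its own.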
- Title
- The Evolutionary Origins of Cognition : Understanding the early evolution of biological control systems and general intelligence
- Creator
- Carvalho Pontes, Anselmo
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
In the last century, we have made great strides towards understanding natural cognition and recreating it artificially. However, most cognitive research is still guided by an inadequate theoretical framework that equates cognition to a computer system executing a data processing task. Cognition, whether natural or artificial, is not a data processing system; it is a control system. At cognition's core is a value system that allows it to evaluate current conditions and decide among two or more courses of action. Memory, learning, planning, and deliberation, rather than being essential cognitive abilities, are features that evolved over time to support the primary task of deciding “what to do next”. I used digital evolution to recreate the early stages in the evolution of natural cognition, including the ability to learn. Interestingly, I found that cognition evolves in a predictable manner, with more complex abilities evolving in stages, by building upon previous simpler ones. I initially investigated the evolution of dynamic foraging behaviors among the first animals known to have a central nervous system, Ediacaran microbial mat miners. I then followed this up by evolving more complex forms of learning. I soon encountered practical limitations of the current methods, including an exponential demand for computational resources and genetic representations that were not conducive to further scaling. This type of complexity barrier has been a recurrent issue in digital evolution. Nature, however, is not limited in the same ways; through evolution, it has created a language to express robust, modular, and flexible control systems of arbitrary complexity and apparently open-ended evolvability. The essential features of this language can be captured in a digital evolution platform. As an early demonstration of this, I evolved biologically plausible regulatory systems for virtual cyanobacteria. These systems regulate the cells' growth, photosynthesis, and replication given the daily light cycle, the cell's energy reserves, and levels of stress. Although simple, this experimental system displays dynamics and decision-making mechanisms akin to biology, with promising potential for open-ended evolution of cognition towards general intelligence.
- Title
- The Evolution of Fundamental Neural Circuits for Cognition in Silico
- Creator
- Tehrani-Saleh, Ali
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Despite decades of research on intelligence and fundamental components of cognition, we still know very little about the structure and functionality of nervous systems. Questions in cognition and intelligent behavior are addressed by scientists in the fields of behavioral biology, neuroscience, psychology, and computer science. Yet it is difficult to reverse engineer the sophisticated intelligent behaviors observed in animals, and even more difficult to understand their underlying mechanisms. In this dissertation, I use a recently developed neuroevolution platform, called Markov brain networks, in which Darwinian selection is used to evolve both the structure and functionality of digital brains. I use this platform to study some of the most fundamental cognitive neural circuits: 1) visual motion detection, 2) collision avoidance based on visual motion cues, 3) sound localization, and 4) time perception. In particular, I investigate both the selective pressures and environmental conditions involved in the evolution of these cognitive components, as well as the circuitry and computations behind them. This dissertation lays the groundwork for an evolutionary agent-based method to study the neural circuits for cognition in silico.
- Title
- TEACHERS IN SOCIAL MEDIA : A DATA SCIENCE PERSPECTIVE
- Creator
- Karimi, Hamid
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Social media has become an integral part of human life in the 21st century. The number of social media users was estimated to be around 3.6 billion individuals in 2020. Social media platforms (e.g., Facebook) have facilitated interpersonal communication, the diffusion of information, and the creation of groups and communities, to name a few. As far as education systems are concerned, online social media has transformed and connected traditional social networks within the schoolhouse to a broader and expanded world outside. In such an expanded virtual space, teachers engage in various activities within their communities, e.g., exchanging instructional resources, seeking new teaching methods, and engaging in online discussions. Therefore, given the importance of teachers in social media and its tremendous impact on PK-12 education, in this dissertation we investigate teachers in social media from a data science perspective. Our investigation in this direction is essentially an interdisciplinary endeavor bridging modern data science and education. In particular, we have made three contributions, as briefly discussed in the following.

Current studies of teachers in social media rely on a small number of surveyed teachers, while thousands of other teachers are active on social media. This hinders large-scale data-driven studies of teachers in social media. Aiming to overcome this challenge and further facilitate such data-driven studies, we propose a novel method that automatically identifies teachers on Pinterest, an image-based social media platform popular among teachers. In this framework, we formulate the teacher identification problem as positive-unlabelled (PU) learning, where positive samples are surveyed teachers and unlabelled samples are their online friends. Using our framework, we build the largest dataset of teachers on Pinterest.

With this dataset at our disposal, we perform an exploratory analysis of teachers on Pinterest while considering their genders. Our analysis incorporates two crucial aspects of teachers in social media. First, we investigate various online activities of male and female teachers, e.g., the topics and sources of their curated resources and the professional language employed to describe those resources. Second, we investigate male and female teachers in the context of the social network (the graph) they belong to, e.g., structural centrality and gender homophily. Our analysis and findings in this part of the dissertation can serve as a valuable reference for many entities concerned with teachers' gender, e.g., principals and state and federal governments.

Finally, in the third part of the dissertation, we shed light on the diffusion of teacher-curated resources on Pinterest. First, we introduce three measures to characterize the diffusion process. Then, we investigate these three measures while considering two crucial characteristics of a resource, i.e., its topic and source. Ultimately, we investigate how teacher attributes (e.g., the number of friends) affect the diffusion of their resources. The conducted diffusion analysis is the first of its kind and offers a deeper understanding of the complex mechanism driving the diffusion of resources curated by teachers on Pinterest.
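A minimal sketch of the positive-unlabelled formulation described above, using the classic Elkan-Noto correction on synthetic data. The feature matrix, the labelled/unlabelled indicator s, and the train/holdout split are illustrative stand-ins, not the dissertation's actual Pinterest pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def fit_pu_classifier(X, s):
    """Fit a 'labelled vs. unlabelled' model and estimate c = P(s=1 | y=1)."""
    X_tr, X_hold, s_tr, s_hold = train_test_split(X, s, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)
    # c is estimated as the mean score of the held-out labelled positives.
    c = clf.predict_proba(X_hold[s_hold == 1])[:, 1].mean()
    return clf, c

def predict_positive_proba(clf, c, X):
    """P(y=1 | x) ~ P(s=1 | x) / c under the 'selected completely at random' assumption."""
    return np.clip(clf.predict_proba(X)[:, 1] / c, 0.0, 1.0)

# Synthetic example: 200 labelled positives (surveyed), 2000 unlabelled samples (friends).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 1.0, (200, 5)), rng.normal(0.0, 1.0, (2000, 5))])
s = np.concatenate([np.ones(200, dtype=int), np.zeros(2000, dtype=int)])
clf, c = fit_pu_classifier(X, s)
scores = predict_positive_proba(clf, c, X)
print("estimated labelling propensity c:", round(float(c), 3))
```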
- Title
- Some contributions to semi-supervised learning
- Creator
- Mallapragada, Paven Kumar
- Date
- 2010
- Collection
- Electronic Theses & Dissertations
- Title
- Solving Computationally Expensive Problems Using Surrogate-Assisted Optimization : Methods and Applications
- Creator
- Blank, Julian
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Optimization is omnipresent in many research areas and has become a critical component across industries. However, while researchers often focus on a theoretical analysis or convergence proof of an optimization algorithm, practitioners face various other challenges in real-world applications. This thesis focuses on one of the biggest challenges when applying optimization in practice: computational expense, often caused by the necessity of calling a third-party software package. To address the time-consuming evaluation, we propose a generalizable probabilistic surrogate-assisted framework that dynamically incorporates predictions of approximation models. Besides the framework's capability of handling multiple objectives and constraints simultaneously, its novelty lies in its applicability to all kinds of metaheuristics. Moreover, multiple disciplines are often involved in optimization, resulting in different types of software packages being utilized for performance assessment. The resulting optimization problem therefore typically consists of multiple independently evaluable objectives and constraints with varying computational expense. Besides providing a taxonomy describing different ways of handling independent evaluation calls, this thesis also proposes a methodology for handling inexpensive constraints with expensive objective functions and a more generic concept for any type of heterogeneously expensive optimization problem. Furthermore, two case studies of real-world optimization problems from the automobile industry are discussed, a blueprint for solving optimization problems in practice is provided, and a widely-used optimization framework focusing on multi-objective optimization (founded and maintained by the author of this thesis) is presented. Altogether, this thesis shall pave the way to solving (computationally expensive) real-world optimization problems more efficiently and bridge the gap between theory and practice.
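A minimal, generic sketch of surrogate-assisted optimization in the spirit of the framework described above (not the author's actual implementation): an expensive black-box objective is partly replaced by a Gaussian-process surrogate, and only the most promising candidate is evaluated exactly at each iteration. The toy objective and the lower-confidence-bound selection rule are illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_objective(x):          # stand-in for a costly third-party solver call
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(1)
dim, n_init, n_iter, batch = 4, 10, 20, 100

X = rng.uniform(0.0, 1.0, (n_init, dim))        # initial design, evaluated exactly
y = expensive_objective(X)

for _ in range(n_iter):
    surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    candidates = rng.uniform(0.0, 1.0, (batch, dim))       # cheap candidate generation
    mean, std = surrogate.predict(candidates, return_std=True)
    best = candidates[np.argmin(mean - 1.0 * std)]          # lower-confidence-bound pick
    X = np.vstack([X, best])
    y = np.append(y, expensive_objective(best[None, :]))    # one true evaluation per iteration

print("best value found:", y.min())
```

In a metaheuristic setting, the same idea applies: the surrogate pre-screens the offspring produced by the algorithm, and only a fraction of them reach the expensive evaluator.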
- Title
- Semi-supervised learning with side information : graph-based approaches
- Creator
- Liu, Yi
- Date
- 2007
- Collection
- Electronic Theses & Dissertations
- Title
- SIGN LANGUAGE RECOGNIZER FRAMEWORK BASED ON DEEP LEARNING ALGORITHMS
- Creator
- Akandeh, Atra
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
According to the World Health Organization (WHO, 2017), 5% of the world's population have hearing loss. Most people with hearing disabilities communicate via sign language, which hearing people find extremely difficult to understand. To facilitate communication for deaf and hard-of-hearing people, developing an efficient communication system is a necessity. There are many challenges associated with the Sign Language Recognition (SLR) task, namely lighting conditions, complex backgrounds, signer body postures, camera position, occlusion, complexity and large variation in hand posture, lack of word alignment, coarticulation, etc.

Sign Language Recognition has been an active domain of research since the early 90s. However, due to constraints on computational resources and sensing technology, limited advancement has been achieved over the years. Existing sign language translation systems can mostly translate only a single sign at a time, which makes them less effective in daily-life interaction. This work develops a novel sign language recognition framework using deep neural networks, which directly maps videos of sign language sentences to sequences of gloss labels by emphasizing critical characteristics of the signs and injecting domain-specific expert knowledge into the system. The proposed model also allows for combining data from various sources, thus combating the limited data resources in the SLR field.
- Title
- Robust Learning of Deep Neural Networks under Data Corruption
- Creator
- Liu, Boyang
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Training deep neural networks in the presence of corrupted data is challenging, as the corrupted data points may significantly impact the generalization performance of the models. Unfortunately, the data corruption issue widely exists in many application domains, including but not limited to healthcare, environmental sciences, autonomous driving, and social media analytics. Although there have been some previous studies that aim to enhance the robustness of machine learning models against data corruption, most of them either lack theoretical robustness guarantees or are unable to scale to the millions of model parameters governing deep neural networks. The goal of this thesis is to design robust machine learning algorithms that 1) effectively deal with different types of data corruption, 2) have sound theoretical guarantees on robustness, and 3) scale to the large number of parameters in deep neural networks.

There are two general approaches to enhance model robustness against data corruption. The first is to detect and remove the corrupted data, while the second is to design robust learning algorithms that can tolerate some fraction of corrupted data. In this thesis, I developed two robust unsupervised anomaly detection algorithms and two robust supervised learning algorithms for corrupted supervision and backdoor attacks. Specifically, in Chapter 2, I proposed the Robust Collaborative Autoencoder (RCA) approach to enhance the robustness of vanilla autoencoder methods against natural corruption. In Chapter 3, I developed Robust RealNVP, a robust density estimation technique for unsupervised anomaly detection tasks given concentrated anomalies. Chapter 4 presents the Provable Robust Learning (PRL) approach, a robust algorithm against agnostic corrupted supervision. In Chapter 5, a meta-algorithm to defend against backdoor attacks is proposed by exploring the connection between label corruption and backdoor data poisoning attacks. Extensive experiments on multiple benchmark datasets demonstrate the robustness of the proposed algorithms under different types of corruption.
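A minimal numpy sketch of one robust-training idea related to the second approach above: at each gradient step, drop the fraction of samples with the largest losses (presumed corrupted) before updating the model. This illustrates generic loss trimming on a toy linear model; it is not the dissertation's PRL algorithm:

```python
import numpy as np

def trimmed_linear_regression(X, y, corrupt_frac=0.2, lr=0.05, epochs=200):
    n, d = X.shape
    w = np.zeros(d)
    keep = int(n * (1.0 - corrupt_frac))         # number of samples trusted per step
    for _ in range(epochs):
        residual = X @ w - y
        losses = residual ** 2
        trusted = np.argsort(losses)[:keep]      # indices with the smallest losses
        grad = X[trusted].T @ residual[trusted] / keep
        w -= lr * grad
    return w

# Synthetic example: 20% of labels are grossly corrupted.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=500)
y[:100] += rng.normal(10.0, 5.0, size=100)       # corrupted supervision
print(trimmed_linear_regression(X, y).round(2))  # close to w_true despite corruption
```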
- Title
- Replaying Life's Virtual Tape : Examining the Role of History in Experiments with Digital Organisms
- Creator
- Bundy, Jason Nyerere
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Evolution is a complex process with a simple recipe. Evolutionary change involves three essential “ingredients” interacting over many generations: adaptation (selection), chance (random variation), and history (inheritance). In 1989's Wonderful Life, the late paleontologist Stephen Jay Gould advocated for the importance of historical contingency—the way unique events throughout history influence future possibilities—using a clever thought experiment of “replaying life's tape”. But not everyone was convinced. Some believed that chance was the primary driver of evolutionary change, while others insisted that natural selection was the most powerful influence. Since then, “replaying life's tape” has become a core method in experimental evolution for measuring the relative contributions of adaptation, chance, and history. In this dissertation, I focus on the effects associated with history in evolving populations of digital organisms—computer programs that self-replicate, mutate, compete, and evolve in virtual environments.

In Chapter 1, I discuss the philosophical significance of Gould's thought experiment and its influence on experimental methods. I argue that his thought experiment was a challenge to anthropocentric reasoning about natural history that is still popular, particularly outside of the scientific community. In this regard, it was his way of advocating for a “radical” view of evolution.

In Chapter 2, Richard Lenski, Charles Ofria, and I describe a two-phase, virtual, “long-term” evolution experiment with digital organisms using the Avida software. In Phase I, we evolved 10 replicate populations, in parallel, from a single genotype for around 65,000 generations. This part of the experiment is similar to the design of Lenski's E. coli Long-term Evolution Experiment (LTEE). We isolated the dominant genotype from each population around 3,000 generations into Phase I (shallow history) and then again at the end of Phase I (deep history). In Phase II, we evolved 10 populations from each of the genotypes we isolated from Phase I in two new environments, one similar and one dissimilar to the old environment used for Phase I. Following Phase II, we estimated the contributions of adaptation, chance, and history to the evolution of fitness and genome length in each new environment. This unique experimental design allowed us to see how the contributions of adaptation, chance, and history changed as we extended the depth of history from Phase I. We were also able to determine whether the results depended on the extent of environmental change (similar or dissimilar new environment).

In Chapter 3, we report an extended analysis of the experiment from the previous chapter to further examine how extensive adaptation to the Phase I environment shaped the evolution of replicates during Phase II. We show how the form of pleiotropy (antagonistic or synergistic) between the old (Phase I) and new (Phase II) habitats was influenced by the depth of history from Phase I (shallow or deep) and the extent of environmental change (similar or dissimilar new environment).

In the final chapter, Zachary Blount, Richard Lenski, and I describe an exercise we developed using the educational version of Avida (Avida-ED). The exercise features a two-phase, “replaying life's tape” activity. Students are able to explore how the unique history of founders that we pre-evolved during Phase I influences the acquisition of new functions by descendant populations during Phase II, which the students carry out themselves during the activity.
- Title
- Predicting the Properties of Ligands Using Molecular Dynamics and Machine Learning
- Creator
- Donyapour, Nazanin
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
The discovery and design of new drugs require extensive experimental assays that are usually very expensive and time-consuming. To cut down the cost and time of the drug development process and help design effective drugs more efficiently, various computational methods have been developed that are referred to collectively as in silico drug design. These in silico methods can be used not only to determine compounds that can bind to a target receptor but also to determine whether compounds show ideal drug-like properties. I have provided solutions to these problems by developing novel methods for molecular simulation and molecular property prediction.

Firstly, we developed a new enhanced sampling MD algorithm called Resampling of Ensembles by Variation Optimization, or “REVO”, that can generate binding and unbinding pathways of ligand-target interactions. These pathways are useful for calculating transition rates and Residence Times (RT) of protein-ligand complexes. This can be particularly useful for drug design, as studies of some systems show that drug efficacy correlates more with RT than with binding affinity. The method is generally useful for generating long-timescale transitions in complex systems, including alternate ligand binding poses and protein conformational changes.

Secondly, we developed a technique we refer to as “ClassicalGSG” to predict the partition coefficient (log P) of small molecules. log P is one of the main factors in determining the drug-likeness of a compound, as it helps determine bioavailability, solubility, and membrane permeability. This method has been very successful compared to other methods in the literature.

Finally, we developed a method called “Flexible Topology” that we hope can eventually be used to screen a database of potential ligands while considering ligand-induced conformational changes. After molecules with drug-like properties are discovered in the drug design pipeline, Virtual Screening (VS) methods are employed to perform an extensive search over drug databases with hundreds of millions of compounds to find candidates that bind tightly to a molecular target. However, for this to be computationally tractable, typically only static snapshots of the target are used, which cannot respond to the presence of the drug compound. To efficiently capture drug-target interactions during screening, we developed a machine-learning algorithm that employs Molecular Dynamics (MD) simulations with a protein of interest and a set of atoms called “Ghost Particles”. During the simulation, the Flexible Topology method induces forces that constantly modify the ghost particles and optimize them toward drug-like molecules that are compatible with the molecular target.
- Title
- Optimal Learning of Deployment and Search Strategies for Robotic Teams
- Creator
- Wei, Lai
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
In the problem of optimal learning, the dilemma of exploration and exploitation stems from the fact that gathering information and exploiting it are, in many cases, two mutually exclusive activities. The key to optimal learning is to strike a balance between exploration and exploitation. The Multi-Armed Bandit (MAB) problem is a prototypical example of such an explore-exploit tradeoff, in which a decision-maker sequentially allocates a single resource by repeatedly choosing one among a set of options that provide stochastic rewards. The MAB setup has been applied to many robotics problems such as foraging, surveillance, and target search, wherein the task of the robots can be modeled as collecting stochastic rewards. The theoretical work of this dissertation is based on the MAB setup, and three problem variations are studied, namely heavy-tailed bandits, nonstationary bandits, and multi-player bandits. The first two variations capture two key features of stochastic feedback in complex and uncertain environments, heavy-tailed distributions and nonstationarity, while the last one addresses the problem of achieving coordination in uncertain environments. We design several algorithms that are robust to heavy-tailed distributions and nonstationary environments. In addition, two distributed policies that require no communication among agents are designed for multi-player stochastic bandits in a piecewise stationary environment.

MAB problems provide a natural framework to study robotic search problems. The above variations of the MAB problem map directly to robotic search tasks in which a robot team searches for a target from a fixed set of viewpoints (arms). We further focus on the class of search problems involving an unknown number of targets in a large or continuous space. We view the multi-target search problem as a hot-spot identification problem in which, instead of the global maximum of the field, all locations with a value greater than a threshold need to be identified. We consider a robot moving in 3D space with a downward-facing camera sensor. We model the robot's sensing output using a multi-fidelity Gaussian Process (GP) that systematically describes the sensing information available at different altitudes from the floor. Based on the sensing model, we design a novel algorithm that (i) addresses the coverage-accuracy tradeoff: sampling at a location farther from the floor provides a wider field of view but less accurate measurements, (ii) computes an occupancy map of the floor within a prescribed accuracy and quickly eliminates unoccupied regions from the search space, and (iii) travels efficiently to collect the required samples for target detection. We rigorously analyze the algorithm and establish formal guarantees on the target detection accuracy and the detection time.

An approach to extending the single-robot search policy to multiple robots is to partition the environment into multiple regions such that the workload is equitably distributed among all regions and then assign a robot to each region. Coverage control focuses on such equitable partitioning, and the workload is equivalent to the so-called service demands in the coverage control literature. In particular, we study the adaptive coverage control problem, in which the demands for robotic service within the environment are modeled as a GP. To optimize the coverage of service demands in the environment, the team of robots aims to partition the environment and achieve a configuration that minimizes the coverage cost, a measure of the average distance of a service demand from the nearest robot. The robots need to address the explore-exploit tradeoff: to minimize coverage cost, they need to gather information about demands within the environment, whereas information gathering deviates them from maintaining a good coverage configuration. We propose an algorithm that schedules learning and coverage epochs such that its emphasis gradually shifts from exploration to exploitation while never fully ceasing to learn. Using a novel definition of coverage regret, we analyze the algorithm and characterize its coverage performance over a finite time horizon.
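The explore-exploit tradeoff described above can be illustrated with the textbook UCB1 index policy on a toy Bernoulli bandit; this is standard material, not one of the dissertation's specific algorithms:

```python
import numpy as np

def ucb1(means, horizon=5000, seed=0):
    """Run UCB1 on a Bernoulli bandit with the given arm means."""
    rng = np.random.default_rng(seed)
    k = len(means)
    counts = np.zeros(k)
    estimates = np.zeros(k)
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:                      # play each arm once to initialize
            arm = t - 1
        else:                           # exploit the estimate plus an exploration bonus
            bonus = np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(estimates + bonus))
        reward = float(rng.random() < means[arm])
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return total_reward, counts

reward, pulls = ucb1(means=[0.2, 0.5, 0.45])
print("total reward:", reward, "pulls per arm:", pulls)
```

In the robotic search analogy, each arm corresponds to a viewpoint and each pull to a sensing action that returns a stochastic reward.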
- Title
- Online Learning Algorithms for Mining Trajectory data and their Applications
- Creator
- Wang, Ding
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Trajectories are spatio-temporal data that represent traces of moving objects, such as humans, migrating animals, vehicles, and tropical cyclones. In addition to geo-location information, trajectory data often contain other (non-spatial) features describing the states of the moving objects. The time-varying geo-location and state information collectively characterize a trajectory dataset, which can be harnessed to understand the dynamics of the moving objects. This thesis focuses on the development of efficient and accurate machine learning algorithms for forecasting the future trajectory path and state of a moving object. Although many methods have been developed in recent years, there are still numerous challenges that have not been sufficiently addressed by existing methods, which hampers their effectiveness when applied to critical applications such as hurricane prediction. These challenges include difficulties in handling concept drift, error propagation in long-term forecasts, missing values, and nonlinearities in the data. In this thesis, I present a family of online learning algorithms to address these challenges. Online learning is an effective approach, as it can efficiently fit new observations while adapting to concept drift present in the data.

First, I proposed an online learning framework called OMuLeT for long-term forecasting of the trajectory paths of moving objects. OMuLeT employs an online learning with restart strategy to incrementally update the weights of its predictive model as new observation data become available. It can also handle missing values in the data using a novel weight renormalization strategy.

Second, I introduced the OOR framework to predict the future state of the moving object. Since the state can be represented by ordinal values, OOR employs a novel ordinal loss function to train its model. In addition, the framework was extended to OOQR to accommodate a quantile loss function to improve its prediction accuracy for larger values on the ordinal scale. Furthermore, I also developed the OOR-ε and OOQR-ε frameworks to generate real-valued state predictions using the ε-insensitive loss function.

Third, I developed an online learning framework called JOHAN that simultaneously predicts the location and state of the moving object. JOHAN generates its predictions by leveraging the relationship between the state and location information. JOHAN utilizes a quantile loss function to bias the algorithm towards more accurately predicting large categorical values of the moving object's state, say, for a high-intensity hurricane.

Finally, I present a deep learning framework to capture nonlinear relationships in trajectory data. The proposed DTP framework employs a TDM approach for imputing missing values, coupled with an LSTM architecture for dynamic path prediction. In addition, the framework was extended to ODTP, which applies an online learning setting to address concept drift present in the trajectory data.

As proof of concept, the proposed algorithms were applied to the hurricane prediction task. Both OMuLeT and ODTP were used to predict the future trajectory path of a hurricane up to 48 hours of lead time. Experimental results showed that OMuLeT and ODTP outperformed various baseline methods, including the official forecasts produced by the U.S. National Hurricane Center (NHC). OOR was applied to predict the intensity of a hurricane up to 48 hours in advance. Experimental results showed that OOR outperformed various state-of-the-art online learning methods and can generate predictions close to the NHC official forecasts. Since hurricane intensity prediction is a notoriously hard problem, JOHAN was applied to improve its prediction accuracy by leveraging the trajectory information, particularly for high-intensity hurricanes that are near landfall.
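A minimal sketch of an online update with the quantile (pinball) loss, in the spirit of the quantile-based frameworks above (OOQR, JOHAN). The linear model and the toy data stream are illustrative assumptions, not the actual frameworks:

```python
import numpy as np

def pinball_gradient(y_true, y_pred, tau):
    """Subgradient of the pinball loss L_tau(y, yhat) with respect to yhat."""
    return np.where(y_true >= y_pred, -tau, 1.0 - tau)

def online_quantile_update(w, x, y_true, tau=0.9, lr=0.01):
    """One online step: predict, observe the true value, take a subgradient step."""
    y_pred = float(w @ x)
    w = w - lr * pinball_gradient(y_true, y_pred, tau) * x
    return w, y_pred

# Toy stream: track the tau-quantile of a noisy linear signal.
rng = np.random.default_rng(0)
w = np.zeros(3)
for _ in range(5000):
    x = rng.normal(size=3)
    y = 2.0 * x[0] - x[1] + rng.normal()
    w, _ = online_quantile_update(w, x, y, tau=0.9)
print("learned weights:", w.round(2))
```

Choosing tau close to 1 penalizes under-prediction more heavily, which is what biases such a model toward getting large values (e.g., high-intensity states) right.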
- Title
- OPTIMIZATION OF LARGE SCALE ITERATIVE EIGENSOLVERS
- Creator
- Afibuzzaman, Md
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Sparse matrix computations, in the form of solvers for systems of linear equations, eigenvalue problems, or matrix factorizations, constitute the main kernel in problems from fields as diverse as computational fluid dynamics, quantum many-body problems, machine learning, and graph analytics. Iterative eigensolvers have been preferred over direct methods because direct methods are not feasible for industrial-sized matrices. Although dense linear algebra libraries like BLAS, LAPACK, and ScaLAPACK are well established, and vendor-optimized implementations such as Intel MKL or Cray LibSci exist, the same is not true for sparse linear algebra, which lags far behind. The main reason for the slow progress in the standardization of sparse linear algebra and library development is that sparse matrices take different forms and have different properties depending on the application area. The situation is worsened on the deep memory hierarchies of modern architectures due to low arithmetic intensity and memory-bound computations; minimizing data movement and fast access to the matrix are critical. Current technology is driven by deep memory architectures, in which capacity increases at the expense of higher latency and lower bandwidth as we move farther from the processors. The key to achieving high performance in sparse matrix computations on deep memory hierarchies is to minimize data movement across the layers of memory and to overlap data movement with computation.

My thesis work contributes towards addressing these algorithmic challenges and developing a computational infrastructure to achieve high performance in scientific applications on both shared memory and distributed memory architectures. For this purpose, I started by optimizing a blocked eigensolver and optimized specific computational kernels that use a new storage format. Using this optimization as a building block, we introduce a shared memory task-parallel framework focused on optimizing entire solvers rather than a specific kernel. Before extending this shared memory implementation to distributed memory architectures, I simulated the communication patterns and overheads of a large-scale distributed memory application and then introduced communication tasks in the framework to overlap communication and computation. Additionally, I explored a custom scheduler for the tasks using a graph partitioner. To become acquainted with high performance computing and parallel libraries, I began my PhD journey by optimizing a DFT code named Sky3D using dense matrix libraries; although there may not be a single solution to this problem, I sought an optimized one. While the large distributed memory application MFDn is the driving project of this thesis, the framework we developed is not confined to MFDn and can be used for other scientific applications as well. The output of this thesis is the task-parallel HPC infrastructure that we envisioned for both shared and distributed memory architectures.
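A small illustration of the kind of sparse iterative eigensolver discussed above, using SciPy's ARPACK wrapper on a large tridiagonal Laplacian. This is a generic example, unrelated to the blocked solver or the storage format developed in the thesis:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 100_000
# Sparse 1-D Laplacian: trivial to store as three diagonals, but infeasible to
# handle as a dense n x n matrix.
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

# Shift-invert around 0 to extract the few smallest eigenvalues iteratively.
vals, vecs = eigsh(A, k=4, sigma=0.0, which="LM")
print("smallest eigenvalues:", np.sort(vals))
```

Only a handful of matrix factorizations and sparse matrix-vector products are needed, which is why memory traffic, rather than floating-point throughput, dominates the cost.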
- Title
- Non-coding RNA identification in large-scale genomic data
- Creator
- Yuan, Cheng
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
Noncoding RNAs (ncRNAs), which function directly as RNAs without being translated into proteins, play diverse and important biological roles. ncRNAs function not only through their primary structures but also through secondary structures, which are defined by interactions between Watson-Crick and wobble base pairs. Common types of ncRNA include microRNA, rRNA, snoRNA, and tRNA, and functions vary among the different types. Recent studies suggest the existence of a large number of ncRNA genes. Identification of novel and known ncRNAs is becoming increasingly important in order to understand their functionalities and the underlying communities.

Next-generation sequencing (NGS) technology sheds light on more comprehensive and sensitive ncRNA annotation. Lowly transcribed ncRNAs, or ncRNAs from rare species with low abundance, may be identified via deep sequencing. However, several challenges arise in ncRNA identification in large-scale genomic data. First, the massive volume of the datasets can lead to very long computation times, making existing algorithms infeasible. Second, NGS has a relatively high error rate, which further complicates the problem. Third, high sequence similarity among related ncRNAs can make them difficult to identify, resulting in incorrect output. Fourth, while secondary structures should be adopted for accurate ncRNA identification, they usually incur high computational complexity. In particular, some ncRNAs contain pseudoknot structures, which cannot be effectively modeled by state-of-the-art approaches; as a result, ncRNAs containing pseudoknots are hard to annotate.

In my PhD work, I aimed to tackle the above challenges in ncRNA identification. First, I designed a progressive search pipeline to identify ncRNAs containing pseudoknot structures. The algorithms are more efficient than state-of-the-art approaches and can be used for large-scale data. Second, I designed an ncRNA classification tool for short reads in NGS data lacking quality reference genomes. Its initial homology search phase significantly reduces the size of the original input, making the tool feasible for large-scale data. Last, I focused on identifying 16S ribosomal RNAs from NGS data. 16S ribosomal RNAs are a very important type of ncRNA that can be used for phylogenetic studies. A set of graph-based assembly algorithms was applied to form longer or full-length 16S rRNA contigs. I utilized paired-end information in the NGS data so that low-abundance 16S genes can also be identified. To reduce the complexity of the problem and make the tool practical for large-scale data, I designed a set of error correction and graph reduction techniques for graph simplification.
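As a generic illustration of the graph-based assembly and graph reduction steps mentioned above (this is not the dissertation's pipeline; the k-mer size, the tip-clipping rule, and the toy reads are all invented for the example), the Python sketch below builds a de Bruijn graph from short reads and clips short dead-end branches, a common simplification applied to assembly graphs before contigs are extracted.

    # Illustrative sketch only: de Bruijn graph construction and naive tip clipping.
    from collections import defaultdict

    def build_de_bruijn(reads, k):
        """Map each (k-1)-mer node to the set of (k-1)-mer nodes it points to."""
        graph = defaultdict(set)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].add(kmer[1:])
        return graph

    def remove_tips(graph, max_tip_len=3):
        """Clip short dead-end branches (often caused by sequencing errors):
        repeatedly delete nodes with no outgoing edges, at most max_tip_len
        times, so only short dead ends disappear."""
        for _ in range(max_tip_len):
            dead_ends = {node for node in graph if not graph[node]}
            targets = {s for succs in graph.values() for s in succs}
            dead_ends |= {t for t in targets if t not in graph}
            if not dead_ends:
                break
            for node in list(graph):
                graph[node] -= dead_ends
            for node in dead_ends:
                graph.pop(node, None)
        return graph

    reads = ["ACGTACGTGACG", "CGTACGTGACGT", "ACGTACGAA"]  # toy reads
    g = remove_tips(build_de_bruijn(reads, k=5))

Real assemblers clip a tip only after checking that a longer alternative path exists; the bounded loop here is a deliberate simplification of that idea.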
- Title
- Network analysis with negative links
- Creator
- Derr, Tyler Scott
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
-
As we rapidly continue into the information age, the rate at which data is produced has created an unprecedented demand for novel methods to effectively extract insightful patterns. We can then seek to understand the past, make predictions about the future, and ultimately take actionable steps towards improving our society. Thus, because much of today's big data can be represented as graphs, emphasis is being placed on harnessing the natural structure of data through network analysis. Traditionally, network analysis has focused on networks having only positive links, known as unsigned networks. However, in many real-world systems the relations between nodes can be both positive and negative, forming signed networks. For example, in online social media, users not only have positive links such as friends, followers, and those they trust, but can also establish negative links toward those they distrust, toward their foes, or by blocking and unfriending users.

Thus, although signed networks are ubiquitous due to their ability to represent negative links in addition to positive links, they have been significantly underexplored. In addition, the rise in popularity of today's social media and increased polarization online have led to both increased attention and demand for advanced methods that perform the typical network analysis tasks while also taking negative links into consideration. More specifically, there is a need for methods that can measure, model, mine, and apply signed networks in a way that harnesses both positive and negative relations. However, this raises novel challenges, as the properties and principles of negative links are not necessarily the same as those of positive links, and furthermore the social theories that have been used in unsigned networks might not apply once negative links are included.

The chief objective of this dissertation is first to analyze the distinct properties of negative links as compared to positive links, and then to improve network analysis with negative links by researching the utility of established social theories and how to harness them in a holistic view of networks containing both positive and negative links. We discover that simply extending unsigned network analysis is typically not sufficient, and that although the existence of negative links introduces numerous challenges, it also provides unprecedented opportunities for advancing the frontier of the network analysis domain. In particular, we develop advanced methods in signed networks for measuring node relevance and centrality (i.e., signed network measuring); present the first generative signed network model and extend and analyze balance theory in signed bipartite networks (i.e., signed network modeling); construct the first signed graph convolutional network, which learns node representations that achieve state-of-the-art prediction performance, and introduce the novel idea of transformation-based network embedding (i.e., signed network mining); and apply signed networks by creating a framework that can infer both link and interaction polarity levels in online social media and by constructing an advanced, comprehensive congressional vote prediction framework built around harnessing signed networks.
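Balance theory, mentioned above, is easy to make concrete. The short Python sketch below illustrates the general theory rather than any of the dissertation's models (the edge attribute name "sign" and the toy edges are assumptions of the example): a triangle in a signed network is balanced when the product of its three edge signs is positive, capturing intuitions such as "an enemy of my enemy is my friend".

    # Illustrative sketch only: fraction of balanced triangles in a toy signed network.
    from itertools import combinations
    import networkx as nx

    G = nx.Graph()
    # sign=+1 for trust/friendship, sign=-1 for distrust/foe relations.
    G.add_edge("a", "b", sign=+1)
    G.add_edge("b", "c", sign=+1)
    G.add_edge("a", "c", sign=+1)   # balanced triangle (+, +, +)
    G.add_edge("c", "d", sign=-1)
    G.add_edge("b", "d", sign=+1)   # unbalanced triangle (+, +, -) on b, c, d

    def balanced_fraction(G):
        """Fraction of closed triangles whose edge signs multiply to +1."""
        balanced = total = 0
        for u, v, w in combinations(G.nodes, 3):
            if G.has_edge(u, v) and G.has_edge(v, w) and G.has_edge(u, w):
                product = G[u][v]["sign"] * G[v][w]["sign"] * G[u][w]["sign"]
                total += 1
                balanced += (product > 0)
        return balanced / total if total else float("nan")

    print(balanced_fraction(G))   # 0.5 for this toy graph: one balanced, one unbalanced triangle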
- Title
- Multiple kernel and multi-label learning for image categorization
- Creator
- Bucak, Serhat Selçuk
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
"One crucial step towards the goal of converting large image collections to useful information sources is image categorization. The goal of image categorization is to find the relevant labels for a given an image from a closed set of labels. Despite the huge interest and significant contributions by the research community, there remains much room for improvement in the image categorization task. In this dissertation, we develop efficient multiple kernel learning and multi-label learning...
Show more"One crucial step towards the goal of converting large image collections to useful information sources is image categorization. The goal of image categorization is to find the relevant labels for a given an image from a closed set of labels. Despite the huge interest and significant contributions by the research community, there remains much room for improvement in the image categorization task. In this dissertation, we develop efficient multiple kernel learning and multi-label learning algorithms with high prediction performance for image categorization... " -- Abstract.