Search results (61 - 79 of 79)
- Title
- SIGN LANGUAGE RECOGNIZER FRAMEWORK BASED ON DEEP LEARNING ALGORITHMS
- Creator
- Akandeh, Atra
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
According to the World Health Organization (WHO, 2017), 5% of the world's population have hearing loss. Most people with hearing disabilities communicate via sign language, which hearing people find extremely difficult to understand. To facilitate communication for deaf and hard-of-hearing people, developing an efficient communication system is a necessity. There are many challenges associated with the Sign Language Recognition (SLR) task, namely lighting conditions, complex backgrounds, signer body postures, camera position, occlusion, complexity and large variation in hand posture, lack of word alignment, coarticulation, etc. Sign Language Recognition has been an active domain of research since the early 90s. However, due to computational resource and sensing technology constraints, limited advancement has been achieved over the years. Existing sign language translation systems can mostly translate only a single sign at a time, which makes them less effective in daily-life interaction. This work develops a novel sign language recognition framework using deep neural networks, which directly maps videos of sign language sentences to sequences of gloss labels by emphasizing critical characteristics of the signs and injecting domain-specific expert knowledge into the system. The proposed model also allows for combining data from varied sources, thereby combating the limited data resources of the SLR field.
- Title
- SOCIAL MECHANISMS OF LEADERSHIP EMERGENCE : A COMPUTATIONAL EVALUATION OF LEADERSHIP NETWORK STRUCTURES
- Creator
- Griffin, Daniel Jacob
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
-
Leadership emergence is a topic of immense interest in the organizational sciences. One promising recent development in the leadership literature focuses on the development and impact of informal leadership structures in a shared leadership paradigm. Despite its theoretical importance, the network perspective of leadership emergence is still underdeveloped, largely due to the complexity of studying and theorizing about network-level phenomena. Using computational modeling techniques, I evaluate the network-level implications of two existing theories that broadly represent social theories of leadership emergence. I derive formal representations of both foundational theories and expand on them to develop a synthesis theory describing how the two processes work in parallel. Results from simulated experiments indicate that group homogeneity is associated with vastly different leadership network structures depending on which theoretical process mechanisms are in play. This thesis contributes significantly to the literature by 1) advancing a network-based approach to leadership emergence research, 2) testing the implications of existing theory, 3) developing new theory, and 4) providing a strong foundation and toolkit for future leadership network emergence research.
- Title
- Semi-Adversarial Networks for Imparting Demographic Privacy to Face Images
- Creator
- Mirjalili, Vahid
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
-
Face recognition systems are widely used in applications ranging from user authentication on hand-held devices to identifying people of interest in surveillance videos. In several such applications, face images are stored in a central database. In such cases, it is necessary to ensure that the stored face images are used for the stated purpose and not for any other purpose. For example, advanced machine learning methods can be used to automatically extract age, gender, race, and so on from stored face images. These cues are often referred to as demographic attributes. When such attributes are extracted without the consent of individuals, it can lead to a potential violation of privacy. Indeed, the European Union's General Data Protection Regulation (GDPR) requires the primary purpose of data collection to be declared to individuals prior to data collection, and strictly prohibits the use of the data for any purpose beyond what was stated. In this thesis, we consider this type of regulation and develop methods for enhancing the privacy accorded to face images with respect to the automatic extraction of demographic attributes. In particular, we design algorithms that modify input face images such that certain specified demographic attributes cannot be reliably extracted from them, while the biometric utility of the images is retained, i.e., the modified face images can still be used for matching purposes. The primary objective of this research is not to fool human observers, but rather to prevent machine learning methods from automatically extracting such information.

The contributions of this thesis are as follows. First, we design a convolutional autoencoder known as a semi-adversarial network, or SAN, that perturbs input face images such that they are adversarial with respect to an attribute classifier (e.g., a gender classifier) while still retaining their utility with respect to a face matcher. Second, we develop techniques to ensure that the adversarial outputs produced by the SAN generalize across multiple attribute classifiers, including those that may not have been used during training. Third, we extend the SAN architecture and develop a neural network known as PrivacyNet that can be used for imparting multi-attribute privacy to face images. Fourth, we conduct extensive experimental analysis using several face image datasets to evaluate the performance of the proposed methods and to visualize the perturbations they induce. Results suggest the benefits of using semi-adversarial networks to impart privacy to face images while retaining the biometric utility of the ensuing face images.
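The SAN idea, making an image adversarial for an attribute classifier while leaving a face matcher's view of it intact, can be illustrated with a deliberately tiny numerical sketch. Everything here is a hypothetical stand-in (a linear "gender" classifier `w_attr`, a linear "matcher" embedding `W_id`, a 32-dimensional "image"); the actual thesis learns the perturbation with a convolutional autoencoder rather than the exact null-space projection used below.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32                                        # toy "image" dimensionality (hypothetical)
x = rng.normal(size=d)                        # original face representation
w_attr = 0.3 * rng.normal(size=d)             # stand-in linear attribute (gender) classifier
W_id = rng.normal(size=(16, d)) / np.sqrt(d)  # stand-in linear "face matcher" embedding

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
y = sigmoid(w_attr @ x) > 0.5                 # classifier's current decision on x

# Perturb only along directions the matcher ignores: the null space of W_id.
P_null = np.eye(d) - np.linalg.pinv(W_id) @ W_id
x_adv = x.copy()
for _ in range(10000):
    p = sigmoid(w_attr @ x_adv)
    if (p > 0.5) != y:                        # attribute decision has flipped: stop
        break
    grad_attr = (p - float(y)) * w_attr       # gradient of BCE loss w.r.t. the input
    x_adv += 0.5 * P_null @ grad_attr         # ascend attribute loss, invisibly to matcher

flipped = (sigmoid(w_attr @ x_adv) > 0.5) != y
drift = np.linalg.norm(W_id @ (x_adv - x))    # change seen by the matcher: ~0
```

Because the perturbation lives in the matcher's null space, the embedding distance `drift` stays at numerical zero even though the attribute decision flips; a trained SAN achieves a softer version of the same trade-off on real images.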
- Title
- Semi-supervised learning with side information : graph-based approaches
- Creator
- Liu, Yi
- Date
- 2007
- Collection
- Electronic Theses & Dissertations
- Title
- Sequence learning with side information : modeling and applications
- Creator
- Wang, Zhiwei
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
-
Sequential data is ubiquitous, and modeling sequential data has been one of the longest-standing problems in computer science. The goal of sequence modeling is to represent a sequence with a low-dimensional dense vector that incorporates as much information as possible. A fundamental type of information contained in sequences is the sequential dependency, and a large body of research has been devoted to designing effective ways to capture it. Recently, sequence learning models such as recurrent neural networks (RNNs), temporal convolutional networks, and Transformers have gained tremendous popularity in modeling sequential data. Equipped with effective structures such as gating mechanisms, large receptive fields, and attention mechanisms, these models have achieved great success in applications across a wide range of fields. However, besides the sequential dependency, sequences also exhibit side information that remains under-explored. Thus, in this thesis, we study the problem of sequence learning with side information. Specifically, we present our efforts toward building sequence learning models that effectively and efficiently capture the side information commonly seen in sequential data. In addition, we show that side information can play an important role in sequence learning tasks, as it provides rich information that is complementary to the sequential dependency. More importantly, we apply our proposed models to various real-world applications and achieve promising results.
- Title
- Solving Computationally Expensive Problems Using Surrogate-Assisted Optimization : Methods and Applications
- Creator
- Blank, Julian
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Optimization is omnipresent in many research areas and has become a critical component across industries. However, while researchers often focus on a theoretical analysis or convergence proof of an optimization algorithm, practitioners face various other challenges in real-world applications. This thesis focuses on one of the biggest challenges in applying optimization in practice: computational expense, often caused by the necessity of calling a third-party software package. To address the time-consuming evaluation, we propose a generalizable probabilistic surrogate-assisted framework that dynamically incorporates predictions from approximation models. Besides the framework's capability of handling multiple objectives and constraints simultaneously, its novelty is its applicability to all kinds of metaheuristics. Moreover, optimization often involves multiple disciplines, resulting in different types of software packages being used for performance assessment. The resulting optimization problem therefore typically consists of multiple independently evaluable objectives and constraints with varying computational expense. Besides providing a taxonomy describing different ways of making independent evaluation calls, this thesis also proposes a methodology for handling inexpensive constraints alongside expensive objective functions, and a more generic concept for any type of heterogeneously expensive optimization problem. Furthermore, two case studies of real-world optimization problems from the automobile industry are discussed, a blueprint for solving optimization problems in practice is provided, and a widely used optimization framework focusing on multi-objective optimization (founded and maintained by the author of this thesis) is presented. Altogether, this thesis shall pave the way toward solving (computationally expensive) real-world optimization problems more efficiently and bridge the gap between theory and practice.
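The surrogate-assisted pattern described above can be sketched in a few lines: fit a cheap approximation model on the points already evaluated, and let it pre-screen candidates so that only the most promising ones reach the expensive evaluator. This is a minimal single-objective illustration with a quadratic surrogate and a made-up "expensive" function, not the thesis's probabilistic multi-objective framework:

```python
import numpy as np

rng = np.random.default_rng(1)

calls = 0
def expensive(x):
    """Stand-in for a costly third-party simulation call (hypothetical)."""
    global calls
    calls += 1
    return (x - 0.6) ** 2      # true optimum at x = 0.6

# Initial design: a handful of real evaluations
X = list(rng.uniform(0, 1, size=5))
Y = [expensive(x) for x in X]

for gen in range(5):
    coeffs = np.polyfit(X, Y, deg=2)       # cheap quadratic surrogate of f
    cands = rng.uniform(0, 1, size=50)     # offspring a metaheuristic might propose
    pred = np.polyval(coeffs, cands)       # surrogate predictions, no real calls
    for x in cands[np.argsort(pred)[:2]]:  # pre-screen: evaluate only the best 2
        X.append(x)
        Y.append(expensive(x))

best = X[int(np.argmin(Y))]
```

Only 15 expensive calls are spent (5 initial + 2 per generation), yet the search closes in on the optimum because the surrogate filters out unpromising candidates before any real evaluation.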
- Title
- Some contributions to semi-supervised learning
- Creator
- Mallapragada, Paven Kumar
- Date
- 2010
- Collection
- Electronic Theses & Dissertations
- Title
- Sparse Large-Scale Multi-Objective Optimization for Climate-Smart Agricultural Innovation
- Creator
- Kropp, Ian Meyer
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
The challenge of our generation is to produce enough food to feed the present and future global population. This is no simple task, as the world population is expanding and becoming more affluent, and conventional agriculture often degrades the environment. Without a healthy and functional environment, agriculture as we know it will fail. Therefore, we must balance our broad goals of sustainability and food production as a single system. Multi-objective optimization, a family of algorithms that search for solutions to complex problems with conflicting objectives, is an effective tool for balancing these two goals. In this dissertation, we apply multi-objective optimization to find optimal management practices for irrigating and fertilizing corn. There are two areas for improvement in the multi-objective optimization of corn management: existing methods run burdensomely slowly and do not account for the uncertainty of weather. Improving run time and optimizing in the face of weather uncertainty are the two goals of this dissertation. We address these goals with four novel methodologies that advance the fields of biosystems and agricultural engineering as well as computer science. In the first study, we address the first goal by drastically improving the performance of evolutionary multi-objective algorithms on sparse large-scale optimization problems. Sparse optimization problems, such as irrigation and nutrient management, are problems whose optimal solutions are mostly zero. Our novel algorithm, called sparse population sampling (SPS), integrates with and improves all population-based algorithms over almost all test scenarios. SPS, when used with NSGA-II, outperformed the existing state-of-the-art algorithms on the most complex sparse large-scale optimization problems (i.e., those with 2,500 or more decision variables).

The second study addressed the second goal by optimizing common management practices at a study site in Cass County, Michigan, for all climate scenarios. This methodology, which relied on SPS from the first study, implements the concept of innovization in agriculture. In our innovization framework, 30 years of management practices were optimized against observed weather data and then compared to common practices in Cass County, Michigan. The differences between the optimal solutions and common practices were distilled into simple recommendations for farmers to apply during future growing seasons. Our recommendations drastically increased yields under 420 validation scenarios with no impact on nitrogen leaching. The third study further improves the performance of sparse large-scale optimization. Where SPS was a single component of a population-based algorithm, our proposed method, S-NSGA-II, is a complete, novel evolutionary algorithm for sparse large-scale optimization problems. Our algorithm outperforms or performs as well as other contemporary sparse large-scale optimization algorithms, especially on problems with more than 800 decision variables. This enhanced convergence will further improve multi-objective optimization in agriculture. Our final study, which addresses the second goal, takes a different approach to optimizing agricultural systems in the face of climate uncertainty: we use stochastic weather to quantify risk in optimization. In this way, farmers can choose among optimal management decisions with a full understanding of the risks involved in every management decision.
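The intuition behind sparse-aware sampling can be shown with a toy experiment: on a problem whose optimum is mostly zero, a population initialized with mostly-zero genomes starts far closer to the optimum than a conventionally initialized one. This is only an illustration of the principle, with made-up dimensions and sparsity levels, not the actual SPS operator from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_var, pop = 200, 50

# A sparse test problem: the optimum has only 5 of 200 variables nonzero
target = np.zeros(n_var)
target[rng.choice(n_var, 5, replace=False)] = rng.uniform(1, 2, size=5)
fitness = lambda X: ((X - target) ** 2).sum(axis=1)   # lower is better

# Conventional initialization: every variable drawn uniformly
dense = rng.uniform(0, 2, size=(pop, n_var))

# Sparse sampling: each individual keeps only ~5% of its variables nonzero
mask = rng.random((pop, n_var)) < 0.05
sparse = np.where(mask, rng.uniform(0, 2, size=(pop, n_var)), 0.0)

best_dense = fitness(dense).min()
best_sparse = fitness(sparse).min()
```

On this toy landscape the sparse population's best member is already an order of magnitude closer to the optimum, which is the head start a population-based algorithm inherits when its sampling respects the problem's sparsity.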
- Title
- TEACHERS IN SOCIAL MEDIA : A DATA SCIENCE PERSPECTIVE
- Creator
- Karimi, Hamid
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Social media has become an integral part of human life in the 21st century. The number of social media users was estimated at around 3.6 billion individuals in 2020. Social media platforms (e.g., Facebook) have facilitated interpersonal communication, the diffusion of information, and the creation of groups and communities, to name a few. As far as education systems are concerned, online social media has transformed and connected traditional social networks within the schoolhouse to a broader and expanded world outside. In this expanded virtual space, teachers engage in various activities within their communities, e.g., exchanging instructional resources, seeking new teaching methods, and engaging in online discussions. Therefore, given the importance of teachers in social media and its tremendous impact on PK-12 education, in this dissertation we investigate teachers in social media from a data science perspective. Our investigation is essentially an interdisciplinary endeavor bridging modern data science and education. In particular, we make three contributions, briefly discussed in the following. Current studies of teachers in social media rely on a small number of surveyed teachers, while thousands of other teachers are on social media. This hinders large-scale data-driven studies of teachers in social media. To overcome this challenge and facilitate such studies, we propose a novel method that automatically identifies teachers on Pinterest, an image-based social media platform popular among teachers. In this framework, we formulate teacher identification as a positive-unlabelled (PU) learning problem where the positive samples are surveyed teachers and the unlabelled samples are their online friends. Using our framework, we build the largest dataset of teachers on Pinterest.

With this dataset at our disposal, we perform an exploratory analysis of teachers on Pinterest while considering their genders. Our analysis incorporates two crucial aspects of teachers in social media. First, we investigate various online activities of male and female teachers, e.g., the topics and sources of their curated resources and the professional language employed to describe those resources. Second, we investigate male and female teachers in the context of the social network (the graph) they belong to, e.g., structural centrality and gender homophily. Our analysis and findings in this part of the dissertation can serve as a valuable reference for the many entities concerned with teachers' gender, e.g., principals and state and federal governments. Finally, in the third part of the dissertation, we shed light on the diffusion of teacher-curated resources on Pinterest. First, we introduce three measures to characterize the diffusion process. Then, we investigate these measures while considering two crucial characteristics of a resource, namely its topic and its source. Ultimately, we investigate how teacher attributes (e.g., the number of friends) affect the diffusion of their resources. This diffusion analysis is the first of its kind and offers a deeper understanding of the complex mechanisms driving the diffusion of resources curated by teachers on Pinterest.
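The PU-learning formulation mentioned above (positives = surveyed teachers, unlabelled = their online friends) can be sketched with the classic Elkan-Noto adjustment: train an ordinary classifier to predict the "labeled" flag, then divide its scores by the labeling frequency estimated on known positives. The data and the 2-D logistic model below are synthetic stand-ins; the dissertation's actual identification method on Pinterest is more involved.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "teachers vs. non-teachers": two well-separated 2-D clusters
n = 2000
y = rng.random(n) < 0.4                        # true (hidden) teacher labels
X = rng.normal(size=(n, 2)) + np.where(y, 2.0, -2.0)[:, None]

# Only a random 30% of the true teachers are "surveyed" (labeled positives)
s = y & (rng.random(n) < 0.3)

# Non-traditional classifier g(x) ~ P(s=1 | x): logistic regression by GD
Xb = np.hstack([X, np.ones((n, 1))])
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - s) / n              # gradient step on BCE vs. the s flag

g = 1.0 / (1.0 + np.exp(-Xb @ w))
c = g[s].mean()                # Elkan-Noto constant: E[g(x) | y=1] on labeled positives
p_y = np.clip(g / c, 0.0, 1.0)  # adjusted estimate of P(y=1 | x)

acc = ((p_y > 0.5) == y).mean()
```

Even though the classifier only ever saw the "surveyed" flag, dividing by `c` recovers the hidden teacher label for most of the unlabelled friends, which is exactly the leverage PU learning provides when full labels are unobtainable.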
- Title
- The Evolution of Fundamental Neural Circuits for Cognition in Silico
- Creator
- Tehrani-Saleh, Ali
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Despite decades of research on intelligence and the fundamental components of cognition, we still know very little about the structure and functionality of nervous systems. Questions in cognition and intelligent behavior are addressed by scientists in the fields of behavioral biology, neuroscience, psychology, and computer science. Yet it is difficult to reverse engineer the sophisticated intelligent behaviors observed in animals, and even more difficult to understand their underlying mechanisms. In this dissertation, I use a recently developed neuroevolution platform, called Markov brain networks, in which Darwinian selection is used to evolve both the structure and the functionality of digital brains. I use this platform to study some of the most fundamental cognitive neural circuits: 1) visual motion detection, 2) collision avoidance based on visual motion cues, 3) sound localization, and 4) time perception. In particular, I investigate both the selective pressures and the environmental conditions involved in the evolution of these cognitive components, as well as the circuitry and computations behind them. This dissertation lays the groundwork for an evolutionary, agent-based method to study the neural circuits of cognition in silico.
- Title
- The Evolutionary Origins of Cognition : Understanding the early evolution of biological control systems and general intelligence
- Creator
- Carvalho Pontes, Anselmo
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
In the last century, we have made great strides towards understanding natural cognition and recreating it artificially. However, most cognitive research is still guided by an inadequate theoretical framework that equates cognition to a computer system executing a data processing task. Cognition, whether natural or artificial, is not a data processing system; it is a control system. At cognition's core is a value system that allows it to evaluate current conditions and decide among two or more courses of action. Memory, learning, planning, and deliberation, rather than being essential cognitive abilities, are features that evolved over time to support the primary task of deciding “what to do next”. I used digital evolution to recreate the early stages in the evolution of natural cognition, including the ability to learn. Interestingly, I found that cognition evolves in a predictable manner, with more complex abilities evolving in stages, building upon previous, simpler ones. I initially investigated the evolution of dynamic foraging behaviors among the first animals known to have had a central nervous system, Ediacaran microbial mat miners. I then followed this up by evolving more complex forms of learning. I soon encountered practical limitations of the current methods, including the exponential demand for computational resources and genetic representations that were not conducive to further scaling. This type of complexity barrier has been a recurrent issue in digital evolution. Nature, however, is not limited in the same ways; through evolution, it has created a language to express robust, modular, and flexible control systems of arbitrary complexity and apparently open-ended evolvability. The essential features of this language can be captured in a digital evolution platform. As an early demonstration of this, I evolved biologically plausible regulatory systems for virtual cyanobacteria. These systems regulate the cells' growth, photosynthesis, and replication given the daily light cycle, the cell's energy reserves, and levels of stress. Although simple, this experimental system displays dynamics and decision-making mechanisms akin to biology, with promising potential for the open-ended evolution of cognition towards general intelligence.
- Title
- Towards Accurate Ranging and Versatile Authentication for Smart Mobile Devices
- Creator
- Li, Lingkun
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
The Internet of Things (IoT) has developed rapidly over the past years. Smart devices such as smartphones, smartwatches, and smart assistants, which are equipped with smart chips as well as sensors, provide users with many easy-to-use functions and lead them to a more convenient life. In this dissertation, we carefully study the birefringence of transparent tape, the nonlinear effects of microphones, and the phase characteristics of reflected ultrasound, and we make use of these effects to design three systems, RainbowLight, Patronus, and BreathPass, which provide users with accurate localization, privacy protection, and authentication, respectively. RainbowLight leverages the observation-direction-varied spectrum generated by polarized light passing through a birefringent material, i.e., transparent tape, to provide a localization service. We characterize the relationship between observation direction, light interference, and the resulting spectrum, and use it to calculate the direction to a chip from a photo containing that chip. With multiple chips, RainbowLight uses a direction-intersection-based method to derive the location. In this dissertation, we build the theoretical basis for using polarized light and the birefringence phenomenon to perform localization. Based on the theoretical model, we design and implement RainbowLight on a mobile device and evaluate the system's performance. The evaluation results show that RainbowLight achieves a median error of 1.68 cm on the X-axis, 2 cm on the Y-axis, 5.74 cm on the Z-axis, and 7.04 cm overall. It is the first system that can perform visible light positioning using only the reflected light in a space. Patronus prevents unauthorized speech recording by leveraging the nonlinear effects of commercial off-the-shelf microphones.

The inaudible ultrasonic scramble interferes with recording by unauthorized devices and can be canceled on authorized devices through an adaptive filter. In this dissertation, we carefully study the nonlinear effects of ultrasound on commercial microphones. Based on this study, we propose an optimized configuration for generating the scramble. It provides privacy protection against unauthorized recordings without disturbing normal conversations. We designed and implemented a system comprising hardware and software components. Experimental results show that only 19.7% of the words protected by Patronus' scramble can be recognized by unauthorized devices. Furthermore, authorized recordings have a 1.6x higher perceptual evaluation of speech quality (PESQ) score and, on average, 50% lower speech recognition error rates than unauthorized recordings. BreathPass uses speakers to emit ultrasound signals, which are reflected off the chest wall and abdomen back to a microphone that records them. The system then extracts fingerprints from the breathing pattern and uses these fingerprints to perform authentication. In this dissertation, we characterize the challenges of authenticating with breathing patterns. After addressing these challenges, we design such a system and implement a proof-of-concept application on the Android platform. We also conduct comprehensive experiments to evaluate performance under different scenarios. According to the performance evaluation results, BreathPass achieves an overall accuracy of 83%, a true positive rate of 73%, and a false positive rate of 5%. In general, this dissertation provides enhanced ranging and versatile authentication systems for the Internet of Things.
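The microphone nonlinearity that Patronus exploits is easy to demonstrate numerically: a quadratic term in the microphone's response to two inaudible ultrasonic tones creates an intermodulation product at their difference frequency, which lands squarely in the audible band. The coefficient 0.1 and the tone frequencies below are illustrative choices, not measured device parameters:

```python
import numpy as np

fs = 192_000                           # sample rate high enough to represent ultrasound
t = np.arange(fs) / fs                 # one second of signal
f1, f2 = 38_000, 40_000                # two inaudible ultrasonic tones
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Toy microphone model with a weak quadratic term (the nonlinear effect)
y = x + 0.1 * x ** 2

spec = np.abs(np.fft.rfft(y)) / len(y)          # magnitude spectrum
freqs = np.fft.rfftfreq(len(y), 1 / fs)

def power_at(f):
    """Spectral magnitude at the bin nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]
```

The product `x**2` contains `2*sin(2πf1 t)*sin(2πf2 t) = cos(2π(f2-f1)t) - cos(2π(f1+f2)t)`, so a strong component appears at `f2 - f1 = 2 kHz`, audible to microphones but produced by purely ultrasonic input; this is the scramble that interferes with unauthorized recordings.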
- Title
- Towards Robust and Reliable Communication for Millimeter Wave Networks
- Creator
- Zarifneshat, Masoud
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
The future generations of wireless networks benefit significantly from millimeter wave technology (mmW) with frequencies ranging from about 30 GHz to 300 GHz. Specifically, the fifth generation of wireless networks has already implemented the mmW technology and the capacity requirements defined in 6G will also benefit from the mmW spectrum. Despite the attractions of the mmW technology, the mmW spectrum has some inherent propagation properties that introduce challenges. The first is that free...
Show moreThe future generations of wireless networks benefit significantly from millimeter wave technology (mmW) with frequencies ranging from about 30 GHz to 300 GHz. Specifically, the fifth generation of wireless networks has already implemented the mmW technology and the capacity requirements defined in 6G will also benefit from the mmW spectrum. Despite the attractions of the mmW technology, the mmW spectrum has some inherent propagation properties that introduce challenges. The first is that free space pathloss in mmW is more severe than that in the sub 6 GHz band. To make the mmW signal travel farther, communication systems need to use phased array antennas to concentrate the signal power to a limited direction in space at each given time. Directional communication can incur high overhead on the system because it needs to probe the space for finding signal paths. To have efficient communication in the mmW spectrum, the transmitter and the receiver should align their beams on strong signal paths which is a high overhead task. The second is a low diffraction of the mmW spectrum. The low diffraction causes almost any object including the human body to easily block the mmW signal degrading the mmW link quality. Avoiding and recovering from the blockage in the mmW communications, especially in dynamic environments, is particularly challenging because of the fast changes of the mmW channel. Due to the unique characteristics of the mmW propagation, the traditional user association methods perform poorly in the mmW spectrum. Therefore, we propose user association methods that consider the inherent propagation characteristics of the mmW signal. We first propose a method that collects the history of blockage incidents throughout the network and exploits the historical blockage incidents to associate user equipment to the base station with lower blockage possibility. 
The simulation results show that our proposed algorithm improves link quality and reduces the blockage rate in the network. User association based on only one objective may deteriorate other objectives, so we formulate a biobjective optimization problem that considers both load balance and blockage probability in the network. We conduct Lagrangian dual analysis to reduce time complexity. The results show that our solution to the biobjective optimization problem has a better outcome than optimizing each objective alone. After investigating the user association problem, we further look into maintaining a robust link between a transmitter and a receiver. The directional propagation of the mmW signal creates the opportunity to exploit multipath for a robust link. The main causes of link quality degradation are blockage and link movement. We devise a learning-based prediction framework that uses diffraction values to classify link blockage and link movement efficiently and quickly so that appropriate mitigating actions can be taken. Simulations show that the prediction framework can predict blockage with close to 90% accuracy, eliminating the need for time-consuming methods to discriminate between link movement and link blockage. After detecting the cause of the link degradation, the system needs to perform beam alignment on the updated mmW signal paths, which is itself a high-overhead task. We propose using signaling in another frequency band to discover the paths surrounding a receiver operating in the mmW spectrum, so that the receiver does not have to do an expensive beam scan in the mmW band. Our experiments with off-the-shelf devices show that paths observed in a non-mmW frequency band can be used to align beams at mmW frequencies. In this dissertation, we provide solutions to these fundamental problems in mmW communication.
We propose a user association method designed for mmW networks that accounts for the challenges of the mmW signal. We also provide a closed-form solution to a biobjective optimization problem that optimizes both blockage and load balance in the network. Moreover, we show that out-of-band signals can be used efficiently to exploit the multipath created in mmW communication. Future research directions include applying the methods proposed in this dissertation to classic wireless networking problems that persist in the mmW spectrum.
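At a high level, blockage-aware user association of the kind described above can be sketched as a greedy assignment that scores each base station by a weighted blend of its estimated blockage probability (learned from historical incidents) and its current load. This is only an illustrative scalarization of the biobjective problem; the names `associate_users`, `p_block`, and `load_cap` are hypothetical, and the dissertation's actual formulation and Lagrangian dual solution differ.

```python
import numpy as np

def associate_users(p_block, load_cap, w=0.5):
    """Greedily associate each user with a base station, trading off
    blockage probability against load balance.

    p_block  : (n_users, n_bs) matrix of estimated blockage probabilities
    load_cap : (n_bs,) per-base-station capacity (max users)
    w        : weight on blockage risk vs. normalized load
    """
    n_users, n_bs = p_block.shape
    load = np.zeros(n_bs)
    assignment = np.full(n_users, -1)
    for u in range(n_users):
        # Cost blends blockage risk with the current normalized load.
        cost = w * p_block[u] + (1 - w) * load / load_cap
        # Full base stations are not eligible.
        cost[load >= load_cap] = np.inf
        b = int(np.argmin(cost))
        assignment[u] = b
        load[b] += 1
    return assignment
```

Users prefer low-blockage base stations until their growing load makes a riskier but emptier station cheaper, which is the load-balance/blockage trade-off in miniature.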
- Title
- Towards Robust and Secure Face Recognition : Defense Against Physical and Digital Attacks
- Creator
- Deb, Debayan
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
The accuracy, usability, and touchless acquisition of state-of-the-art automated face recognition (AFR) systems have led to their ubiquitous adoption in a plethora of domains, including mobile phone unlock, access control systems, and payment services. Despite impressive recognition performance, prevailing AFR systems remain vulnerable to the growing threat of face attacks, which can be launched in both physical and digital domains. Face attacks can be broadly classified into three categories: (i) spoof attacks: artifacts in the physical domain (e.g., 3D masks, eyeglasses, replayed videos); (ii) adversarial attacks: imperceptible noise added to probes to evade AFR systems; and (iii) digital manipulation attacks: entirely or partially modified photo-realistic faces produced by generative models. Each category comprises different attack types. For example, each spoof medium, e.g., 3D mask or makeup, constitutes one attack type. Likewise, in adversarial and digital manipulation attacks, each attack model, designed with unique objectives and losses, may be considered one attack type. Thus, the attack categories and types form a two-layer tree encompassing the diverse attacks, and such a tree will inevitably grow in the future. Given the growing dissemination of "fake news" and "deepfakes", the research community and social media platforms alike are pushing towards generalizable defenses against continuously evolving and sophisticated face attacks.
In this dissertation, we first propose a set of defense methods that achieve state-of-the-art performance in detecting attack types within individual attack categories, both physical (e.g., face spoofs) and digital (e.g., adversarial faces and digital manipulation), and then introduce a method for simultaneously safeguarding against all of them. First, in an effort to impart generalizability and interpretability to face spoof detection systems, we propose a new face anti-spoofing framework specifically designed to detect unknown spoof types, the Self-Supervised Regional Fully Convolutional Network (SSR-FCN), which is trained to learn local discriminative cues from a face image in a self-supervised manner. The proposed framework improves generalizability while maintaining the computational efficiency of holistic face anti-spoofing approaches (< 4 ms on an Nvidia GTX 1080Ti GPU). The method is also interpretable, since it localizes which parts of the face are labeled as spoofs. Experimental results show that SSR-FCN achieves a True Detection Rate (TDR) of 65% @ 2.0% False Detection Rate (FDR) on a dataset comprising 13 different spoof types under unknown attacks, while achieving competitive performance on standard face anti-spoofing benchmarks (Oulu-NPU, CASIA-MFSD, and Replay-Attack). Next, we address the problem of defending against adversarial attacks. We first propose AdvFaces, an automated adversarial face synthesis method that learns to generate minimal perturbations in the salient facial regions. Once trained, AdvFaces can automatically evade state-of-the-art face matchers with attack success rates as high as 97.22% and 24.30% at 0.1% FAR for obfuscation and impersonation attacks, respectively.
We then propose a new self-supervised adversarial defense framework, FaceGuard, that can automatically detect, localize, and purify a wide variety of adversarial faces without utilizing pre-computed adversarial training samples. FaceGuard automatically synthesizes diverse adversarial faces, enabling a classifier to learn to distinguish them from bona fide faces. Concurrently, a purifier attempts to remove the adversarial perturbations in the image space. FaceGuard achieves 99.81%, 98.73%, and 99.35% detection accuracy on LFW, CelebA, and FFHQ, respectively, across six unseen adversarial attack types. Finally, we take the first steps towards safeguarding AFR systems against face attacks in both physical and digital domains. We propose a new unified face attack detection framework, UniFAD, which automatically clusters similar attacks and employs multi-task learning to learn salient features that distinguish bona fides from coherent attack types. UniFAD can detect face attacks from 25 attack types across all three attack categories with TDR = 94.73% @ 0.2% FDR on a large fake face dataset, GrandFake. Further, UniFAD can identify whether attacks are adversarial, digitally manipulated, or contain spoof artifacts, with 97.37% classification accuracy.
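The TDR @ FDR operating point used to report these results can be computed directly from detection scores: pick the threshold at which the stated fraction of bona fide samples is falsely flagged, then measure the fraction of attacks caught. The helper name `tdr_at_fdr` is hypothetical, and the actual evaluation protocols may differ; this is a generic sketch assuming higher scores indicate attacks.

```python
import numpy as np

def tdr_at_fdr(scores_bonafide, scores_attack, target_fdr=0.002):
    """True Detection Rate at a fixed False Detection Rate, the
    operating point behind figures such as TDR = 94.73% @ 0.2% FDR.

    Higher score = more likely to be an attack.
    """
    # Threshold chosen so at most target_fdr of bona fides are
    # falsely flagged as attacks.
    thr = np.quantile(scores_bonafide, 1.0 - target_fdr)
    # Fraction of attacks scoring above that threshold.
    return float(np.mean(scores_attack > thr))
```

Reporting at a fixed FDR keeps methods comparable: a higher TDR is only meaningful if the false-alarm budget is held constant.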
- Title
- Towards a Robust Unconstrained Face Recognition Pipeline with Deep Neural Networks
- Creator
- Shi, Yichun
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Face recognition is a classic problem in computer vision and pattern recognition due to its wide applications in real-world problems such as access control, identity verification, physical security, and surveillance. Recent progress in deep learning techniques and access to large-scale face databases have led to a significant improvement in face recognition accuracy under constrained and semi-constrained scenarios. Deep neural networks have been shown to surpass human performance on Labeled Faces in the Wild (LFW), which consists of celebrity photos captured in the wild. However, in many applications, e.g., surveillance videos, where we cannot assume that the presented face is under controlled variations, the performance of current DNN-based methods drops significantly. The main challenges in such unconstrained face recognition include, but are not limited to, lack of labeled data, robust face normalization, discriminative representation learning, and the ambiguity of facial features caused by information loss. In this thesis, we propose a set of methods that address the above challenges in unconstrained face recognition systems. Starting from a classic deep face recognition pipeline, we review how each step in the pipeline can fail on low-quality, uncontrolled input faces, survey the solutions studied before, and then introduce our proposed methods. The methods proposed in this thesis are independent but compatible with each other. Experiments on several challenging benchmarks, e.g., IJB-C and IJB-S, show that the proposed methods improve the robustness and reliability of deep unconstrained face recognition systems. Our solution achieves state-of-the-art performance: 95.0% TAR @ 0.001% FAR on the IJB-C dataset and a 61.98% Rank-1 retrieval rate on the surveillance-to-booking protocol of the IJB-S dataset.
- Title
- UNDERSTANDING THE GENETIC BASIS OF HUMAN DISEASES BY COMPUTATIONALLY MODELING THE LARGE-SCALE GENE REGULATORY NETWORKS
- Creator
- Wang, Hao
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Many severe diseases, including breast cancer and Alzheimer's disease, are known to be caused by genetic disorders of the human genome. Understanding the genetic basis of human diseases plays a vital role in personalized medicine and precision therapy. However, pervasive spatial correlations between disease-associated SNPs have hindered the ability of traditional GWAS studies to discover causal SNPs and obscured the underlying mechanisms of disease-associated SNPs. Recently, diverse biological datasets generated by large data consortia have provided a unique opportunity to fill the gap between genotypes and phenotypes using biological networks representing the complex interplay between genes, enhancers, and transcription factors (TFs) in 3D space. Comprehensive delineation of the regulatory landscape calls for highly scalable computational algorithms to reconstruct 3D chromosome structures and mechanistically predict enhancer-gene links. In this dissertation, I first developed two algorithms, FLAMINGO and tFLAMINGO, to reconstruct high-resolution 3D chromosome structures. Their algorithmic advancements enable reconstruction of 3D chromosome structures at unprecedented resolution from highly sparse chromatin contact maps. I further developed two integrative algorithms, ComMUTE and ProTECT, to mechanistically predict long-range enhancer-gene links by modeling TF profiles. In extensive evaluations, these two algorithms demonstrate superior performance over existing algorithms in predicting enhancer-gene links and decoding TF regulatory grammars. The successful application of ComMUTE and ProTECT to 127 cell types not only provides a rich resource of gene regulatory networks but also sheds light on the mechanistic understanding of QTLs, disease-associated genetic variants, and higher-order chromatin interactions.
- Title
- Using Eventual Consistency to Improve the Performance of Distributed Graph Computation In Key-Value Stores
- Creator
- Nguyen, Duong Ngoc
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Key-value stores have gained increasing popularity due to their fast performance and simple data model. A key-value store usually consists of multiple replicas located in different geographical regions to provide higher availability and fault tolerance. Consequently, a protocol is employed to ensure that data are consistent across the replicas. The CAP theorem states the impossibility of simultaneously achieving three desirable properties in a distributed system: consistency, availability, and network partition tolerance. Since failures are the norm in distributed systems, and the capability to maintain service at an acceptable level in the presence of failures is a critical dependability and business requirement of any system, partition tolerance is a necessity. Consequently, the trade-off between consistency and availability (performance) is inevitable. Strong consistency is attained at the cost of slow performance, and fast performance is attained at the cost of weak consistency, resulting in a spectrum of consistency models suitable for different needs. Among these, sequential consistency and eventual consistency are two common models: the former is easier to program with but suffers from poor performance, whereas the latter provides higher performance but allows potential data anomalies. In this dissertation, we focus on what a designer should do when asked to solve a problem on a key-value store that provides eventual consistency. Specifically, we are interested in approaches that allow the designer to run applications on an eventually consistent key-value store and handle data anomalies if they occur during the computation. To that end, we investigate two options: (1) a detect-rollback approach, and (2) a stabilization approach.
In the first option, the designer identifies a correctness predicate, say Φ, and continues to run the application as if it were running on sequential consistency while our system monitors Φ. If Φ is violated (because the underlying key-value store provides only eventual consistency), the system rolls back to a state where Φ holds, and the computation resumes from there. In the second option, data anomalies are treated as state perturbations and handled by the convergence property of stabilizing algorithms. We choose LinkedIn's Voldemort key-value store as the example key-value store for our study. We run experiments with several graph-based applications on the Amazon AWS platform to evaluate the benefits of the two approaches. From the experimental results, we observe that, overall, both approaches benefit the applications when compared to running them on sequential consistency. However, stabilization provides higher benefits, especially in the aggressive stabilization mode, which trades more perturbations for no locking overhead. The results suggest that while there is some cost associated with making an algorithm stabilizing, there may be a substantial benefit in revising an existing algorithm for the problem at hand to make it stabilizing and thereby reduce the overall runtime under eventual consistency. There are several directions for extension. For the detect-rollback approach, we are working to develop a more general rollback mechanism for the applications and to improve the efficiency and accuracy of the monitors. For the stabilization approach, we are working to develop an analytical model for the benefits of eventual consistency in stabilizing programs. Our current work focuses on silent stabilization, and we plan to extend our approach to other variations of stabilization.
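The detect-rollback idea can be sketched as a monitor that snapshots states satisfying Φ and restores the most recent valid snapshot when a violation is observed. The class and method names below are illustrative only, not the dissertation's actual Voldemort-based implementation, and a real system would checkpoint and roll back distributed replica state rather than an in-memory dictionary.

```python
import copy

class DetectRollback:
    """Minimal sketch of detect-rollback under eventual consistency:
    run optimistically, checkpoint states where the correctness
    predicate Phi holds, and roll back on a detected violation."""

    def __init__(self, predicate):
        self.predicate = predicate   # Phi: state -> bool
        self.checkpoints = []

    def checkpoint(self, state):
        # Only snapshot states that satisfy Phi.
        if self.predicate(state):
            self.checkpoints.append(copy.deepcopy(state))

    def monitor(self, state):
        """Return the current state if Phi holds; otherwise return
        the most recent checkpoint where it held."""
        if self.predicate(state):
            return state
        return self.checkpoints[-1]
```

The stabilization alternative needs no monitor at all: anomalies are tolerated as perturbations that the algorithm's convergence property repairs, which is why it avoids the locking and rollback overhead.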
- Title
- VISIONING THE AGRICULTURE BLOCKCHAIN : THE ROLE AND RISE OF BLOCKCHAIN IN THE COMMERCIAL POULTRY INDUSTRY
- Creator
- Fennell, Chris
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Blockchain is an emerging technology that technologists and industry leaders are exploring as a way to revolutionize the agriculture supply chain. The problem is that human and ecological insights are needed to understand the complexities of how blockchain could fulfill these visions. In this work, I assert that blockchain's promising vision of traceability, immutability, and distributed properties presents both advancements and challenges for rural farming. This work wrestles with the more subtle ways blockchain technology would be integrated into existing infrastructure. Through interviews and participatory design workshops, I talked with an expansive set of stakeholders, including Amish farmers, contract growers, senior leadership, and field supervisors. This research illuminates that commercial poultry farming is such a complex and diffuse system that any overhaul of its core infrastructure will be difficult to "roll back" once blockchain is "rolled out." Through an HCI and sociotechnical systems perspective, drawing particular insights from Science and Technology Studies theories of infrastructure and breakdown, this dissertation asserts three main concerns. First, it uncovers the dominant narratives on the farm around revision and "roll back" of blockchain, connecting to theories of version control from computer science. Second, it uncovers that a core concern of the poultry supply chain is death, and I reveal the sociotechnical and material implications for the integration of blockchain. Finally, it discusses the meaning of "security" for the poultry supply chain, in which biosecurity is prioritized over cybersecurity, and how blockchain impacts these concerns. Together these findings point to significant implications for designers of blockchain infrastructure and for how rural workers will integrate the technology into the supply chain.
- Title
- Variational Bayes inference of Ising models and their applications
- Creator
- Kim, Minwoo
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Ising models, which originated in statistical physics, have been widely used in modeling spatial data and computer vision problems. However, statistical inference for this model and its application to many practical fields remain challenging due to the intractable normalizing constant in the likelihood. This dissertation consists of two main themes: (1) parameter estimation for the Ising model, and (2) structured variable selection based on the Ising model, both using variational Bayes (VB). In Chapter 1, we review the background, research questions, and development of the Ising model, variational Bayes, and other statistical concepts. An Ising model deals with a binary random vector in which each component depends on its neighbors; various versions of the model exist, depending on the parameterization and neighboring structure. In Chapter 2, for the two-parameter Ising model, we describe a novel VB-based procedure for parameter estimation that is computationally efficient and accurate compared to existing methods. The traditional pseudo maximum likelihood estimate (PMLE) provides accurate results only for a small number of neighbors. A Bayesian approach based on Markov chain Monte Carlo (MCMC) performs better even with a large number of neighbors, but its computational cost in time is quite high. Accordingly, we propose a VB method with two variational families: the mean-field (MF) Gaussian family and the bivariate normal (BN) family. Extensive simulation studies validate the efficacy of both families. Using our VB methods, computing times are remarkably decreased without deterioration in accuracy, and in some scenarios the output is much more accurate. In addition, we demonstrate theoretical properties of the proposed VB method under the MF family.
The main theoretical contribution of our work lies in establishing the consistency of the variational posterior for the Ising model with the true likelihood replaced by the pseudo-likelihood. Under certain conditions, we first derive the rates at which the true posterior based on the pseudo-likelihood concentrates around εn-shrinking neighborhoods of the true parameters. With a suitable bound on the Kullback-Leibler distance between the true and the variational posterior, we then establish the rate of contraction for the variational posterior and demonstrate that it also concentrates around εn-shrinking neighborhoods of the true parameter. In Chapter 3, we propose a Bayesian variable selection technique for a regression setup in which the regression coefficients hold structural dependency. We employ spike-and-slab priors on the regression coefficients as follows: (i) to capture the intrinsic structure, we first place an Ising prior on latent binary variables; if a latent variable takes the value one, the corresponding regression coefficient is active, otherwise it is inactive. (ii) Employing the spike-and-slab prior, we put Gaussian priors (slab) on the active coefficients, while inactive coefficients are zero with probability one (spike).
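The pseudo-likelihood that replaces the intractable Ising likelihood can be sketched concretely for a two-parameter model on a 2-D grid with spins in {-1, +1}: each site's conditional given its neighbors is P(x_i | x_N(i)) = exp(x_i(α + β s_i)) / (2 cosh(α + β s_i)), where s_i is the sum of the neighboring spins. The parameterization below (external field α, interaction β, 4-nearest-neighbor structure) is a common convention and is assumed here; the dissertation's exact setup may differ.

```python
import numpy as np

def log_pseudo_likelihood(x, alpha, beta):
    """Log pseudo-likelihood of a two-parameter Ising model on a 2-D
    grid of spins x in {-1, +1} with 4-nearest-neighbor structure.

    Each site contributes x_i * (alpha + beta * s_i) - log(2 cosh(...)),
    where s_i is the sum of its neighbors' spins.
    """
    # Neighbor sums via zero-padded shifts (boundary sites simply
    # have fewer neighbors).
    s = np.zeros_like(x, dtype=float)
    s[1:, :] += x[:-1, :]
    s[:-1, :] += x[1:, :]
    s[:, 1:] += x[:, :-1]
    s[:, :-1] += x[:, 1:]
    eta = alpha + beta * s
    return float(np.sum(x * eta - np.log(2.0 * np.cosh(eta))))
```

Because this objective is a sum of tractable per-site terms, it sidesteps the normalizing constant entirely, which is what makes both the PMLE and the pseudo-likelihood-based variational posterior computable.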