Search results
(21 - 39 of 39)
- Title
- Investigating the Role of Sensor Based Technologies to Support Domestic Activities in Sub-Saharan Africa
- Creator
- Chidziwisano, George Hope
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
In sub-Saharan Africa (SSA), homes face various challenges including insecurity, unreliable power supply, and extreme weather conditions. While the use of sensor-based technologies is increasing in industrialized countries, it is unclear how they can be used to support domestic activities in SSA. The availability of low-cost sensors and the widespread adoption of mobile phones present an opportunity to collect real-time data and utilize proactive methods to monitor these challenges. This dissertation presents three studies that build upon each other to explore the role of sensor-based technologies in SSA. I used a technology probes method to develop three sensor-based systems that support domestic security (M-Kulinda), power blackout monitoring (GridAlert), and poultry farming (NkhukuApp). I deployed M-Kulinda in 20 Kenyan homes, GridAlert in 18 Kenyan homes, and NkhukuProbe in 15 Malawian home-based chicken coops for one month. I used interview, observation, diary, and data logging methods to understand participants’ experiences using the probes. Findings from these studies suggest that people in Kenya and Malawi want to incorporate sensor-based technologies into their everyday activities, and they quickly find unexpected ways to use them. Participants’ interactions with the probes prompted detailed reflections about how they would integrate sensor-based technologies in their homes (e.g., monitoring non-digital tools). These reflections are useful for motivating new design concepts in HCI. I use these findings to motivate a discussion about unexplored areas that could benefit from sensor-based technologies. Further, I discuss recommendations for designing sensor-based technologies that support activities in some Kenyan and Malawian homes. This research contributes to HCI by providing design implications for sensor-based applications in Kenyan and Malawian homes, employing a technology probes method in a non-traditional context, and developing prototypes of three novel systems.
- Title
- I AM DOING MORE THAN CODING: A QUALITATIVE STUDY OF BLACK WOMEN HBCU UNDERGRADUATES’ PERSISTENCE IN COMPUTING
- Creator
- Benton, Amber V.
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
The purpose of my study is to explore why and how Black women undergraduates at historically Black colleges and universities (HBCUs) persist in computing. By centering the experiences of Black women undergraduates and their stories, this dissertation expands traditional, dominant ways of understanding student persistence in higher education. Critical Race Feminism (CRF) was applied as a conceptual framework to the stories of 11 Black women undergraduates in computing, and the small stories qualitative approach was used to examine the day-to-day experiences of Black women undergraduates at HBCUs as they persisted in their computing degree programs. The findings suggest that: (a) gender underrepresentation in computing affects Black women’s experiences, (b) computing culture at HBCUs directly affects Black women in computing, (c) Black women need access to resources and opportunities to persist in computing, (d) computing-related internships are beneficial professional opportunities but are also sites of gendered racism for Black women, (e) connectedness between Black people is innate but also needs to be fostered, (f) Black women want to engage in computing that contributes to social impact and community uplift, and (g) science identity is not a primary identity for Black women in computing. This paper also argues that disciplinary-focused efforts contribute to the persistence of Black women in computing.
- Title
- High-precision and Personalized Wearable Sensing Systems for Healthcare Applications
- Creator
- Tu, Linlin
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
The cyber-physical system (CPS) has been discussed and studied extensively since 2010. It provides various solutions for monitoring the user's physical and psychological health states, enhancing the user's experience, and improving their lifestyle. A variety of mobile internet devices with built-in sensors, such as accelerometers, cameras, PPG sensors, pressure sensors, and microphones, can be leveraged to build mobile cyber-physical applications that collect sensing data from the real world, process the data, communicate with internet services, and transform the data into behavioral and physiological models. The detected results can be used as feedback to help users understand their behavior, improve their lifestyle, or avoid danger. They can also be delivered to therapists to facilitate diagnosis. Designing CPS for health monitoring is challenging due to multiple factors. First of all, high estimation accuracy is necessary for health monitoring, yet some systems suffer from irregular noise. For example, PPG sensors for cardiac health state monitoring are extremely vulnerable to motion noise. Second, to include humans in the loop, health monitoring systems are required to be user-friendly; however, some systems involve cumbersome equipment for long periods of data collection, which is not feasible for daily monitoring. Most importantly, large-scale high-level health-related monitoring systems, such as systems for human activity recognition, require high accuracy and communication efficiency; however, with users' raw data uploaded to the server, centralized learning fails to protect users' private information and is communication-inefficient. The research introduced in this dissertation addresses these three significant challenges in developing health-related monitoring systems. We build a lightweight system for accurate heart rate measurement during exercise, design a smart in-home breathing training system with biofeedback via a virtual reality (VR) game, and propose federated learning via dynamic layer sharing for human activity recognition.
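A minimal sketch of the federated-averaging idea behind layer sharing: clients train locally, and the server averages only a designated subset of layers while the rest stay personalized. The layer names, the static sharing rule, and the toy "training" step are our illustrative assumptions; the dissertation's dynamic layer-sharing scheme selects which layers to share adaptively.

```python
import numpy as np

# Assumption: layers whose names appear in SHARED are aggregated globally;
# the remaining layers stay client-specific (personalized).
SHARED = {"conv1", "conv2"}          # shared feature extractor
PERSONAL = {"head"}                  # personalized classifier head

def local_update(params, rng):
    """Stand-in for a client's local training step (here: random drift)."""
    return {k: v + 0.01 * rng.standard_normal(v.shape) for k, v in params.items()}

def federated_round(clients):
    """Average only the shared layers across clients (FedAvg-style)."""
    for name in SHARED:
        mean = np.mean([c[name] for c in clients], axis=0)
        for c in clients:
            c[name] = mean.copy()    # broadcast the aggregate back
    return clients

rng = np.random.default_rng(0)
init = {"conv1": np.zeros((8, 3)), "conv2": np.zeros((8, 8)), "head": np.zeros((8, 6))}
clients = [{k: v.copy() for k, v in init.items()} for _ in range(4)]
for _ in range(10):                  # a few communication rounds
    clients = [local_update(c, rng) for c in clients]
    clients = federated_round(clients)
```

Sharing only the feature-extraction layers also reduces communication cost, which speaks to the accuracy/communication-efficiency trade-off the abstract raises.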
- Title
- Gender-related effects of advanced placement computer science courses on self-efficacy, belongingness, and persistence
- Creator
- Good, Jonathon Andrew
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
The underrepresentation of women in computer science has been a concern of educators for multiple decades. The low representation of women in computer science is a pattern from K-12 schools through the university level and the profession. One of the purposes of the introduction of the Advanced Placement Computer Science Principles (APCS-P) course in 2016 was to help broaden participation in computer science at the high school level. The design of APCS-P allowed teachers to present computer science from a broad perspective, allowing students to pursue problems of personal significance and allowing computing projects to take a variety of forms. The nationwide enrollment statistics for Advanced Placement Computer Science Principles in 2017 showed a higher proportion of female students (30.7%) than Advanced Placement Computer Science A (23.6%). However, it is unknown to what degree enrollment in these courses was related to students’ plans to enroll in future computer science courses. This correlational study examined how students’ enrollment in Advanced Placement Computer Science courses, along with student gender, predicted students’ sense of computing self-efficacy, belongingness, and expected persistence in computer science. A nationwide sample of 263 students from 10 APCS-P and 10 APCS-A courses participated in the study. Students completed pre- and post-surveys at the beginning and end of their Fall 2017 semester regarding their computing self-efficacy, belongingness, and plans to continue in computer science studies. Using hierarchical linear modeling due to the nested nature of the data within class sections, the researcher found that APCS course type was not predictive of self-efficacy, belongingness, or expectations to persist in computer science. The results suggested that female students’ self-efficacy declined over the course of the study; however, gender was not predictive of belongingness or expectations to persist in computer science. Students were found to have entered both courses with a high sense of self-efficacy, belongingness, and expectation to persist in computer science. These results suggest that students enrolled in both Advanced Placement Computer Science courses are already likely to pursue computer science. I also found that the type of APCS course in which students enroll does not relate to students’ interest in computer science. This suggests that educators should look beyond AP courses as a method of exposing students to computer science, possibly through efforts such as computational thinking and cross-curricular uses of computer science concepts and practices. Educators and administrators should also continue to examine whether there are structural biases in how students are directed to computer science courses. As for the drop in self-efficacy related to gender, this is in alignment with previous research suggesting that educators should carefully scaffold students’ initial experiences in the course so as not to negatively influence their self-efficacy. Further research should examine how specific pedagogical practices could influence students’ persistence, as the designation and curriculum of APCS-A or APCS-P alone may not capture the myriad ways in which teachers may be addressing gender inequity in their classrooms. Research can also examine how student interest in computer science is affected at an earlier age, as the APCS courses may be reaching students after they have already formed their opinions about computer science as a field.
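Hierarchical linear modeling handles the nesting the study describes (students within class sections) by giving each section its own random intercept. A random-intercept sketch using Python's statsmodels; the file and column names are hypothetical, and the study's actual model had more structure than this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data mirroring the study's nesting: students within sections.
df = pd.read_csv("apcs_survey.csv")  # columns (illustrative): self_efficacy,
                                     # course_type, gender, prior_knowledge, section_id

# Random-intercept model: the grouping term absorbs section-level variation,
# while fixed effects estimate course type, gender, and prior knowledge.
model = smf.mixedlm(
    "self_efficacy ~ course_type + gender + prior_knowledge",
    data=df,
    groups=df["section_id"],
)
result = model.fit()
print(result.summary())
```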
- Title
- Finding optimized bounding boxes of polytopes in d-dimensional space and their properties in k-dimensional projections
- Creator
- Shahid, Salman (Of Michigan State University)
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
Using minimal bounding boxes to encapsulate or approximate a set of points in d-dimensional space is a non-trivial problem that has applications in a variety of fields including collision detection, object rendering, high-dimensional databases, and statistical analysis, to name a few. While a significant amount of work has been done on the three-dimensional variant of the problem (i.e., finding the minimum-volume bounding box of a set of points in three dimensions), it is difficult to find a simple method that does the same for higher dimensions. Even in three dimensions, existing methods suffer from either high time complexity or suboptimal results traded for faster execution. In this thesis we present a new approach to find the optimized minimum bounding boxes of a set of points defining convex polytopes in d-dimensional space. The solution also gives the optimal bounding box in three dimensions with a much simpler implementation while significantly speeding up the execution time for a large number of vertices. The basis of the proposed approach is a series of unique properties of the k-dimensional projections that are leveraged into an algorithm. This algorithm works by constructing the convex hulls of a given set of points and optimizing the projections of those hulls in two-dimensional space using the new concept of Simultaneous Local Optimal. We show that the proposed algorithm provides significantly better performance than the current state-of-the-art approach in terms of time and accuracy. To illustrate the importance of the result in a real-world application, the optimized bounding box algorithm is used to develop a method for carrying out range queries in high-dimensional databases. This method uses data transformation techniques in conjunction with a set of heuristics to provide significant performance improvement.
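The two-dimensional projection step rests on a classic fact: a minimum-area bounding rectangle of a convex polygon has one side collinear with a hull edge. A sketch of that 2D building block follows (this is the textbook edge-alignment method, not the thesis's Simultaneous Local Optimal procedure; the function name is ours):

```python
import numpy as np
from scipy.spatial import ConvexHull

def min_area_rect(points):
    """Minimum-area bounding rectangle of 2D points.

    An optimal rectangle is flush with some convex-hull edge, so rotate
    the hull to align each edge with the x-axis and take the best
    axis-aligned box.
    """
    hull = points[ConvexHull(points).vertices]        # hull vertices, CCW
    best_area, best_box = np.inf, None
    for i in range(len(hull)):
        edge = hull[(i + 1) % len(hull)] - hull[i]
        theta = -np.arctan2(edge[1], edge[0])         # align edge with x-axis
        c, s = np.cos(theta), np.sin(theta)
        rot = hull @ np.array([[c, -s], [s, c]]).T
        lo, hi = rot.min(axis=0), rot.max(axis=0)
        area = np.prod(hi - lo)
        if area < best_area:
            best_area, best_box = area, (theta, lo, hi)
    return best_area, best_box

pts = np.random.default_rng(1).random((100, 2))
area, box = min_area_rect(pts)
```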
- Title
- Face Recognition : Representation, Intrinsic Dimensionality, Capacity, and Demographic Bias
- Creator
- Gong, Sixue
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
Face recognition is a widely adopted technology with numerous applications, such as mobile phone unlock, mobile payment, surveillance, social media, and law enforcement. There has been tremendous progress in enhancing the accuracy of face recognition systems over the past few decades, much of which can be attributed to deep learning. Despite this progress, several fundamental problems in face recognition remain unsolved. These problems include finding a salient representation, estimating intrinsic dimensionality, representation capacity, and demographic bias. With growing applications of face recognition, the need for an accurate, robust, compact, and fair representation is evident. In this thesis, we first develop algorithms to obtain practical estimates of the intrinsic dimensionality of face representations, and propose a new dimensionality reduction method to project feature vectors from ambient space to intrinsic space. Based on the study of intrinsic dimensionality, we then estimate the capacity of face representation, casting the face capacity estimation problem under the information-theoretic framework of the capacity of a Gaussian noise channel. Numerical experiments on unconstrained faces (IJB-C) provide a capacity upper bound of 27,000 for the FaceNet representation and 84,000 for the SphereFace representation at 1% FAR. In the second part of the thesis, we address the demographic bias problem in face recognition systems, where errors are lower on certain cohorts belonging to specific demographic groups. We propose two de-biasing frameworks that extract feature representations to improve fairness in face recognition. Experiments on benchmark face datasets (RFW, LFW, IJB-A, and IJB-C) show that our approaches are able to mitigate face recognition bias on various demographic groups (biasness drops from 6.83 to 5.07) as well as maintain competitive performance (i.e., 99.75% on LFW, and 93.70% TAR @ 0.1% FAR on IJB-C). Lastly, we explore the global distribution of deep face representations derived from within-class and cross-class correlations between image samples to enhance the discriminativeness of the face representation of each identity in the embedding space. Our new approach to face representation achieves state-of-the-art performance for both verification and identification tasks on benchmark datasets (99.78% on LFW, 93.40% on CPLFW, 98.41% on CFP-FP, 96.2% TAR @ 0.01% FAR and 95.3% Rank-1 accuracy on IJB-C). Since the primary techniques we employ in this dissertation are not specific to faces, we believe our research can be extended to other problems in computer vision, for example, general image classification and representation learning.
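The Gaussian-channel framing can be made concrete with the standard sphere-packing bound; this is the textbook form of the argument, not necessarily the dissertation's exact derivation. With per-dimension signal power $P$, noise power $N$, and representation dimension $d$, the number of distinguishable identities $C$ is bounded by a ratio of sphere volumes:

$$
C \;\le\; \frac{\operatorname{vol}\!\bigl(\sqrt{d(P+N)}\,\mathcal{B}^d\bigr)}{\operatorname{vol}\!\bigl(\sqrt{dN}\,\mathcal{B}^d\bigr)} \;=\; \left(1 + \frac{P}{N}\right)^{d/2},
$$

where $\mathcal{B}^d$ is the unit ball in $\mathbb{R}^d$. Capacity thus grows exponentially with embedding dimension and with the signal-to-noise ratio of the representation, which is why tighter intrinsic-dimensionality estimates matter for the capacity bound.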
- Title
- Evolving Phenotypically Plastic Digital Organisms
- Creator
- Lalejini, Alexander
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
The ability to dynamically respond to cues from the environment is a fundamental feature of most adaptive systems. In biological systems, changes to an organism based on environmental cues are called phenotypic plasticity. Indeed, phenotypic plasticity underlies many of the adaptive traits and developmental patterns found in nature and serves as a key mechanism for responding to spatially or temporally variable environments. Most computer programs require phenotypic plasticity, as they must respond dynamically to stimuli such as user input, sensor data, et cetera. As such, phenotypic plasticity also has practical applications in genetic programming, wherein we apply the natural principles of evolution to automatically synthesize computer programs rather than writing them by hand. In this dissertation, I achieve two synergistic aims: (1) I use populations of self-replicating computer programs (digital organisms) to empirically study the conditions under which adaptive phenotypic plasticity evolves and how its evolution shapes subsequent evolutionary outcomes; and (2) I transfer insights from biology to develop novel genetic programming techniques in order to evolve more responsive (i.e., phenotypically plastic) computer programs. First, I illustrate the importance of mutation rate, environmental change, and partially-plastic building blocks for the evolution of adaptive plasticity. Next, I show that adaptive phenotypic plasticity stabilizes populations against environmental change, allowing them to more easily retain novel adaptive traits. Finally, I improve our ability to evolve phenotypically plastic computer programs with three novel genetic programming techniques: (1) SignalGP, which provides mechanisms to control code expression based on environmental cues, (2) tag-based genetic regulation to adjust code expression based on current context, and (3) tag-accessed memory to provide more dynamic mechanisms for storing data.
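The tag-based techniques share one primitive: a signal carries a bit-string tag and triggers the program module whose tag matches most closely. A minimal sketch of that dispatch under our own simplified matching rule (Hamming similarity with a threshold); SignalGP's actual instruction set and regulation machinery are richer:

```python
TAG_BITS = 16

def similarity(a, b):
    """Fraction of matching bits between two integer-encoded tags."""
    return 1.0 - bin((a ^ b) & ((1 << TAG_BITS) - 1)).count("1") / TAG_BITS

def dispatch(signal_tag, functions, threshold=0.75):
    """Run the best-matching handler, if any matches well enough."""
    tag, handler = max(functions, key=lambda f: similarity(signal_tag, f[0]))
    if similarity(signal_tag, tag) >= threshold:
        return handler()
    return None  # signal ignored: no handler is close enough

functions = [
    (0b1111000011110000, lambda: "flee"),
    (0b0000111100001111, lambda: "eat"),
]
print(dispatch(0b1111000011110001, functions))  # -> "flee" (15/16 bits match)
```

Inexact matching is the point: mutations that perturb a tag degrade responsiveness gradually rather than breaking the reference outright, which keeps the search landscape evolvable.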
- Title
- Evolution of distributed behavior
- Creator
- Knoester, David B.
- Date
- 2011
- Collection
- Electronic Theses & Dissertations
- Description
In this dissertation, we describe a study in the evolution of distributed behavior, where evolutionary algorithms are used to discover behaviors for distributed computing systems. We define distributed behavior as that in which groups of individuals must both cooperate in working towards a common goal and coordinate their activities in a harmonious fashion. As such, communication among individuals is necessarily a key component of distributed behavior, and we have identified three classes of distributed behavior that require communication: data-driven behaviors, where semantically meaningful data is transmitted between individuals; temporal behaviors, which are based on the relative timing of individuals' actions; and structural behaviors, which are responsible for maintaining the underlying communication network connecting individuals. Our results demonstrate that evolutionary algorithms can discover groups of individuals that exhibit each of these different classes of distributed behavior, and that these behaviors can be discovered both in isolation (e.g., evolving a purely data-driven algorithm) and in concert (e.g., evolving an algorithm that includes both data-driven and structural behaviors). As part of this research, we show that evolutionary algorithms can discover novel heuristics for distributed computing, and hint at a new class of distributed algorithm enabled by such studies. The majority of this research was conducted with the Avida platform for digital evolution, a system that has been proven to aid researchers in understanding the biological process of evolution by natural selection. For this reason, the results presented in this dissertation provide the foundation for future studies that examine how distributed behaviors evolved in nature. The close relationship between evolutionary biology and evolutionary algorithms thus aids our study of evolving algorithms for the next generation of distributed computing systems.
- Title
- EXTENDED REALITY (XR) & GAMIFICATION IN THE CONTEXT OF THE INTERNET OF THINGS (IOT) AND ARTIFICIAL INTELLIGENCE (AI)
- Creator
- Pappas, Georgios
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
The present research develops a holistic framework for, and way of thinking about, Deep Technologies related to Gamification, eXtended Reality (XR), the Internet of Things (IoT), and Artificial Intelligence (AI). Starting with the concept of gamification and the immersive technology of XR, we create interconnections with IoT and AI implementations. While each constituent technology has its own unique impact, our approach uniquely addresses the combinational potential of these technologies, which may have greater impact than any technology on its own. To approach the research problem more efficiently, the methodology divides it into smaller parts. For each part of the research problem, novel applications were designed and developed, including gamified tools, serious games, and AR/VR implementations. We apply the proposed framework in two different domains: autonomous vehicles (AVs) and distance learning. Specifically, in Chapter 2, an innovative hybrid tool for distance learning is showcased where, among others, the fusion with IoT provides a novel pseudo-multiplayer mode. This mode may transform advanced asynchronous gamified tools into synchronous ones by enabling or disabling virtual events and phenomena, enhancing the student experience. Next, in Chapter 3, along with gamification, the combination of XR with IoT data streams is presented, this time in an automotive context. We showcase how this fusion of technologies provides low-latency monitoring of vehicle characteristics, and how this can be visualized in augmented and virtual reality using low-cost hardware and services. This part of our proposed framework provides a methodology for creating any type of Digital Twin with near real-time data visualization. Following that, in Chapter 4 we establish the second part of the suggested holistic framework, where Virtual Environments (VEs) in general can work as synthetic data generators and thus be a great source of artificial data suitable for training AI models. This part of the research includes two novel implementations: the Gamified Digital Simulator (GDS) and the Virtual LiDAR Simulator. Having established the holistic framework, in Chapter 5 we “zoom in” to gamification, exploring deeper aspects of virtual environments, and discuss how serious games can be combined with other facets of virtual layers (cyber ranges, virtual learning environments) to provide enhanced training and advanced learning experiences. Lastly, in Chapter 6, “zooming out” from gamification, an additional enhancement layer is presented. We showcase the importance of human-centered design via an implementation that simulates AV-pedestrian interactions in a virtual and safe environment.
- Title
- EFFICIENT AND PORTABLE SPARSE SOLVERS FOR HETEROGENEOUS HIGH PERFORMANCE COMPUTING SYSTEMS
- Creator
- Rabbi, Md Fazlay
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
Sparse matrix computations arise in the form of the solution of systems of linear equations, matrix factorization, linear least-squares problems, and eigenvalue problems in numerous computational disciplines ranging from quantum many-body problems and computational fluid dynamics to machine learning and graph analytics. The scale of problems in these scientific applications typically necessitates execution on massively parallel architectures. Moreover, due to the irregular data access patterns and low arithmetic intensities of sparse matrix computations, achieving high performance and scalability is very difficult. These challenges are further exacerbated by the increasingly complex deep memory hierarchies of modern architectures, which typically integrate several layers of memory storage. Data movement is a major bottleneck for both efficiency and energy consumption in large-scale sparse matrix computations. Minimizing data movement across memory layers and overlapping data movement with computations are key to achieving high performance in sparse matrix computations. My thesis work contributes towards systematically identifying algorithmic challenges of sparse solvers and providing optimized, high-performing solutions for both shared memory architectures and heterogeneous architectures by minimizing data movement between different memory layers. For this purpose, we first introduce a shared memory task-parallel framework focused on optimizing entire solvers rather than a specific kernel. As most recent (or upcoming) supercomputers are equipped with Graphics Processing Units (GPUs), we evaluate the efficacy of directive-based programming models (i.e., OpenMP and OpenACC) in offloading computations to the GPU to achieve performance portability. Inspired by the promising results of this work, we port and optimize our shared memory task-parallel framework for GPU-accelerated systems to execute problem sizes that exceed device memory.
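The irregular access pattern at the heart of these kernels is easy to see in a plain CSR sparse matrix-vector product, the workhorse of iterative sparse solvers. A minimal sketch (ours, for illustration; the thesis framework operates on whole solvers, not this single kernel):

```python
import numpy as np

def spmv_csr(row_ptr, col_idx, vals, x):
    """y = A @ x for A stored in Compressed Sparse Row (CSR) form.

    The gather x[col_idx[j]] is the irregular, cache-unfriendly access
    that makes data movement, rather than arithmetic, the bottleneck.
    """
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for j in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[j] * x[col_idx[j]]
    return y

# 3x3 example: [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
row_ptr = np.array([0, 2, 3, 5])
col_idx = np.array([0, 2, 1, 0, 2])
vals    = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
print(spmv_csr(row_ptr, col_idx, vals, np.array([1.0, 1.0, 1.0])))  # [5. 2. 8.]
```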
- Title
- Dissertation: Novel parallel algorithms and performance optimization techniques for the multi-level fast multipole algorithm
- Creator
- Lingg, Michael
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
Since Sir Isaac Newton determined that characterizing the orbits of celestial objects requires considering the gravitational interactions among all bodies in the system, the N-body problem has been a very important tool in physics simulations. Expanding on the early use of the classical N-body problem for gravitational simulations, the method has proven invaluable in fluid dynamics, molecular simulations, and data analytics. The extension of the classical N-body problem to solve the Helmholtz equation for groups of particles with oscillatory interactions has allowed for simulations that assist in antenna design, radar cross section prediction, reduction of engine noise, and medical devices that utilize sound waves, to name a sample of possible applications. While N-body simulations are extremely valuable, the computational cost of directly evaluating interactions among all pairs grows quadratically with the number of particles, rendering large-scale simulations infeasible even on the most powerful supercomputers. The Fast Multipole Method (FMM) and the broader class of tree algorithms it belongs to have significantly reduced the computational complexity of N-body simulations while providing controllable accuracy guarantees. While FMM provided a significant boost, N-body problems tackled by scientists and engineers continue to grow in size, necessitating the development of efficient parallel algorithms and implementations to run on supercomputers. The Laplace variant of FMM, which is used to treat the classical N-body problem, has been extensively researched and optimized to the extent that Laplace FMM codes can scale to tens of thousands of processors for simulations involving over a trillion particles. In contrast, the Multi-Level Fast Multipole Algorithm (MLFMA), which is aimed at the Helmholtz kernel variant of FMM, lags significantly behind in efficiency and scaling. The added complexity of an oscillatory potential results in much more intricate data dependency patterns and load balancing requirements among parallel processes, making algorithms and optimizations developed for Laplace FMM mostly ineffective for MLFMA. In this thesis, we propose novel parallel algorithms and performance optimization techniques to improve the performance of MLFMA on modern computer architectures. Proposed algorithms and performance optimizations range from efficient leveraging of the memory hierarchy on multi-core processors, to an investigation of the benefits of the emerging concept of task parallelism for MLFMA, to significant reductions of communication overheads and load imbalances in large-scale computations. Parallel algorithms for distributed memory parallel MLFMA are also accompanied by detailed complexity analyses and performance models. We describe efficient implementations of all proposed algorithms and optimization techniques, and analyze their impact in detail. In particular, we show that our work yields significant speedups and much improved scalability compared to existing methods for MLFMA, both in large geometries designed to test the range of the problem space and in real-world problems.
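For scale, the direct evaluation that FMM-type methods avoid sums the oscillatory Helmholtz kernel e^{ikr}/r over all pairs, an O(N^2) loop. A small sketch of that baseline (illustrative only; the wavenumber and point distribution are arbitrary choices of ours):

```python
import numpy as np

def direct_helmholtz(points, charges, k=2.0 * np.pi):
    """phi_i = sum_{j != i} q_j * exp(i*k*r_ij) / r_ij.

    Every target interacts with every source: O(N^2) work, which
    FMM-type methods reduce by factoring far-field interactions
    through a hierarchical tree decomposition.
    """
    n = len(points)
    phi = np.zeros(n, dtype=complex)
    for i in range(n):
        mask = np.arange(n) != i                      # exclude self-interaction
        r = np.linalg.norm(points[mask] - points[i], axis=1)
        phi[i] = np.sum(charges[mask] * np.exp(1j * k * r) / r)
    return phi

rng = np.random.default_rng(0)
pts, q = rng.random((500, 3)), rng.random(500)
phi = direct_helmholtz(pts, q)
```

The oscillatory factor exp(i*k*r) is exactly what makes the Helmholtz case harder to compress hierarchically than the Laplace kernel 1/r, which is the efficiency gap the thesis targets.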
- Title
- Discrete de Rham-Hodge Theory
- Creator
- Zhao, Rundong
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
We present a systematic treatment of 3D shape analysis based on the well-established de Rham-Hodge theory in differential geometry and topology. The computational tools we developed are widely applicable to research areas such as computer graphics, computer vision, and computational biology. We extensively tested them in the context of 3D structure analysis of biological macromolecules to demonstrate the efficacy and efficiency of our method in potential applications. Our contributions are summarized in the following aspects. First, we present a compendium of discrete Hodge decompositions of vector fields, which provides the primary building block of the de Rham-Hodge theory for computations performed on the commonly used tetrahedral meshes embedded in 3D Euclidean space. Second, we present a real-world application of the above computational tool to 3D shape analysis of biological macromolecules. Finally, we extend the above method to an evolutionary de Rham-Hodge method to provide a unified paradigm for the multiscale geometric and topological analysis of evolving manifolds constructed from a filtration, which induces a family of evolutionary de Rham complexes. Our work on the decomposition of vector fields, spectral shape analysis on static shapes, and evolving shapes has already shown its effectiveness in biomolecular applications and will lead to a rich set of features for machine learning-based shape analysis currently under development.
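The decomposition at the core of this toolkit is the (Helmholtz-)Hodge splitting of a vector field on a bounded 3D domain into curl-free, divergence-free, and harmonic parts; in standard continuous notation (the discrete version replaces these differential operators with mesh-based linear maps on tetrahedral meshes):

$$
\mathbf{v} \;=\; \nabla \varphi \;+\; \nabla \times \mathbf{A} \;+\; \mathbf{h},
\qquad \nabla \cdot \mathbf{h} = 0,\quad \nabla \times \mathbf{h} = \mathbf{0},
$$

where $\varphi$ is a scalar potential and $\mathbf{A}$ a vector potential; under suitable boundary conditions the three components are mutually $L^2$-orthogonal, and the dimension of the harmonic space reflects the topology of the domain.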
- Title
- Deep Convolutional Networks for Modeling Geo-Spatio-Temporal Relationships and Extremes
- Creator
- Wilson, Tyler
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
Geo-spatio-temporal data are valuable for a broad range of applications including traffic forecasting, weather prediction, detection of epidemic outbreaks, and crime monitoring. Data-driven approaches to these problems must address several fundamental challenges, such as handling geo-spatio-temporal relationships and extreme events. Another recent technological shift has been the success of deep learning, especially in applications such as computer vision, speech recognition, and natural language processing. In this work, we argue that deep learning is a promising approach for many geo-spatio-temporal problems and highlight how it can be used to address the challenges of modeling geo-spatio-temporal relationships and extremes. Though previous research has established techniques for modeling spatio-temporal relationships, these approaches are often limited to gridded spatial data with fixed-length feature vectors and consider only spatial relationships among the features, while ignoring the relationships among model parameters. We begin by describing how the spatial and temporal relationships for non-gridded spatial data can be modeled simultaneously by coupling a graph convolutional network with a long short-term memory (LSTM) network. Unlike previous research, our framework treats the adjacency matrix associated with the spatial data as a model parameter that can be learned from data, with constraints on its sparsity and rank to reduce the number of estimated parameters. Further, we show that the learned adjacency matrix may reveal useful information about the dominant spatial relationships that exist within the data. Second, we explore the varieties of spatial relationships that may exist in a geo-spatial prediction task. Specifically, we distinguish between spatial relationships among predictors and spatial relationships among model parameters at different locations. We demonstrate an approach for modeling spatial dependencies among model parameters using graph convolution and provide guidance on when convolution of each type can be effectively applied. We evaluate our proposed approach on climate downscaling and weather prediction tasks. Next, we introduce DeepGPD, a novel deep learning framework for predicting the distribution of geo-spatio-temporal extreme events. We draw on research in extreme value theory and use the generalized Pareto distribution (GPD) to model the distribution of excesses over a threshold. The GPD is integrated into our deep learning framework to learn the distribution of future excess values while incorporating the geo-spatio-temporal relationships present in the data. This requires a novel reparameterization of the GPD to ensure that its constraints are satisfied by the outputs of the neural network. We demonstrate the effectiveness of our proposed approach on a real-world precipitation data set. DeepGPD also employs a deep set architecture to handle the variable-sized feature sets corresponding to excess values from previous time steps as its predictors. Finally, we extend the DeepGPD formulation to simultaneously predict the distribution of extreme events and accurately infer their point estimates. Doing so requires modeling the full distribution of the data, not just its extreme values. We propose DEMM, a deep mixture model for modeling the distribution of both excess and non-excess values. To ensure the point estimate of DEMM is a feasible value, new constraints on the output of the neural network are introduced, which requires a new reparameterization of the model parameters of the GPD. We conclude by discussing possibilities for further research at the intersection of deep learning and geo-spatio-temporal data.
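One way to see the reparameterization problem: the GPD density over excesses y >= 0 requires scale sigma > 0 (and, for heavy-tailed data, shape xi > 0), but a network head outputs unconstrained reals. A common remedy, sketched below with softplus transforms and the GPD negative log-likelihood; the dissertation's exact construction may differ:

```python
import torch
import torch.nn.functional as F

def gpd_params(raw):
    """Map unconstrained network outputs to valid GPD parameters.

    raw[..., 0] -> shape xi > 0, raw[..., 1] -> scale sigma > 0.
    Softplus enforces positivity while keeping gradients alive.
    """
    xi = F.softplus(raw[..., 0]) + 1e-6
    sigma = F.softplus(raw[..., 1]) + 1e-6
    return xi, sigma

def gpd_nll(raw, y):
    """Negative log-likelihood of excesses y >= 0 under GPD(xi, sigma)."""
    xi, sigma = gpd_params(raw)
    return torch.mean(torch.log(sigma)
                      + (1.0 + 1.0 / xi) * torch.log1p(xi * y / sigma))

raw = torch.randn(128, 2, requires_grad=True)  # stand-in for a network head
y = torch.rand(128) * 5.0                      # observed excesses over threshold
loss = gpd_nll(raw, y)
loss.backward()                                # trainable end to end
```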
- Title
- Contributions to Fingerprint Recognition
- Creator
- Engelsma, Joshua James
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
From the early days of the mid-to-late nineteenth century, when scientific research first began to focus on fingerprints, to the present-day fingerprint recognition systems we find deployed on our day-to-day devices, the science of fingerprint recognition has come a long way. In spite of this progress, there remain challenging problems to be solved. This thesis highlights a few of these problems and proposes solutions to address them. One area of further research that must be conducted on fingerprint recognition systems is that of robust, operational evaluations. In chapter two of this thesis, we show how the current practices of using calibration patterns to evaluate fingerprint readers are limited. We then propose a realistic fake finger called the Universal Target. The Universal Target is a realistic, 3D fake finger (or phantom) which can be imaged by all major types of fingerprint sensing technologies. We show the entire manufacturing (molding and casting) process for fabricating Universal Targets. Then, we show a series of evaluations which demonstrate how Universal Targets can be used to operationally evaluate current commercial fingerprint readers. Our Universal Target is a significant step forward in enabling more realistic, standardized evaluations of fingerprint readers. In our third chapter, we shift gears from improving the evaluation standards of fingerprint readers to instead focus on the security of fingerprint readers. In particular, we turn our attention towards detecting fake fingerprint (spoof) attacks. To do so, we open source a fingerprint reader (built from low-cost, ubiquitous components) called RaspiReader. RaspiReader is a high-resolution fingerprint reader customized with both direct-view imaging and FTIR imaging in order to better detect fingerprint spoofs. We show through a number of experiments that RaspiReader enables state-of-the-art fingerprint spoof detection accuracy. We also demonstrate that RaspiReader enables better generalization to what are known as "unseen attacks" (those attacks which were not seen during training of the spoof detector). Finally, we show that fingerprints captured by RaspiReader are completely compatible with images captured by legacy fingerprint readers for matching. In chapter four, we move on to propose a major improvement to the fingerprint feature extraction and matching sub-modules of fingerprint recognition systems. In particular, we propose a deep network, called DeepPrint, to extract a 200-byte fixed-length fingerprint representation. While prevailing fingerprint matchers primarily utilize minutiae points and expensive graph matching algorithms for comparison, two DeepPrint representations can be compared with only 192 multiplications and 191 additions. This is extremely useful for large-scale search, where potentially billions of pairwise fingerprint comparisons must be made. The DeepPrint representation also enables practical encrypted matching using a fully homomorphic encryption scheme. This enables better protection of the fingerprint templates which are stored in the database. While discriminative fixed-length representations are available for both face and iris recognition, such a representation has eluded fingerprint recognition. This chapter aims to fill that void. Finally, we conclude our thesis by working to extend fingerprint recognition to all ages. While current fingerprint recognition systems are being used by billions of teenagers and adults around the world, the youngest among us remain disenfranchised. In particular, modern-day fingerprint recognition systems do not work well on infants and young children. In this penultimate chapter, we aim to rectify this major shortcoming. To that end, we prototype a high-resolution (1900 ppi) infant fingerprint reader. Then, we track and fingerprint 315 infants (under the age of 3 months at enrollment) at the Dayalbagh Children's Hospital in Agra, India over the course of 1 year (4 different sessions). To match the infant fingerprints, we develop our own high-resolution infant fingerprint matcher. Our experimental results demonstrate significant promise for the extension of fingerprint recognition to all ages. This work has the potential for major global good, as all young infants and children could be given a verifiable digital identity for better vaccination tracking as a child and for government benefits and assistance as an adult. In summary, this thesis makes major contributions to the entire end-to-end fingerprint recognition system and extends its use case to all ages.
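The stated comparison cost (192 multiplications, 191 additions) is exactly an inner product of two 192-dimensional vectors, which is what makes fixed-length templates search-friendly. A toy illustration of that scaling behavior (the dimension comes from the abstract; everything else here is our own stand-in, not DeepPrint):

```python
import numpy as np

D = 192  # fixed-length embedding dimension cited for DeepPrint

def match_score(a, b):
    """Similarity of two templates: a single dot product,
    i.e., D multiplications and D - 1 additions."""
    return float(np.dot(a, b))

rng = np.random.default_rng(0)
gallery = rng.standard_normal((1_000_000, D)).astype(np.float32)
probe = rng.standard_normal(D).astype(np.float32)

# Large-scale search collapses to one matrix-vector product over the gallery,
# something minutiae-based graph matching cannot do.
scores = gallery @ probe
best = int(np.argmax(scores))
```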
- Title
- Computational Frameworks for Indel-Aware Evolutionary Analysis using Large-Scale Genomic Sequence Data
- Creator
- Wang, Wei
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
With the development of sequencing techniques, genetic sequencing data has been extensively used in evolutionary studies. The phylogenetic reconstruction problem, the reconstruction of evolutionary history from biomolecular sequences, is a fundamental problem. The evolutionary relationship between organisms is often represented by a phylogeny, which is a tree or network representation. The most widely used approach for reconstructing phylogenies from sequencing data involves two phases: multiple sequence alignment (MSA) and phylogenetic reconstruction from the aligned sequences. As the amount of biomolecular sequence data increases, it has become a major challenge to develop efficient and accurate computational methods for phylogenetic analyses of large-scale sequencing data. Due to the complexity of the phylogenetic reconstruction problem in modern phylogenetic studies, traditional sequence-based phylogenetic analysis methods involve many over-simplified assumptions. In this thesis, we describe our contributions in relaxing some of these over-simplified assumptions in phylogenetic analysis. Insertion and deletion events, referred to as indels, carry much phylogenetic information but are often ignored in the reconstruction of phylogenies. We take into account indel uncertainty in multiple phylogenetic analyses by applying resampling and re-estimation. Another over-simplified assumption we address is adopted by many commonly used non-parametric algorithms for the resampling of biomolecular sequences: that all sites in an MSA evolve independently and identically distributed (i.i.d.). Many evolutionary events, such as recombination and hybridization, may produce intra-sequence and functional dependence in biomolecular sequences that violates this assumption. We introduce SERES, a resampling algorithm for biomolecular sequences that can produce resampled replicates that preserve intra-sequence dependence. We describe the application of the SERES resampling and re-estimation approach to two classical problems: multiple sequence alignment support estimation and recombination-aware local genealogical inference. We show that these two statistical inference problems greatly benefit from the indel-aware resampling and re-estimation approach and the preservation of intra-sequence dependence. A major drawback of SERES is that it requires parameters to ensure the synchronization of random walks on unaligned sequences. We introduce RAWR, a non-parametric resampling method designed for phylogenetic tree support estimation that does not require extra parameters. We show that the RAWR-based resampling and re-estimation method produces comparable or typically better performance than the traditional bootstrap approach on the phylogenetic tree support estimation problem. We further relax the commonly used assumption of a tree-like phylogeny. Evolutionary history is usually considered a tree structure, and evolutionary events that cause reticulated gene flow are ignored. Previous studies show that alignment uncertainty greatly impacts downstream tree inference and learning; however, there is little discussion about the impact of MSA uncertainty on phylogenetic network reconstruction. We show evidence that the errors introduced in MSA estimation decrease the accuracy of the inferred phylogenetic network, and that an indel-aware reconstruction method is needed for phylogenetic network analysis. In this dissertation, we introduce our contributions to phylogenetic estimation using biomolecular sequence data involving complex evolutionary histories, such as sequence insertion and deletion processes and non-tree-like evolution.
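To see why a walk-based resampler preserves local dependence where the classical bootstrap does not: the bootstrap draws sites independently, destroying adjacency, while a random walk emits runs of neighboring sites. A toy sketch of that contrast (our simplification; SERES and RAWR operate on sequences with alignment-aware bookkeeping, and the reversal probability here is an arbitrary choice):

```python
import random

def bootstrap_sites(n_sites, rng):
    """Classical bootstrap: i.i.d. site draws destroy adjacency structure."""
    return [rng.randrange(n_sites) for _ in range(n_sites)]

def random_walk_sites(n_sites, rng, p_reverse=0.1):
    """Walk-based resampling: emit the visited site at each step, so
    replicates consist of contiguous runs that keep neighboring sites
    (and hence intra-sequence dependence) together."""
    pos, step, out = rng.randrange(n_sites), 1, []
    while len(out) < n_sites:
        out.append(pos)
        if rng.random() < p_reverse:     # occasionally reverse direction
            step = -step
        if not 0 <= pos + step < n_sites:
            step = -step                 # bounce off the sequence ends
        pos += step
    return out

rng = random.Random(0)
print(bootstrap_sites(20, rng))     # scattered indices
print(random_walk_sites(20, rng))   # contiguous runs
```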
- Title
- CS1 AND GENDER: UNDERSTANDING EFFECTS OF BACKGROUND AND SELF-EFFICACY ON ACHIEVEMENT AND INTEREST
- Creator
- Sands, Philip
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
Over the past 20 years, the field of computer science has experienced a growth in student interest. Despite this increase in participation rates, longstanding gender gaps persist in computer science. Recent research has examined a wide variety of individual factors (e.g., self-efficacy, sense of belonging, etc.) that impact student interest and achievement in computer science; however, these factors are rarely considered in the context of existing learning theories. In this correlational study, I explored the relationship between prior knowledge of computer programming, self-efficacy, and the sources of self-efficacy as they differed by gender in a theoretical model of achievement and interest for students in first-year computer science (CS1) courses. This model was based on prior work from Bandura (1997) and others exploring self-efficacy and social cognitive theory in the context of mathematics and science fields. Using cross-sectional data from N=182 CS1 students at two universities, structural regressions were conducted between factors impacting CS1 students across the entire population and for men (N=108) and women (N=70) individually. This data was then used to address the following research questions. (1A) How do prior knowledge of computer programming, the sources of self-efficacy, and self-efficacy for computing predict CS1 achievement and student intentions to continue study in CS? (1B) How does self-efficacy mediate the relationship between student prior knowledge of computer programming and achievement in CS1? (1C) How are those relationships moderated by gender? (2) How does feedback in the form of student grades impact intention to continue in CS when considering gender as a moderating factor? For all students, student self-efficacy for CS positively impacted CS1 achievement and post-CS1 interest. Aligning with past research, self-efficacy was derived largely from mastery experiences, with vicarious experiences and social persuasions also contributing to a moderate degree. Social persuasions had a negative effect on self-efficacy, which diverged from research in other fields. The relationship between prior knowledge of computer programming and CS1 achievement was not mediated by self-efficacy and had a small positive effect. For women, vicarious experiences played a stronger role in defining student self-efficacy in CS. Additionally, while the importance of self-efficacy on achievement was similar to that for men, self-efficacy and achievement both played a much stronger role in determining student interest in CS for women. All these findings are in need of further exploration, as the analysis was underpowered due to a small, COVID-19-impacted sample size. Future work should focus on the role of feedback on student self-efficacy, the potential misalignment of CS1 feedback and social network feedback, and interventions that address student beliefs about CS abilities to increase opportunities for authentic mastery and vicarious experiences.
- Title
- Automated Speaker Recognition in Non-ideal Audio Signals Using Deep Neural Networks
- Creator
- Chowdhury, Anurag
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
Speaker recognition entails the use of the human voice as a biometric modality for recognizing individuals. While speaker recognition systems are gaining popularity in consumer applications, most of these systems are negatively affected by non-ideal audio conditions, such as audio degradations, multi-lingual speech, and audio of varying duration. This thesis focuses on developing speaker recognition systems robust to non-ideal audio conditions. Firstly, a 1-Dimensional Convolutional Neural Network (1D-CNN) is developed to extract noise-robust speaker-dependent speech characteristics from the Mel Frequency Cepstral Coefficients (MFCC). Secondly, the 1D-CNN-based approach is extended to develop a triplet-learning-based feature-fusion framework, called 1D-Triplet-CNN, for improving speaker recognition performance by judiciously combining MFCC and Linear Predictive Coding (LPC) features. Our hypothesis rests on the observation that MFCC and LPC capture two distinct aspects of speech: speech perception and speech production. Thirdly, a time-domain filterbank called DeepVOX is learned from vast amounts of raw speech audio to replace commonly used hand-crafted filterbanks, such as the Mel filterbank, in speech feature extractors. Finally, a vocal style encoding network called DeepTalk is developed to learn speaker-dependent behavioral voice characteristics to improve speaker recognition performance. The primary contribution of the thesis is the development of deep learning-based techniques to extract discriminative, noise-robust physical and behavioral voice characteristics from non-ideal speech audio. A large number of experiments conducted on the TIMIT, NTIMIT, SITW, NIST SRE (2008, 2010, and 2018), Fisher, VOXCeleb, and JukeBox datasets convey the efficacy of the proposed techniques and their importance in improving speaker recognition performance in non-ideal audio conditions.
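The front half of that pipeline, MFCC extraction followed by 1D convolutions over the time axis, can be sketched in a few lines. The layer sizes, librosa front end, and example clip are our illustrative choices, not the 1D-CNN architecture from the thesis:

```python
import librosa
import torch
import torch.nn as nn

# MFCC front end: 40 coefficients over time from a mono waveform.
y, sr = librosa.load(librosa.ex("trumpet"))          # any audio clip works here
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)   # shape (40, T)

# Toy 1D-CNN embedding network: convolve along time, pool, project.
net = nn.Sequential(
    nn.Conv1d(40, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, 128),                              # speaker embedding
)
x = torch.from_numpy(mfcc).float().unsqueeze(0)      # (1, 40, T)
embedding = net(x)                                   # (1, 128)
```

In a triplet-learning setup like 1D-Triplet-CNN, embeddings such as this are trained so that same-speaker pairs land closer together than different-speaker pairs.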
- Title
- Applying evolutionary computation techniques to address environmental uncertainty in dynamically adaptive systems
- Creator
- Ramirez, Andres J.
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
A dynamically adaptive system (DAS) observes itself and its execution environment at run time to detect conditions that warrant adaptation. If an adaptation is necessary, then a DAS changes its structure and/or behavior to continuously satisfy its requirements, even as its environment changes. It is challenging, however, to systematically and rigorously develop a DAS due to environmental uncertainty. In particular, it is often infeasible for a human to identify all possible combinations of system and environmental conditions that a DAS might encounter throughout its lifetime. Nevertheless, a DAS must continuously satisfy its requirements despite the threat that this uncertainty poses to its adaptation capabilities. This dissertation proposes a model-based framework that supports the specification, monitoring, and dynamic reconfiguration of a DAS to explicitly address uncertainty. The proposed framework uses goal-oriented requirements models and evolutionary computation techniques to derive and fine-tune utility functions for requirements monitoring in a DAS, identify combinations of system and environmental conditions that adversely affect the behavior of a DAS, and generate adaptations on demand to transition the DAS to a target system configuration while preserving system consistency. We demonstrate the capabilities of our model-based framework by applying it to an industrial case study involving a remote data mirroring network that efficiently distributes data even as network links fail and messages are dropped, corrupted, or delayed.
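A compact way to see "evolutionary computation to fine-tune utility functions": treat the utility function's parameters as a genome and improve them by mutation and selection. A generic (1+λ) evolutionary loop under our own toy fitness function; the framework's actual encodings, objectives, and goal models are richer:

```python
import random

def fitness(genome):
    """Toy stand-in: how well a parameterized utility function matches
    observed requirement satisfaction (here, distance to a hidden target)."""
    target = [0.8, 0.2, 0.5]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(n_params=3, lam=20, sigma=0.1, generations=200, seed=0):
    rng = random.Random(seed)
    parent = [rng.random() for _ in range(n_params)]
    for _ in range(generations):
        # (1 + lambda): keep the parent in the pool, select the best overall.
        pool = [parent] + [
            [g + rng.gauss(0.0, sigma) for g in parent] for _ in range(lam)
        ]
        parent = max(pool, key=fitness)
    return parent

print(evolve())  # converges near the hidden target utility weights
```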
- Title
- Algorithms for deep packet inspection
- Creator
- Patel, Jignesh
- Date
- 2012
- Collection
- Electronic Theses & Dissertations
- Description
The core operation in network intrusion detection and prevention systems is Deep Packet Inspection (DPI), in which each security threat is represented as a signature, and the payload of each data packet is matched against the set of current security threat signatures. DPI is also used for other networking applications like advanced QoS mechanisms, protocol identification, etc. In the past, attack signatures were specified as strings, and a great deal of research has been done on string matching for network applications. Today most DPI systems use Regular Expressions (REs) to represent signatures. RE matching is more difficult than string matching, and current string matching solutions don't work well for REs. RE matching for networking applications is difficult for several reasons. First, the DPI application is usually implemented in network devices, which have limited computing resources. Second, as new threats are discovered, the size of the signature set grows over time. Last, the matching needs to be done at network speeds, the growth of which outpaces improvements in computing speed, so there is a need for novel solutions that can deliver higher throughput. RE matching for DPI is therefore a very important and active research area. In our research, we investigate the existing methods proposed for RE matching, identify their limitations, and propose new methods to overcome these limitations. RE matching remains a fundamentally challenging problem due to the difficulty of compactly encoding DFAs. While the DFA for any one RE is typically small, the DFA that corresponds to the entire set of REs is usually too large to be constructed or deployed. To address this issue, many alternative automata implementations that compress the size of the final automaton have been proposed. However, previously proposed automata construction algorithms employ a "Union then Minimize" framework where the automata for each RE are first joined before minimization occurs. This leads to expensive minimization on a large automaton and a large intermediate memory footprint. We propose a "Minimize then Union" framework for constructing compact alternative automata, which minimizes the smaller automata first before combining them. This approach requires much less time and memory, allowing us to handle a much larger RE set. Prior hardware-based RE matching algorithms typically use FPGAs. The drawback of FPGAs is that resynthesizing and updating FPGA circuitry to handle RE updates is slow and difficult. We propose the first hardware-based RE matching approach that uses Ternary Content Addressable Memory (TCAM). TCAMs have already been widely used in modern networking devices for tasks such as packet classification, so our solutions can be easily deployed. Our methods support easy RE updates, and we show that we can achieve very high throughput. The main reason combined DFAs for multiple REs grow exponentially in size is the replication of states. We developed a new overlay automata model which exploits this replication to compress the size of the DFA. The idea is to group together the replicated DFA structures instead of repeating them multiple times. The result is a final automaton whose size is close to that of an NFA (which is linear in the size of the RE set), while simultaneously achieving the fast deterministic matching speed of a DFA.
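The cost asymmetry between the two frameworks is visible even in a toy product construction: combining DFAs multiplies state counts, so minimizing each small DFA before the union keeps every intermediate automaton small. A minimal union-by-product sketch over dict-encoded DFAs (ours, for illustration; production matchers use compressed automata, not dictionaries):

```python
from itertools import product

def union_dfa(dfa1, dfa2, alphabet):
    """Product construction: the union DFA accepts when either input accepts.

    A DFA is (states, start, accepting, delta) with a complete transition
    function delta[(state, ch)]. The product has |Q1| * |Q2| states, which
    is why unioning large unminimized DFAs blows up, and why minimizing
    each DFA *before* the union keeps intermediates small.
    """
    q1s, s1, a1, d1 = dfa1
    q2s, s2, a2, d2 = dfa2
    states = set(product(q1s, q2s))
    delta = {((p, q), ch): (d1[(p, ch)], d2[(q, ch)])
             for (p, q) in states for ch in alphabet}
    accepting = {(p, q) for (p, q) in states if p in a1 or q in a2}
    return states, (s1, s2), accepting, delta

def accepts(start, acc, delta, s):
    q = start
    for ch in s:
        q = delta[(q, ch)]
    return q in acc

# Two tiny DFAs over {'a','b'}: "contains an 'a'" and "has even length".
has_a = ({0, 1}, 0, {1}, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 1})
even  = ({0, 1}, 0, {0}, {(0, 'a'): 1, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 0})
states, start, acc, delta = union_dfa(has_a, even, "ab")

assert accepts(start, acc, delta, "bb")      # even length
assert accepts(start, acc, delta, "ba")      # contains 'a'
assert not accepts(start, acc, delta, "b")   # neither condition holds
```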