Search results (21 - 40 of 70)
- Title
- The Evolutionary Origins of Cognition : Understanding the early evolution of biological control systems and general intelligence
- Creator
- Carvalho Pontes, Anselmo
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- In the last century, we have made great strides towards understanding natural cognition and recreating it artificially. However, most cognitive research is still guided by an inadequate theoretical framework that equates cognition to a computer system executing a data processing task. Cognition, whether natural or artificial, is not a data processing system; it is a control system. At cognition's core is a value system that allows it to evaluate current conditions and decide among two or more courses of action. Memory, learning, planning, and deliberation, rather than being essential cognitive abilities, are features that evolved over time to support the primary task of deciding "what to do next". I used digital evolution to recreate the early stages in the evolution of natural cognition, including the ability to learn. Interestingly, I found that cognition evolves in a predictable manner, with more complex abilities evolving in stages, building upon simpler ones. I initially investigated the evolution of dynamic foraging behaviors among the first animals known to have a central nervous system, Ediacaran microbial mat miners. I then followed this up by evolving more complex forms of learning. I soon encountered practical limitations of current methods, including exponential demands on computational resources and genetic representations that were not conducive to further scaling. This type of complexity barrier has been a recurrent issue in digital evolution. Nature, however, is not limited in the same ways; through evolution, it has created a language to express robust, modular, and flexible control systems of arbitrary complexity and apparently open-ended evolvability. The essential features of this language can be captured in a digital evolution platform. As an early demonstration of this, I evolved biologically plausible regulatory systems for virtual cyanobacteria. These systems regulate the cells' growth, photosynthesis, and replication given the daily light cycle, the cell's energy reserves, and levels of stress. Although simple, this experimental system displays dynamics and decision-making mechanisms akin to biology, with promising potential for open-ended evolution of cognition towards general intelligence.
- Title
- Towards Accurate Ranging and Versatile Authentication for Smart Mobile Devices
- Creator
- Li, Lingkun
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- The Internet of Things (IoT) has developed rapidly in recent years. Smart devices such as smartphones, smartwatches, and smart assistants, which are equipped with smart chips as well as sensors, provide users with many easy-to-use functions and make their lives more convenient. In this dissertation, we carefully studied the birefringence of transparent tape, the nonlinear effects of the microphone, and the phase characteristics of reflected ultrasound, and made use of these effects to design three systems, RainbowLight, Patronus, and BreathPass, to provide users with accurate localization, privacy protection, and authentication, respectively. RainbowLight leverages the observation-direction-varied spectrum generated by polarized light passing through a birefringent material, i.e., transparent tape, to provide localization service. We characterize the relationship between observation direction, light interference, and the resulting spectrum, and use it to calculate the direction to a chip after taking a photo containing the chip. With multiple chips, RainbowLight designs a direction-intersection-based method to derive the location. In this dissertation, we build the theoretical basis for using polarized light and the birefringence phenomenon to perform localization. Based on the theoretical model, we design and implement RainbowLight on a mobile device and evaluate the performance of the system. The evaluation results show that RainbowLight achieves a median error of 1.68 cm on the X-axis, 2 cm on the Y-axis, 5.74 cm on the Z-axis, and 7.04 cm over all dimensions. It is the first system able to perform visible light positioning using only the reflected light in a space. Patronus prevents unauthorized speech recording by leveraging the nonlinear effects of commercial off-the-shelf microphones. The inaudible ultrasound scramble interferes with recording by unauthorized devices and can be canceled on authorized devices through an adaptive filter. In this dissertation, we carefully studied the nonlinear effects of ultrasound on commercial microphones. Based on this study, we proposed an optimized configuration for generating the scramble. It provides privacy protection against unauthorized recordings without disturbing normal conversations. We designed and implemented a system including both hardware and software components. Experimental results show that only 19.7% of words protected by Patronus' scramble can be recognized by unauthorized devices. Furthermore, authorized recordings have a 1.6x higher perceptual evaluation of speech quality (PESQ) score and, on average, 50% lower speech recognition error rates than unauthorized recordings. BreathPass uses speakers to emit ultrasound signals. The signals reflect off the chest wall and abdomen and travel back to the microphone, which records them. The system then extracts fingerprints from the breathing pattern and uses these fingerprints to perform authentication. In this dissertation, we characterized the challenges of conducting authentication with breathing patterns. After addressing these challenges, we designed such a system and implemented a proof-of-concept application on the Android platform. We also conducted comprehensive experiments to evaluate the performance under different scenarios. According to the performance evaluation results, BreathPass achieves an overall accuracy of 83%, a true positive rate of 73%, and a false positive rate of 5%. In general, this dissertation provides enhanced ranging and versatile authentication systems for the Internet of Things.
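The direction-intersection step that RainbowLight uses to derive a location can be illustrated with a standard least-squares ray intersection: given each chip's known position and the estimated direction toward the target, the point that best agrees with all rays has a closed-form solution. A minimal sketch under assumed inputs (variable names and setup are illustrative, not from the dissertation):

```python
import numpy as np

def intersect_rays(points, dirs):
    """Least-squares intersection of 3D rays.

    points: (n, 3) known anchor points (e.g., chip positions)
    dirs:   (n, 3) direction vectors of the rays
    Returns the point minimizing the sum of squared distances to all rays.
    """
    A, b = np.zeros((3, 3)), np.zeros(3)
    for p, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray direction
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Toy example: three chips whose measured directions all point at (1, 2, 3).
chips = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
dirs = np.array([1.0, 2.0, 3.0]) - chips
print(intersect_rays(chips, dirs))  # approximately [1. 2. 3.]
```

With noisy directions estimated from real photos, the same normal equations simply return the point closest to all rays in the least-squares sense.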
- Title
- Investigating the Role of Sensor Based Technologies to Support Domestic Activities in Sub-Saharan Africa
- Creator
- Chidziwisano, George Hope
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- In sub-Saharan Africa (SSA), homes face various challenges including insecurity, unreliable power supply, and extreme weather conditions. While the use of sensor-based technologies is increasing in industrialized countries, it is unclear how they can be used to support domestic activities in SSA. The availability of low-cost sensors and the widespread adoption of mobile phones present an opportunity to collect real-time data and utilize proactive methods to monitor these challenges. This dissertation presents three studies that build upon each other to explore the role of sensor-based technologies in SSA. I used a technology probes method to develop three sensor-based systems that support domestic security (M-Kulinda), power blackout monitoring (GridAlert), and poultry farming (NkhukuApp). I deployed M-Kulinda in 20 Kenyan homes, GridAlert in 18 Kenyan homes, and NkhukuProbe in 15 Malawian home-based chicken coops for one month. I used interview, observation, diary, and data logging methods to understand participants' experiences using the probes. Findings from these studies suggest that people in Kenya and Malawi want to incorporate sensor-based technologies into their everyday activities, and that they quickly find unexpected ways to use them. Participants' interactions with the probes prompted detailed reflections about how they would integrate sensor-based technologies in their homes (e.g., monitoring non-digital tools). These reflections are useful for motivating new design concepts in HCI. I use these findings to motivate a discussion about unexplored areas that could benefit from sensor-based technologies. Further, I discuss recommendations for designing sensor-based technologies that support activities in some Kenyan and Malawian homes. This research contributes to HCI by providing design implications for sensor-based applications in Kenyan and Malawian homes, employing a technology probes method in a non-traditional context, and developing prototypes of three novel systems.
- Title
- Discrete de Rham-Hodge Theory
- Creator
- Zhao, Rundong
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- We present a systematic treatment of 3D shape analysis based on the well-established de Rham-Hodge theory in differential geometry and topology. The computational tools we developed are widely applicable to research areas such as computer graphics, computer vision, and computational biology. We extensively tested them in the context of 3D structure analysis of biological macromolecules to demonstrate the efficacy and efficiency of our method in potential applications. Our contributions are summarized as follows. First, we present a compendium of discrete Hodge decompositions of vector fields, which provides the primary building block of the de Rham-Hodge theory for computations performed on the commonly used tetrahedral meshes embedded in 3D Euclidean space. Second, we present a real-world application of the above computational tool to 3D shape analysis of biological macromolecules. Finally, we extend the above method to an evolutionary de Rham-Hodge method that provides a unified paradigm for the multiscale geometric and topological analysis of evolving manifolds constructed from a filtration, which induces a family of evolutionary de Rham complexes. Our work on the decomposition of vector fields and on spectral shape analysis of static and evolving shapes has already shown its effectiveness in biomolecular applications, and it will lead to a rich set of features for machine learning-based shape analysis currently under development.
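For orientation, the decomposition this work discretizes is the classical Hodge (Helmholtz-Hodge) decomposition; the following is the textbook statement, not a formula quoted from the dissertation:

```latex
% Hodge decomposition of a smooth vector field F: a curl-free gradient
% part, a divergence-free curl part, and a harmonic remainder that is
% both curl-free and divergence-free.
\[
  \mathbf{F} \;=\; \nabla \varphi \;+\; \nabla \times \mathbf{A} \;+\; \mathbf{h},
  \qquad
  \nabla \times \mathbf{h} = \mathbf{0}, \qquad \nabla \cdot \mathbf{h} = 0 .
\]
```

The harmonic component is the topologically interesting piece: by the Hodge theorem its dimension is governed by the Betti numbers of the domain, which is what links the spectral computations here to topology.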
- Title
- SIGN LANGUAGE RECOGNIZER FRAMEWORK BASED ON DEEP LEARNING ALGORITHMS
- Creator
- Akandeh, Atra
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- According to the World Health Organization (WHO, 2017), 5% of the world's population has hearing loss. Most people with hearing disabilities communicate via sign language, which hearing people find extremely difficult to understand. To facilitate communication for deaf and hard-of-hearing people, developing an efficient communication system is a necessity. There are many challenges associated with the Sign Language Recognition (SLR) task, namely lighting conditions, complex backgrounds, signer body postures, camera position, occlusion, complexity and large variation in hand posture, lack of word alignment, coarticulation, etc. Sign language recognition has been an active domain of research since the early 90s. However, due to constraints in computational resources and sensing technology, limited advancement has been achieved over the years. Existing sign language translation systems mostly translate a single sign at a time, which makes them less effective in daily-life interaction. This work develops a novel sign language recognition framework using deep neural networks, which directly maps videos of sign language sentences to sequences of gloss labels by emphasizing critical characteristics of the signs and injecting domain-specific expert knowledge into the system. The proposed model also allows for combining data from different sources, thereby combating the limited data resources in the SLR field.
- Title
- Network analysis with negative links
- Creator
- Derr, Tyler Scott
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- As we continue rapidly into the information age, the rate at which data is produced has created an unprecedented demand for novel methods to effectively extract insightful patterns. We can then seek to understand the past, make predictions about the future, and ultimately take actionable steps towards improving our society. Because much of today's big data can be represented as graphs, emphasis is being placed on harnessing the natural structure of data through network analysis. Traditionally, network analysis has focused on networks with only positive links, or unsigned networks. However, in many real-world systems, relations between nodes in a graph can be both positive and negative, forming signed networks. For example, in online social media, users not only have positive links such as friends, followers, and those they trust, but can also establish negative links to those they distrust or consider foes, or block and unfriend users. Thus, although signed networks are ubiquitous due to their ability to represent negative links in addition to positive links, they have been significantly underexplored. In addition, the rise in popularity of today's social media and increased polarization online have led to both increased attention and increased demand for advanced methods to perform typical network analysis tasks while also taking negative links into consideration. More specifically, there is a need for methods that can measure, model, mine, and apply signed networks, harnessing both positive and negative relations. This raises novel challenges: the properties and principles of negative links are not necessarily the same as those of positive links, and the social theories that have been used in unsigned networks might not apply with the inclusion of negative links. The chief objective of this dissertation is first to analyze the distinct properties negative links have as compared to positive links, and then to improve network analysis with negative links by researching the utility of social theories, and how to harness them, in a holistic view of networks containing both positive and negative links. We discover that simply extending unsigned network analysis is typically not sufficient, and that although the existence of negative links introduces numerous challenges, it also provides unprecedented opportunities for advancing the frontier of the network analysis domain. In particular, we develop advanced methods in signed networks for measuring node relevance and centrality (i.e., signed network measuring); present the first generative signed network model and extend and analyze balance theory for signed bipartite networks (i.e., signed network modeling); construct the first signed graph convolutional network, which learns node representations that achieve state-of-the-art prediction performance, and introduce the novel idea of transformation-based network embedding (i.e., signed network mining); and apply signed networks by creating a framework that can infer both link and interaction polarity levels in online social media and by constructing a comprehensive congressional vote prediction framework built around harnessing signed networks.
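Balance theory, which this dissertation extends and analyzes for signed bipartite networks, has a simple statement on triangles: a triangle is balanced exactly when the product of its edge signs is positive ("the friend of my friend is my friend; the enemy of my enemy is my friend"). A small illustrative check, not code from the dissertation:

```python
from itertools import combinations

# Signed edges: +1 = positive link (e.g., trust), -1 = negative link (e.g., distrust)
edges = {("a", "b"): +1, ("b", "c"): -1, ("a", "c"): -1}

def sign(u, v):
    return edges.get((u, v)) or edges.get((v, u))

def balanced_triangles(nodes):
    for u, v, w in combinations(nodes, 3):
        s = (sign(u, v), sign(v, w), sign(u, w))
        if None not in s:
            yield (u, v, w), s[0] * s[1] * s[2] > 0

for tri, ok in balanced_triangles(["a", "b", "c"]):
    print(tri, "balanced" if ok else "unbalanced")  # ('a', 'b', 'c') balanced
```

Bipartite networks contain no triangles at all, which is precisely why extending balance theory to signed bipartite networks (e.g., via cycles of length four) requires the new analysis described above.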
- Title
- Quantitative methods for calibrated spatial measurements of laryngeal phonatory mechanisms
- Creator
- Ghasemzadeh, Hamzeh
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- The ability to perform measurements is an important cornerstone of, and prerequisite for, any quantitative research. Measurements allow us to quantify the inputs and outputs of a system and then to express their relationships using concise mathematical expressions and models. Those models in turn enable us to understand how a target system works and to predict its output for changes in the system parameters. Conversely, models enable us to determine the proper parameters of a system for achieving a certain output. In the context of voice science research, variations in the parameters of the phonatory system can be attributed to individual differences. Thus, accurate models would enable us to account for individual differences during diagnosis and to make reliable predictions about the likely outcome of different treatment options. Analysis of the vibration of the vocal folds using high-speed videoendoscopy (HSV) is an ideal candidate for constructing computational models. However, conventional images are not spatially calibrated and cannot be used for absolute spatial measurements. This dissertation focuses on developing the methodologies required for calibrated spatial measurements from in-vivo HSV recordings. Specifically, two different approaches for calibrated horizontal measurements of HSV images are presented. The first, the indirect approach, is based on registering a specific attribute of a common object (e.g., the size of a lesion) from a calibrated intraoperative still image to its corresponding non-calibrated in-vivo HSV recording. This approach does not require specialized instruments and can be implemented in many clinical settings; however, its validity depends on a couple of assumptions, and violation of those assumptions can lead to significant measurement errors. The second, the direct approach, is based on a laser-projection flexible fiberoptic endoscope and enables accurate calibrated spatial measurements. This dissertation evaluates the accuracy of the first approach indirectly, by studying its underlying fundamental assumptions; the accuracy of the second approach is evaluated directly, using benchtop experiments with different surfaces, different working distances, and different imaging angles. The main contributions of this dissertation are the following: (1) a formal treatment of indirect horizontal calibration is presented, the assumptions governing its validity and reliability are discussed, and a battery of tests is presented that can indirectly assess the validity of those assumptions in laryngeal imaging applications; (2) recordings from before and after surgery from patients with vocal fold mass lesions are used as a testbench for the developed indirect calibration approach; in that regard, a full solution is developed for measuring the calibrated velocity of the vocal folds, which is then used to investigate post-surgery changes in the closing velocity of the vocal folds in patients with vocal fold mass lesions; (3) a method for calibrated vertical measurement from a laser-projection fiberoptic flexible endoscope is developed and evaluated at different working distances, at different imaging angles, and on a 3D surface; (4) a detailed analysis of the non-linear image distortion of a fiberoptic flexible endoscope is presented, and the effect of imaging angle and the spatial location of an object on the magnitude of that distortion is studied and quantified; (5) a method for calibrated horizontal measurement from a laser-projection fiberoptic flexible endoscope is developed and evaluated at different working distances, at different imaging angles, and on a 3D surface.
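The indirect approach boils down to transferring a scale factor from a calibrated still image to the uncalibrated HSV recording through a common object. A minimal sketch of that arithmetic with assumed numbers (the actual method adds registration and the validity checks discussed above):

```python
# Indirect horizontal calibration: an object of known physical size
# (e.g., a lesion measured in the calibrated intraoperative image) is
# also visible in the uncalibrated in-vivo HSV recording.

lesion_size_mm = 3.2       # from the calibrated intraoperative still image
lesion_size_px = 41.0      # the same lesion measured in HSV pixels

mm_per_px = lesion_size_mm / lesion_size_px  # scale factor for the HSV frames

# Any horizontal measurement in the HSV recording can now be expressed in
# physical units, assuming the imaging geometry matches between recordings
# (the assumption whose violation causes the errors noted above).
glottal_width_px = 95.0
print(f"{glottal_width_px * mm_per_px:.2f} mm")  # 7.41 mm
```

Dividing a calibrated displacement by the frame interval likewise yields the calibrated vocal fold velocities used in the pre-/post-surgery comparison.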
- Title
- DIGITAL IMAGE FORENSICS IN THE CONTEXT OF BIOMETRICS
- Creator
- Banerjee, Sudipta
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- Digital image forensics entails deducing the origin, history, and authenticity of a digital image. While a number of powerful techniques have been developed for this purpose, much of the focus has been on images depicting natural scenes and generic objects. In this thesis, we direct our focus to biometric images, viz., iris, ocular, and face images. Firstly, we assess the viability of using existing sensor identification schemes, developed for visible spectrum images, on near-infrared (NIR) iris and ocular images. These schemes are based on estimating the multiplicative sensor noise that is embedded in an input image. Further, we conduct a study analyzing the impact of photometric modifications on the robustness of these schemes. Secondly, we develop a method for sensor de-identification, where the sensor noise in an image is suppressed but its biometric utility is retained. This enhances privacy by unlinking an image from its camera sensor and, subsequently, from the owner of the camera. Thirdly, we develop methods for constructing an image phylogeny tree from a set of near-duplicate images. An image phylogeny tree captures the relationship between subtly modified images by computing a directed acyclic graph that depicts the sequence in which the images were modified. Our primary contribution in this regard is the use of complex basis functions to model arbitrary transformations between a pair of images, and the design of a likelihood ratio based framework for determining the original and modified image in the pair. We are currently integrating a graph-based deep learning approach with sensor-specific information to refine and improve the performance of the proposed image phylogeny algorithm.
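The sensor identification schemes assessed here typically follow the photo-response non-uniformity (PRNU) recipe: estimate a camera fingerprint from the noise residuals of many images, then match a query image by correlating its residual against that fingerprint. A simplified sketch of the general pipeline (a Gaussian denoiser stands in for the wavelet denoisers usually used; this is not the thesis's exact scheme):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.5):
    # Residual = image minus a denoised version; it retains the
    # multiplicative sensor noise pattern plus some scene leakage.
    return img - gaussian_filter(img, sigma)

def camera_fingerprint(images):
    # Averaging residuals over many images from one camera suppresses
    # scene content and leaves the sensor pattern.
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    # Normalized cross-correlation, used as the match score.
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# score = ncc(noise_residual(query), camera_fingerprint(enrolled_images))
# A high score links the query image to the enrolled sensor; sensor
# de-identification aims to suppress exactly this correlation.
```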
- Title
- I AM DOING MORE THAN CODING : A QUALITATIVE STUDY OF BLACK WOMEN HBCU UNDERGRADUATES’ PERSISTENCE IN COMPUTING
- Creator
- Benton, Amber V.
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- The purpose of my study is to explore why and how Black women undergraduates at historically Black colleges and universities (HBCUs) persist in computing. By centering the experiences of Black women undergraduates and their stories, this dissertation expands traditional, dominant ways of understanding student persistence in higher education. Critical Race Feminism (CRF) was applied as a conceptual framework to the stories of 11 Black women undergraduates in computing, drawing on the small stories qualitative approach to examine the day-to-day experiences of Black women undergraduates at HBCUs as they persisted in their computing degree programs. The findings suggest that: (a) gender underrepresentation in computing affects Black women's experiences, (b) computing culture at HBCUs directly affects Black women in computing, (c) Black women need access to resources and opportunities to persist in computing, (d) computing-related internships are beneficial professional opportunities but are also sites of gendered racism for Black women, (e) connectedness between Black people is innate but also needs to be fostered, (f) Black women want to engage in computing that contributes to social impact and community uplift, and (g) science identity is not a primary identity for Black women in computing. This dissertation also argues that disciplinary-focused efforts contribute to the persistence of Black women in computing.
- Title
- Fast edit distance calculation methods for NGS sequence similarity
- Creator
- Islam, A. K. M. Tauhidul
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- Sequence fragments generated from targeted regions of phylogenetic marker genes provide valuable insight into identifying and classifying organisms and inferring taxonomic hierarchies. In recent years, significant development in targeted gene fragment sequencing through Next Generation Sequencing (NGS) technologies has increased the need for efficient sequence similarity computation methods for very large numbers of pairs of NGS sequences. The edit distance has been widely used to determine the dissimilarity between pairs of strings. All known methods for edit distance calculation run in near-quadratic time with respect to string length, and it may take days or weeks to compute distances between such large numbers of pairs of NGS sequences. To address this performance bottleneck, faster edit distance approximation and bounded edit distance calculation methods have been proposed. Despite these efforts, the existing edit distance calculation methods are not fast enough when computing larger numbers of pairs of NGS sequences. To further reduce computation time, many NGS sequence similarity methods based on matching k-mers have been proposed. These methods extract all possible k-mers from NGS sequences and compare the similarity between pairs of sequences based on their shared k-mers. However, these methods reduce computation time at the cost of accuracy. In this dissertation, our goal is to compute NGS sequence similarity using edit distance based methods while reducing computation time. We propose several edit distance prediction methods using dataset-independent reference sequences that are distant from each other. These reference sequences convert the sequences in a dataset into feature vectors by computing the edit distance between each sequence and each of the reference sequences. Given sequences A and B and a reference sequence r, the edit distance satisfies ed(A,B) ≥ |ed(A,r) − ed(B,r)|. Since the reference sequences are significantly different from each other, with a sufficiently large number of reference sequences and a high similarity threshold, the differences of the edit distances of A and B with respect to the reference sequences are close to ed(A,B). Using this property, we predict edit distances in the vector space based on Euclidean and Chebyshev distances. Further, we develop a small set of deterministically generated reference sequences with maximum distance between each of them to predict higher edit distances more efficiently. This method predicts edit distances between corresponding sub-sequences separately and then merges the partial distances to predict the edit distance between the entire sequences. The computational complexity of this method is linear with respect to sequence length. The proposed edit distance prediction methods are significantly faster while achieving very good accuracy for high similarity thresholds. We have also shown the effectiveness of these methods for agglomerative hierarchical clustering. We also propose an efficient bounded exact edit distance calculation method using the trace [1]. For a given edit distance threshold d, only letters up to d positions apart can be part of an edit operation. Hence, we generate pairs of sub-sequences with length difference up to d so that no edit operation spills over to the adjacent pairs of sub-sequences. Then we compute the trace cost in such a way that the number of matching letters between the sub-sequences is maximized. This technique does not guarantee locally optimal edit distances; however, it guarantees the globally optimal edit distance between the entire sequences for distances up to d. The bounded exact edit distance calculation method is an order of magnitude faster than the dynamic programming edit distance calculation.
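The reference-sequence idea can be made concrete: embedding each sequence as its vector of edit distances to a fixed set of references turns the inequality above into a Chebyshev-distance lower bound on ed(A, B), which can cheaply prune clearly dissimilar pairs before any exact computation. An illustrative sketch using a textbook dynamic-programming edit distance (the dissertation's prediction methods are more elaborate, and the toy references here are assumptions):

```python
import numpy as np

def ed(a, b):
    # Standard single-row dynamic-programming (Levenshtein) edit distance.
    d = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(b) + 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (a[i - 1] != b[j - 1]))
    return int(d[len(b)])

refs = ["ACGTACGTAC", "TTTTTTTTTT", "GGGGGCCCCC"]  # toy reference sequences

def embed(s):
    # Feature vector: edit distances from s to each reference sequence.
    return np.array([ed(s, r) for r in refs])

def lower_bound(s, t):
    # Chebyshev distance in the embedding: max_r |ed(s,r) - ed(t,r)| <= ed(s,t).
    return int(np.max(np.abs(embed(s) - embed(t))))

s, t = "ACGTACGAAC", "ACGTTCGTAC"
print(lower_bound(s, t), "<=", ed(s, t))
```

Pairs whose lower bound already exceeds the dissimilarity allowed by the similarity threshold can be discarded without ever running the near-quadratic-time computation.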
- Title
- INTERPRETABLE ARTIFICIAL INTELLIGENCE USING NONLINEAR DECISION TREES
- Creator
- Dhebar, Yashesh Deepakkumar
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- Recent years have seen massive application of artificial intelligence (AI) to automate tasks across various domains. The back-end mechanism by which automation occurs is generally a black box. Popular black-box AI methods used to solve automation tasks include decision trees (DT), support vector machines (SVM), and artificial neural networks (ANN). In the past several years, these black-box AI methods have shown promising performance and have been widely applied and researched across industry and academia. While black-box AI models have been shown to achieve high performance, the inherent mechanism by which a decision is made is hard to comprehend. This lack of interpretability and transparency makes black-box AI methods less trustworthy. In addition, black-box AI models lack the ability to provide valuable insights about the task at hand. Following these limitations, a natural research direction of developing interpretable and explainable AI models has emerged and gained active attention in the machine learning and AI community in the past three years. In this dissertation, we focus on interpretable AI solutions currently being developed at the Computational Optimization and Innovation Laboratory (COIN Lab) at Michigan State University. We propose a nonlinear decision tree (NLDT) based framework to produce transparent AI solutions for automation tasks related to classification and control. Recent advancements in non-linear optimization enable us to efficiently derive interpretable AI solutions for various automation tasks. The interpretable and transparent AI models induced using customized optimization techniques show similar or better performance compared to complex black-box AI models across most benchmarks. The results are promising and provide directions for future studies in developing efficient transparent AI models.
- Title
- Advanced Operators for Graph Neural Networks
- Creator
- Ma, Yao
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- Graphs, which encode pairwise relations between entities, are a kind of universal data structure for many real-world data, including social networks, transportation networks, and chemical molecules. Many important applications on these data can be treated as computational tasks on graphs. For example, friend recommendation in social networks can be regarded as a link prediction task, and predicting the properties of chemical compounds can be treated as a graph classification task. An essential step in facilitating these tasks is to learn vector representations for either nodes or entire graphs. Given its great success in representation learning on images and text, deep learning offers great promise for graphs. However, compared to images and text, deep learning on graphs faces immense challenges. Graphs are irregular: nodes are unordered, and each node can have a distinct number of neighbors. Thus, traditional deep learning models cannot be directly applied to graphs, which calls for dedicated efforts to design novel deep graph models. To help meet this pressing demand, we developed and investigated novel graph neural network (GNN) algorithms to generalize deep learning techniques to graph-structured data. Two key operations in GNNs are the graph filtering operation, which aims to refine node representations, and the graph pooling operation, which aims to summarize node representations to obtain a graph representation. In this thesis, we provide deeper understanding of, and develop novel algorithms for, these two operations from new perspectives. For graph filtering operations, we propose a unified framework from the perspective of graph signal denoising, which demonstrates that most existing graph filtering operations conduct feature smoothing. We then further investigate what information typical graph filtering operations can capture and how they can be understood beyond feature smoothing. For graph pooling operations, we study the pooling procedure from the perspective of graph spectral theory and present a novel graph pooling operation. We also propose a technique to downsample nodes considering both node importance and representativeness, which leads to another novel graph pooling operation.
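The denoising perspective mentioned above can be stated compactly. With input features X, a graph Laplacian L, and representations F, graph filtering is cast as the optimization below, whose first term keeps F close to X and whose second term penalizes differences across edges; this is the standard formulation of that perspective, given here for orientation:

```latex
% Graph signal denoising: fidelity to the input features plus a Laplacian
% smoothness penalty with trade-off c > 0.
\[
  \min_{F}\; \|F - X\|_F^{2} \;+\; c\,\operatorname{tr}\!\left(F^{\top} L F\right),
  \qquad\text{with closed-form solution}\qquad
  F^{\star} = (I + cL)^{-1} X .
\]
```

A single gradient-descent step on this objective starting from F = X produces a neighborhood-averaging update, which is the sense in which common graph filters "conduct feature smoothing".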
- Title
- Efficient and Secure Message Passing for Machine Learning
- Creator
- Liu, Xiaorui
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- Machine learning (ML) techniques have brought revolutionary impact to human society, and they will continue to act as technological innovators in the future. To broaden their impact, it is urgent to solve the emerging and critical challenges in machine learning, such as efficiency and security issues. On the one hand, ML models have become increasingly powerful due to big data and big models, but this brings tremendous challenges in designing efficient optimization algorithms to train big ML models on big data. The most effective way to perform large-scale ML is to parallelize the computation tasks on distributed systems composed of many computational devices. In practice, however, the scalability and efficiency of such systems are greatly limited by information synchronization, since message passing between the devices dominates the total running time. In other words, the major bottleneck is the high communication cost between devices, especially as the scale of the system and of the models grows while the communication bandwidth remains relatively limited. This communication bottleneck often limits the practical speedup of distributed ML systems. On the other hand, recent research has revealed that many ML models suffer from security vulnerabilities. In particular, deep learning models can be easily deceived by unnoticeable perturbations in the data. Meanwhile, graphs are a prevalent data structure for many real-world data that encode pairwise relations between entities, such as social networks, transportation networks, and chemical molecules. Graph neural networks (GNNs) generalize and extend the representation learning power of traditional deep neural networks (DNNs) from regular grids, such as image, video, and text, to irregular graph-structured data through message passing frameworks. Many important applications on these data can therefore be treated as computational tasks on graphs, such as recommender systems, social network analysis, and traffic prediction. Unfortunately, the vulnerability of deep learning models also carries over to GNNs, which raises significant concerns about their applications, especially in safety-critical areas. It is therefore critical to design intrinsically secure ML models for graph-structured data. The primary objective of this dissertation is to develop solutions to these challenges through innovative research and principled methods. In particular, we propose multiple distributed optimization algorithms with efficient message passing to mitigate the communication bottleneck and speed up ML model training in distributed ML systems. We also propose multiple secure message passing schemes as building blocks of graph neural networks, aiming to significantly improve the security and robustness of ML models.
- Title
- Efficient Distributed Algorithms : Better Theory and Communication Compression
- Creator
- LI, YAO
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- Large-scale machine learning models are often trained by distributed algorithms over either centralized or decentralized networks. The former uses a central server to aggregate the information of local computing agents and broadcast the averaged parameters in a master-slave architecture. The latter considers a connected network formed by all agents, where information can only be exchanged with accessible neighbors, with a mixing matrix of communication weights encoding the network's topology. Compared with centralized optimization, decentralization facilitates data privacy and reduces the communication burden that model synchronization places on the single central agent, but the connectivity of the communication network weakens the theoretical convergence complexity of decentralized algorithms. Therefore, there are still gaps between decentralized and centralized algorithms in terms of convergence conditions and rates. In the first part of this dissertation, we consider two decentralized algorithms, EXTRA and NIDS, which both converge linearly for strongly convex objective functions, and answer two questions regarding them: What are the optimal upper bounds for their stepsizes? Do decentralized algorithms require more properties of the objective functions for linear convergence than centralized ones? More specifically, we relax the required conditions for linear convergence of both algorithms. For EXTRA, we show that the stepsize is comparable to that of centralized algorithms. For NIDS, the upper bound of the stepsize is shown to be exactly the same as that of centralized algorithms. In addition, we relax the requirements on the objective functions and the mixing matrices, and provide linear convergence results for both algorithms under the weakest conditions. As the number of computing agents and the dimension of the model increase, the communication cost of parameter synchronization becomes the major obstacle to efficient learning. Communication compression techniques have exhibited great potential for accelerating distributed machine learning by mitigating the communication bottleneck. In the rest of the dissertation, we propose compressed residual communication frameworks for both centralized and decentralized optimization and design different algorithms to achieve efficient communication. For centralized optimization, we propose DORE, a modified parallel stochastic gradient descent method with bidirectional residual compression, which reduces over 95% of the overall communication. Our theoretical analysis demonstrates that the proposed strategy has superior convergence properties for both strongly convex and nonconvex objective functions. Existing works mainly focus on smooth problems and on compressing DGD-type algorithms for decentralized optimization; the restriction to smooth objective functions and the sublinear convergence rate under relatively strong assumptions limit these algorithms' applicability and practical performance. Motivated by primal-dual algorithms, we propose Prox-LEAD, a linearly convergent decentralized algorithm with compression, to tackle strongly convex problems with a nonsmooth regularizer. Our theory describes the coupled dynamics of the inexact primal and dual updates as well as the compression error without assuming bounded gradients. The superiority of the proposed algorithm is demonstrated through comparison with state-of-the-art algorithms in terms of convergence complexity and through numerical experiments. Our algorithmic framework also sheds light on compressed communication in other primal-dual algorithms by reducing the impact of inexact iterations.
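The residual-compression idea behind DORE-style schemes can be sketched briefly: rather than compressing the vector itself, each device compresses the difference between the vector and a locally maintained state that the receiver mirrors, so the transmitted residual shrinks as iterates stabilize. A minimal illustration with a top-k compressor (names and details are assumptions, not the published algorithm):

```python
import numpy as np

def topk(v, k):
    # Keep the k largest-magnitude entries; a common biased compressor.
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class ResidualChannel:
    """Send compressed residuals against a state the receiver mirrors."""
    def __init__(self, dim, k):
        self.state = np.zeros(dim)
        self.k = k
    def send(self, v):
        delta = topk(v - self.state, self.k)  # compress only what changed
        self.state += delta                   # receiver applies the same delta
        return delta                          # sparse message actually sent

ch = ResidualChannel(dim=8, k=2)
for g in [np.ones(8), np.ones(8) * 1.1]:  # nearly identical gradients
    print(np.count_nonzero(ch.send(g)))   # only 2 entries per round
```

Applying such a channel in both the worker-to-server and server-to-worker directions is what makes the compression "bidirectional".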
- Title
- Adaptive and Automated Deep Recommender Systems
- Creator
- Zhao, Xiangyu
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- Recommender systems are intelligent information retrieval applications that have been leveraged in numerous domains such as e-commerce, movies, music, books, and points of interest. They play a crucial role in users' information-seeking process and overcome the information overload issue by recommending personalized items (products, services, or information) that best match users' needs and preferences. Driven by recent advances in machine learning theory and the prevalence of deep learning techniques, there has been tremendous interest in developing deep learning based recommender systems. They have unprecedentedly advanced the effectiveness of mining non-linear user-item relationships and learning feature representations from massive datasets, producing great vitality and improvements in recommendations in both the academic and industry communities. Despite the prominence of existing deep recommender systems, their adaptiveness and automation remain under-explored. Thus, in this dissertation, we study the problem of adaptive and automated deep recommender systems. Specifically, we present our efforts devoted to building adaptive deep recommender systems that continuously update recommendation strategies according to the dynamic nature of user preference, maximizing the cumulative reward from users in practical streaming recommendation scenarios. In addition, we propose a group of automated and systematic approaches that design deep recommender system frameworks effectively and efficiently in a data-driven manner. More importantly, we apply our proposed models to a variety of real-world recommendation platforms and have achieved promising enhancements of social and economic benefits.
- Title
- GENERATIVE SIGNAL PROCESSING THROUGH MULTILAYER MULTISCALE WAVELET MODELS
- Creator
- He, Jieqian
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- Wavelet analysis and deep learning are two popular fields for signal processing. The scattering transform from wavelet analysis is a recently proposed mathematical model for convolutional neural networks. Signals with repeated patterns can be analyzed using statistics from such models; in particular, signals from certain classes can be recovered from related statistics. We first focus on recovering 1D deterministic Dirac signals from multiscale statistics, proving that a Dirac signal can be recovered from multiscale statistics up to translation and reflection. We then switch to a stochastic version, modeled using Poisson point processes, and prove that wavelet statistics at small scales capture the intensity parameter of Poisson point processes. We also design a scattering generative adversarial network (GAN) to generate new Poisson point samples from the statistics of multiple given samples. Next we consider texture images, successfully synthesizing new textures given one sample from the texture class through multiscale, multilayer wavelet models. Finally, we analyze and prove why the multiscale multilayer model is essential for signal recovery, especially for natural texture images.
- Title
- Learning to Detect Language Markers
- Creator
- Tang, Fengyi
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- In the world of medical informatics, biomarkers play a pivotal role in determining the physical state of human beings, distinguishing the pathologic from the clinically normal. In recent years, behavioral markers, due to their availability and low cost, have attracted a lot of attention as a potential supplement to biomarkers. “Language markers” such as spoken words and lexical preference have been shown to be both cost-effective and predictive of complex diseases such as mild cognitive impairment (MCI). However, language markers, although universal, do not possess many of the favorable properties that characterize traditional biomarkers. For example, different people may exhibit similar use of language under certain conversational contexts (non-uniqueness), and a person's lexical preferences may change over time (non-stationarity). As a result, it is unclear whether any set of language markers can be measured in a consistent manner. My thesis projects provide solutions to some of these limitations: (1) We formalize the problem of learning a dialog policy to measure language markers as an optimization problem, which we call persona authentication, and provide a learning algorithm for finding a dialog policy that generalizes to unseen personalities. (2) We apply our dialog policy framework to real-world data for MCI prediction and show that the proposed pipeline improves prediction over supervised learning baselines. (3) To address non-stationarity, we introduce an effective way to perform temporally-dependent and non-i.i.d. feature selection through an adversarial learning framework, which we call precision sensing. (4) Finally, on the prediction side, we propose a method for improving the sample efficiency of classifiers by retaining privileged information (auxiliary features available only at training time).
- Title
- OPTIMIZATION OF LARGE SCALE ITERATIVE EIGENSOLVERS
- Creator
- Afibuzzaman, Md
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- Sparse matrix computations, in the form of solvers for systems of linear equations, eigenvalue problems, or matrix factorizations, constitute the main kernel in problems from fields as diverse as computational fluid dynamics, quantum many-body problems, machine learning, and graph analytics. Iterative eigensolvers are preferred over direct methods because direct methods are not feasible for industrial-sized matrices. Although dense linear algebra libraries like BLAS, LAPACK, and ScaLAPACK are well established, and vendor-optimized implementations like Intel MKL or Cray LibSci exist, the same cannot be said for sparse linear algebra, which lags far behind. The main reason for the slow progress in the standardization of sparse linear algebra and in library development is that sparse matrices take different forms and have different properties depending on the application area. The situation is worsened by the deep memory hierarchies of modern architectures, due to low arithmetic intensities and memory-bound computations; minimizing data movement and ensuring fast access to the matrix are critical. Current technology is driven by deep memory architectures in which capacity increases at the expense of higher latency and lower bandwidth as we move further from the processors. The key to achieving high performance in sparse matrix computations on deep memory hierarchies is to minimize data movement across the layers of the memory and to overlap data movement with computation. My thesis work contributes towards addressing the algorithmic challenges and developing a computational infrastructure to achieve high performance in scientific applications on both shared memory and distributed memory architectures. For this purpose, I started by optimizing a blocked eigensolver, optimizing specific computational kernels that use a new storage format. Using this optimization as a building block, we introduce a shared memory task-parallel framework focused on optimizing entire solvers rather than a specific kernel. Before extending this shared memory implementation to a distributed memory architecture, I simulated the communication patterns and overheads of a large-scale distributed memory application, and then introduced communication tasks in the framework to overlap communication and computation. Additionally, I explored a custom scheduler for the tasks using a graph partitioner. To get acquainted with high performance computing and parallel libraries, I started my PhD journey by optimizing a DFT code named Sky3D, where I used dense matrix libraries; although there may not be a single solution to that problem, I sought an optimized one. While the large distributed memory application MFDn is the driving project of this thesis, the framework we developed is not confined to MFDn; it can be used for other scientific applications as well. The output of this thesis is the task-parallel HPC infrastructure that we envisioned for both shared and distributed memory architectures.
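For orientation, the central memory-bound kernel in such solvers is the sparse matrix-vector product. A plain CSR (compressed sparse row) version is shown below as the baseline that such optimizations improve on; the thesis itself targets a different, custom storage format:

```python
import numpy as np

def spmv_csr(indptr, indices, data, x):
    """y = A @ x for a matrix stored in CSR format.

    indptr:  row pointers, length n_rows + 1
    indices: column indices of the nonzeros
    data:    nonzero values
    """
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        start, end = indptr[i], indptr[i + 1]
        # One multiply-add per loaded nonzero: low arithmetic intensity,
        # which is why this kernel is memory-bound on deep hierarchies.
        y[i] = np.dot(data[start:end], x[indices[start:end]])
    return y

# 2x2 example: A = [[2, 0], [1, 3]]
indptr = np.array([0, 1, 3])
indices = np.array([0, 0, 1])
data = np.array([2.0, 1.0, 3.0])
print(spmv_csr(indptr, indices, data, np.array([1.0, 1.0])))  # [2. 4.]
```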
- Title
- Sparse Large-Scale Multi-Objective Optimization for Climate-Smart Agricultural Innovation
- Creator
- Kropp, Ian Meyer
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
The challenge of our generation is to produce enough food to feed the present and future global population. This is no simple task, as the world population is expanding and becoming more affluent, and conventional agriculture often degrades the environment. Without a healthy and functional environment, agriculture as we know it will fail. Therefore, we must equally balance our broad goals of sustainability and food production as a single system. Multi-objective optimization, algorithms that...
The challenge of our generation is to produce enough food to feed the present and future global population. This is no simple task, as the world population is expanding and becoming more affluent, and conventional agriculture often degrades the environment. Without a healthy and functional environment, agriculture as we know it will fail. Therefore, we must equally balance our broad goals of sustainability and food production as a single system. Multi-objective optimization, a family of algorithms that search for solutions to complex problems with conflicting objectives, is an effective tool for balancing these two goals. In this dissertation, we apply multi-objective optimization to find optimal management practices for irrigating and fertilizing corn. There are two areas for improvement in the multi-objective optimization of corn management: existing methods run burdensomely slowly, and they do not account for the uncertainty of weather. Improving run time and optimizing in the face of weather uncertainty are the two goals of this dissertation. We address these goals with four novel methodologies that advance the fields of biosystems and agricultural engineering as well as computer science engineering. In the first study, we address the first goal by drastically improving the performance of evolutionary multi-objective algorithms on sparse large-scale optimization problems. Sparse optimization problems, such as irrigation and nutrient management, are problems whose optimal solutions are mostly zero; a minimal sketch of this idea follows the description below. Our novel algorithm, called sparse population sampling (SPS), integrates with and improves all population-based algorithms over almost all test scenarios. SPS, when used with NSGA-II, outperformed the existing state-of-the-art algorithms on the most complex sparse large-scale optimization problems (i.e., those with 2,500 or more decision variables). The second study addresses the second goal by optimizing common management practices at a study site in Cass County, Michigan, across all climate scenarios. This methodology, which relies on SPS from the first study, implements the concept of innovization in agriculture. In our innovization framework, 30 years of management practices were optimized against observed weather data and compared to common practices in Cass County; the differences between the optimal solutions and the common practices were then transformed into simple recommendations for farmers to apply during future growing seasons. Our recommendations drastically increased yields under 420 validation scenarios with no impact on nitrogen leaching. The third study further improves the performance of sparse large-scale optimization. Where SPS was a single component of a population-based algorithm, our proposed method, S-NSGA-II, is a complete, novel evolutionary algorithm for sparse large-scale optimization problems. Our algorithm outperforms, or performs as well as, other contemporary sparse large-scale optimization algorithms, especially on problems with more than 800 decision variables. This enhanced convergence will further improve multi-objective optimization in agriculture. Our final study, which addresses the second goal, takes a different approach to optimizing agricultural systems in the face of climate uncertainty: we use stochastic weather to quantify risk in optimization, so that farmers can choose among optimal management decisions with a full understanding of the risks involved in each decision.
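As a rough illustration of why sparsity-aware sampling helps, the sketch below initializes a population whose members are mostly zero, matching the structure of optimal irrigation and fertilization schedules. The function name and parameters are hypothetical, and the published SPS operator is more involved than this; the sketch only conveys the core idea of biasing a population-based search toward sparse candidates.

    # Hypothetical sketch of sparse-aware initialization (not the published
    # SPS operator): seed a population-based optimizer with candidates that
    # are mostly zero, since optimal schedules activate few decision variables.
    import numpy as np

    rng = np.random.default_rng(seed=0)

    def sparse_init(pop_size, n_vars, nonzero_frac=0.02, low=0.0, high=1.0):
        """Return a (pop_size, n_vars) population whose members are mostly zero."""
        pop = np.zeros((pop_size, n_vars))
        k = max(1, int(nonzero_frac * n_vars))
        for ind in pop:
            active = rng.choice(n_vars, size=k, replace=False)  # few active variables
            ind[active] = rng.uniform(low, high, size=k)
        return pop

    pop = sparse_init(pop_size=100, n_vars=2500)  # 2,500 variables, as in the abstract
    print((pop != 0).mean())  # ~0.02: each candidate is about 98% zeros

Starting the search in the sparse region of a 2,500-dimensional space spares the algorithm from first having to drive thousands of irrelevant variables to zero, which is one intuition for the reported convergence gains.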
- Title
- Deep Convolutional Networks for Modeling Geo-Spatio-Temporal Relationships and Extremes
- Creator
- Wilson, Tyler
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Geo-spatio-temporal data are valuable for a broad range of applications including traffic forecasting, weather prediction, detection of epidemic outbreaks, and crime monitoring. Data-driven approaches to these problems must address several fundamental challenges; the two we focus on are modeling geo-spatio-temporal relationships and extreme events. Another recent technological shift has been the success of deep learning, especially in applications such as computer vision, speech recognition, and natural language processing. In this work, we argue that deep learning is a promising approach for many geo-spatio-temporal problems and highlight how it can be used to address the challenges of modeling geo-spatio-temporal relationships and extremes. Though previous research has established techniques for modeling spatio-temporal relationships, these approaches are often limited to gridded spatial data with fixed-length feature vectors, and they consider only spatial relationships among the features while ignoring the relationships among model parameters. We begin by describing how the spatial and temporal relationships of non-gridded spatial data can be modeled simultaneously by coupling a graph convolutional network with a long short-term memory (LSTM) network. Unlike previous research, our framework treats the adjacency matrix associated with the spatial data as a model parameter that can be learned from data, with constraints on its sparsity and rank to reduce the number of estimated parameters. Further, we show that the learned adjacency matrix may reveal useful information about the dominant spatial relationships that exist within the data. Second, we explore the varieties of spatial relationships that may exist in a geo-spatial prediction task. Specifically, we distinguish between spatial relationships among predictors and spatial relationships among model parameters at different locations. We demonstrate an approach for modeling spatial dependencies among model parameters using graph convolution and provide guidance on when convolution of each type can be effectively applied. We evaluate our proposed approach on climate downscaling and weather prediction tasks. Next, we introduce DeepGPD, a novel deep learning framework for predicting the distribution of geo-spatio-temporal extreme events. We draw on research in extreme value theory and use the generalized Pareto distribution (GPD) to model the distribution of excesses over a threshold. The GPD is integrated into our deep learning framework to learn the distribution of future excess values while incorporating the geo-spatio-temporal relationships present in the data. This requires a novel reparameterization of the GPD to ensure that its constraints are satisfied by the outputs of the neural network. DeepGPD also employs a deep set architecture to handle the variable-sized feature sets, corresponding to excess values from previous time steps, that serve as its predictors. We demonstrate the effectiveness of our proposed approach on a real-world precipitation data set. Finally, we extend the DeepGPD formulation to simultaneously predict the distribution of extreme events and accurately infer their point estimates. Doing so requires modeling the full distribution of the data, not just its extreme values. We propose DEMM, a deep mixture model for modeling the distribution of both excess and non-excess values. To ensure that DEMM's point estimates are feasible values, new constraints on the output of the neural network are introduced, which in turn require a new reparameterization of the GPD's model parameters. We conclude by discussing possibilities for further research at the intersection of deep learning and geo-spatio-temporal data.
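The constraint-satisfaction idea in the abstract can be made concrete with a standard construction: pass the network's unconstrained outputs through a softplus so the GPD scale is positive, and bound the shape below one so the distribution's mean stays finite and a point estimate is well defined. This is a generic sketch of that idea; the specific reparameterizations proposed in the dissertation are not given in the abstract and may differ.

    # Hedged sketch: one standard way to map raw neural-network outputs to
    # valid GPD parameters (not necessarily the dissertation's method).
    # Scale sigma must be positive; requiring shape xi < 1 keeps the GPD
    # mean sigma / (1 - xi) finite, so point estimates are feasible values.
    import numpy as np

    def softplus(z):
        # Numerically stable log(1 + exp(z)).
        return np.logaddexp(0.0, z)

    def gpd_params(raw_scale, raw_shape):
        sigma = softplus(raw_scale)      # sigma > 0
        xi = 1.0 - softplus(raw_shape)   # xi < 1
        return sigma, xi

    sigma, xi = gpd_params(raw_scale=0.3, raw_shape=-0.5)
    mean_excess = sigma / (1.0 - xi)     # finite because xi < 1
    print(sigma, xi, mean_excess)

Because softplus is smooth and maps all of R onto the valid parameter region, the network can be trained end to end by gradient descent without any projection step, which is the usual motivation for reparameterizing constrained distribution parameters this way.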