Search results (1 - 20 of 30)
- Title
- Robust multi-task learning algorithms for predictive modeling of spatial and temporal data
- Creator
- Liu, Xi (Graduate of Michigan State University)
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
- "Recent years have witnessed significant growth in the spatial and temporal data generated from various disciplines, including geophysical sciences, neuroscience, economics, criminology, and epidemiology. Such data have been extensively used to train spatial and temporal models that can make predictions either at multiple locations simultaneously or along multiple forecasting horizons (lead times). However, training an accurate prediction model in these domains can be challenging, especially when there is significant noise, there are missing values, or only limited training examples are available. The goal of this thesis is to develop novel multi-task learning frameworks that can exploit the spatial and/or temporal dependencies of the data to ensure robust predictions in spite of these data quality and scarcity problems. The first framework developed in this dissertation is designed for multi-task classification of time series data. Specifically, the prediction task here is to continuously classify the activities of a human subject based on multi-modal sensor data collected in a smart home environment. As the classes exhibit strong spatial and temporal dependencies, this makes it an ideal setting for applying a multi-task learning approach. Nevertheless, since the types of sensors deployed often vary from one room (location) to another, this introduces a structured missing value problem, in which blocks of sensor data could be missing when a subject moves from one room to another. To address this challenge, a probabilistic multi-task classification framework is developed to jointly model the activity recognition tasks from all the rooms, taking into account the block-missing value problem. The framework also learns the transitional dependencies between classes to improve its overall prediction accuracy. The second framework is developed for the multi-location time series forecasting problem. Although multi-task learning has been successfully applied to many time series forecasting applications such as climate prediction, conventional approaches aim to minimize only the point-wise residual error of their predictions instead of considering how well their models fit the overall distribution of the response variable. As a result, their predicted distribution may not fully capture the true distribution of the data. In this thesis, a novel distribution-preserving multi-task learning framework is proposed for the multi-location time series forecasting problem. The framework uses a non-parametric density estimation approach to fit the distribution of the response variable and employs an L2-distance function to minimize the divergence between the predicted and true distributions. The third framework proposed in this dissertation is for the multi-step-ahead (long-range) time series prediction problem, with application to ensemble forecasting of sea surface temperature. Specifically, our goal is to effectively combine the forecasts generated by various numerical models at different lead times to obtain more precise predictions. Towards this end, a multi-task deep learning framework based on a hierarchical LSTM architecture is proposed to jointly model the ensemble forecasts of different models, taking into account the temporal dependencies between forecasts at different lead times. Experiments performed on 29 years of sea surface temperature data from the North American Multi-Model Ensemble (NMME) demonstrate that the proposed architecture significantly outperforms standard LSTM and other MTL approaches."--Pages ii-iii.
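The distribution-preserving idea in the second framework lends itself to a compact illustration. The sketch below (illustrative only, not the author's code; all names are hypothetical) fits Gaussian kernel density estimates to the predicted and observed responses and penalizes the L2 distance between them alongside a point-wise MSE term.

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth=0.5):
    """Non-parametric density estimate of `samples` evaluated on `grid`."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

def l2_distribution_loss(y_pred, y_true, n_grid=200):
    """L2 distance between the predicted and observed response densities."""
    lo = min(y_pred.min(), y_true.min()) - 1.0
    hi = max(y_pred.max(), y_true.max()) + 1.0
    grid = np.linspace(lo, hi, n_grid)
    p, q = gaussian_kde(y_pred, grid), gaussian_kde(y_true, grid)
    return np.trapz((p - q)**2, grid)

# Combined objective: point-wise error plus the distributional term.
rng = np.random.default_rng(0)
y_true = rng.normal(0.0, 1.0, 500)
y_pred = 0.8 * y_true + rng.normal(0.0, 0.3, 500)   # stand-in model output
mse = np.mean((y_pred - y_true)**2)
loss = mse + 0.1 * l2_distribution_loss(y_pred, y_true)
print(f"mse={mse:.3f}, combined={loss:.3f}")
```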
- Title
- Secure and efficient spectrum sharing and QoS analysis in OFDM-based heterogeneous wireless networks
- Creator
- Alahmadi, Ahmed S.
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
- "The Internet of Things (IoT), which networks versatile devices for information exchange, remote sensing, monitoring, and control, is finding promising applications in nearly every field. However, due to its high density and enormous spectrum requirement, the practical deployment of IoT technology may not be feasible until the release of the large millimeter wave (mmWave) band (30 GHz-300 GHz). Compared to existing lower-band systems (such as 3G and 4G), mmWave band signals generally require a line-of-sight (LOS) path and suffer from severe fading effects, leading to a much smaller coverage area. For network design and management, this implies that: (i) the mmWave band alone cannot support IoT networks, but has to be integrated with the existing lower-band systems through secure and effective spectrum sharing, especially in the lower frequency bands; and (ii) IoT networks will have very high density node distributions, which poses a significant challenge in network design, especially given the scarce energy budget of IoT applications. Motivated by these observations, in this dissertation we consider three problems: (1) How to achieve secure and effective spectrum sharing? (2) How to accommodate energy-limited IoT devices? (3) How to evaluate the Quality of Service (QoS) in high-density IoT networks? We aim to develop innovative techniques for the design, evaluation, and management of future IoT networks under both benign and hostile environments. The main contributions of this dissertation are outlined as follows. First, we develop a secure and efficient spectrum sharing scheme for single-carrier wireless networks. Cognitive radio (CR) is a key enabling technology for spectrum sharing, where unoccupied spectrum is identified for secondary users (SUs) without interfering with the primary user (PU). A serious security threat to CR networks is the primary user emulation attack (PUEA), in which a malicious user (MU) emulates the signal characteristics of the PU, thereby causing the SUs to erroneously identify the attacker as the PU. Here, we consider full-band PUEA detection and propose a reliable AES-assisted DTV scheme, where an AES-encrypted reference signal is generated at the DTV transmitter and used as the sync bits of the DTV data frames. For PU detection, we investigate the cross-correlation between the received sequence and the reference sequence. MU detection can be performed by investigating the auto-correlation of the received sequence. We further develop a secure and efficient spectrum sharing scheme for multi-carrier wireless networks. We consider sub-band malicious user detection and propose a secure AES-based DTV scheme, where the existing reference sequence used to generate the pilot symbols in the DVB-T2 frames is encrypted using the AES algorithm. The resulting sequence is exploited for accurate detection of the authorized PU and the MU. Second, we develop an energy-efficient transmission scheme for CR networks using energy harvesting. We propose a transmission scheme for the SUs such that each SU can perform information reception and energy harvesting simultaneously. We perform sum-rate optimization for the SUs under PUEA. It is observed that the sum rate of the SU network can be improved significantly with the energy harvesting technique. Potentially, the proposed scheme can be applied directly to energy-constrained IoT networks. Finally, we investigate QoS performance analysis methodologies, which can provide insightful feedback for IoT network design and planning. Taking the spatial randomness of the IoT network into consideration, we investigate the coverage probability (CP) and blocking probability (BP) in relay-assisted OFDMA networks using stochastic geometry. More specifically, we model the inter-cell interference from the neighboring cells at each typical node, and derive the CP for downlink transmissions. Based on their data rate requirements, we classify the incoming users into different classes, and calculate the BP using the multi-dimensional loss model."--Pages ii-iii.
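The essence of the AES-assisted detection scheme, correlating the received sequence against a keyed pseudo-random reference that only the legitimate transmitter can produce, can be sketched as follows. To stay self-contained, a SHA-256 counter-mode generator stands in for the AES keystream; all names and parameters are illustrative.

```python
import hashlib
import numpy as np

def keyed_reference(key: bytes, n_bits: int) -> np.ndarray:
    """Pseudo-random +/-1 reference sequence; stands in for the AES-encrypted
    sync sequence shared by the transmitter and the detector."""
    out, counter = [], 0
    while len(out) < n_bits:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend((byte >> i) & 1 for byte in block for i in range(8))
        counter += 1
    return 2 * np.array(out[:n_bits]) - 1

key = b"shared-secret"            # hypothetical shared key
ref = keyed_reference(key, 1024)

rng = np.random.default_rng(1)
received_pu = ref + rng.normal(0, 1.0, ref.size)  # authorized PU present
received_mu = rng.normal(0, 1.0, ref.size)        # MU mimics the PU's waveform
                                                  # but cannot forge the keyed sync

for name, rx in [("PU", received_pu), ("MU", received_mu)]:
    corr = np.dot(rx, ref) / ref.size             # high only for the real PU
    print(f"{name}: normalized cross-correlation = {corr:.3f}")
```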
- Title
- Automated addition of fault-tolerance via lazy repair and graceful degradation
- Creator
- Lin, Yiyan
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
- In this dissertation, we concentrate on the problem of automated addition of fault-tolerance, which transforms a fault-intolerant program into a fault-tolerant program. We solve this problem via model repair. Model repair is a correct-by-construction technique that revises an existing model so that the revised model satisfies the given correctness criteria, such as safety, liveness, or fault-tolerance. We consider two problems in using model repair to add fault-tolerance. First, if the repaired model violates the assumptions made in the underlying system (e.g., partial observability, inability to detect crashed processes), then it cannot be implemented. We denote these requirements as realizability constraints. Second, the addition of fault-tolerance may fail if the program cannot fully recover after certain faults occur. In this dissertation, we propose a lazy repair approach to address realizability issues in adding fault-tolerance. Additionally, we propose a technique to automatically add graceful degradation to a program, so that the program can recover with partial functionality (identified by the designer as the critical functionality) if full recovery is impossible. A model repair technique transforms a model into another model that satisfies a new set of properties. Such a transformation should also maintain the mapping between the model and the underlying program. For example, in a distributed program, every process is restricted to read (or write) only some variables in other processes. A model that represents this program should likewise disallow a process from reading (or writing) those inaccessible variables. If these constraints are violated, the model is unrealizable: it may be impossible to obtain a corresponding implementation, or the extra modifications required to implement it may in turn break the program's correctness. Resolving realizability constraints increases the complexity of model repair. Existing model repair techniques introduce heuristics to reduce this complexity, but these heuristics are designed and optimized specifically for distributed programs; a more generic model repair approach is needed for other types of programs, e.g., synchronous programs and cyber-physical programs. Hence, in this dissertation, we propose a model repair technique, lazy repair, to add fault-tolerance to programs with different types of realizability constraints. It involves two steps. First, we focus only on repairing to obtain a model that satisfies the correctness criteria while ignoring realizability constraints. In the second step, we repair this model further by removing behaviors, while ensuring that the desired specification is preserved. The lazy repair approach simplifies the process of developing heuristics, and provides a tradeoff between the time saved in the first step and the extra work required in the second step. We demonstrate that lazy repair is applicable in the context of distributed systems, synchronous systems, and cyber-physical systems. In addition, safety-critical systems such as airplanes, automobiles, and elevators should operate with high dependability in the presence of faults. If the occurrence of faults breaks down some components, the system may not be able to fully recover. In this scenario, the system can still operate with the remaining resources and deliver partial but core functionality, i.e., display graceful degradation. Existing model repair approaches, such as the addition of fault-tolerance, cannot transform a program to provide graceful degradation. In this dissertation, we propose a technique to add fault-tolerance to a program with graceful degradation. In the absence of faults, such a program exhibits ideal behaviors; in the presence of faults, the program is allowed to recover with reduced functionality. This technique involves two steps. First, it automatically generates a program with graceful degradation based on the input fault-intolerant program. Second, it adds fault-tolerance to the output program of the first step. We demonstrate that this technique is applicable in the context of high atomicity programs as well as low atomicity programs (i.e., distributed programs). We also present a case study on adding multi-graceful degradation to a dangerous gas detection and ventilation system. Through this case study, we show that our approach can assist the designer in obtaining a program that behaves like the deployed system.
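A toy rendition of the two-step lazy repair idea on an explicit-state transition system may help (structure and names are hypothetical, not the dissertation's algorithm): step one repairs for safety while ignoring realizability, and step two prunes unrealizable transitions while checking that the desired behavior survives.

```python
# Toy lazy repair on an explicit-state transition system.
transitions = {                      # state -> set of successor states
    "init": {"work", "bad"},
    "work": {"goal", "bad"},
    "bad":  {"init"},
    "goal": set(),
}
unsafe = {"bad"}                                   # safety: never reach "bad"
realizable = {("init", "work"), ("work", "goal")}  # e.g., read/write limits

def repair(trans, forbidden):
    """Step 1: drop transitions into unsafe states, ignoring realizability."""
    return {s: {t for t in succ if t not in forbidden}
            for s, succ in trans.items() if s not in forbidden}

def restrict(trans, allowed_edges):
    """Step 2: prune transitions the platform cannot implement."""
    return {s: {t for t in succ if (s, t) in allowed_edges}
            for s, succ in trans.items()}

def reaches(trans, src, dst):
    seen, stack = set(), [src]
    while stack:
        s = stack.pop()
        if s == dst:
            return True
        if s not in seen:
            seen.add(s)
            stack.extend(trans.get(s, ()))
    return False

step1 = repair(transitions, unsafe)
step2 = restrict(step1, realizable)
assert reaches(step2, "init", "goal")   # desired behavior preserved
print(step2)
```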
- Title
- Computational identification and analysis of non-coding RNAs in large-scale biological data
- Creator
- Lei, Jikai
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
- Non-protein-coding RNAs (ncRNAs) are RNA molecules that function directly at the level of RNA without being translated into protein. They play important biological roles in all three domains of life, i.e., Eukarya, Bacteria, and Archaea. To understand the working mechanisms and the functions of ncRNAs in various species, a fundamental step is to identify both known and novel ncRNAs from large-scale biological data. Large-scale genomic data includes both genomic sequence data and NGS sequencing data, and both types provide great opportunities for identifying ncRNAs. For genomic sequence data, many ncRNA identification tools based on comparative sequence analysis have been developed. These methods work well for ncRNAs that have strong sequence similarity, but they are not well-suited for detecting ncRNAs that are only remotely homologous. Next generation sequencing (NGS), while it opens a new horizon for annotating and understanding known and novel ncRNAs, also introduces many challenges. First, existing genomic sequence searching tools cannot be readily applied to NGS data because NGS technology produces short, fragmentary reads. Second, most NGS data sets are large-scale, and existing algorithms are infeasible on them because of high resource requirements. Third, metagenomic sequencing, which utilizes NGS technology to sequence uncultured, complex microbial communities directly from their natural habitats, further aggravates these difficulties. Thus, the massive amount of genomic sequence data and NGS data calls for efficient algorithms and tools for ncRNA annotation. In this dissertation, I present three computational methods and tools to efficiently identify ncRNAs from large-scale biological data. Chain-RNA is a tool that combines both sequence similarity and structure similarity to locate cross-species conserved RNA elements with low sequence similarity in genomic sequence data. It achieves significantly higher sensitivity in identifying remotely conserved ncRNA elements than sequence-based methods such as BLAST, and is much faster than existing structural alignment tools. miR-PREFeR (miRNA PREdiction From small RNA-Seq data) utilizes expression patterns of miRNA and follows the criteria for plant microRNA annotation to accurately predict plant miRNAs from one or more small RNA-Seq data samples. It is sensitive, accurate, fast, and has a low memory footprint. metaCRISPR focuses on identifying Clustered Regularly Interspaced Short Palindromic Repeats (CRISPRs) in large-scale metagenomic sequencing data. It uses a k-mer hash table to efficiently detect reads that belong to CRISPRs in the raw metagenomic data set. Overlap-graph-based clustering is then conducted on the reduced data set to separate different CRISPRs, and a set of graph-based algorithms is used to assemble and recover CRISPRs from the clusters.
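The first stage of a metaCRISPR-style pipeline, using a k-mer hash table to pull CRISPR-related reads out of a large read set, can be illustrated with a minimal sketch (sequences and thresholds here are made up for the example):

```python
from collections import defaultdict

def kmers(seq, k):
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def build_kmer_table(seed_seqs, k=8):
    """Hash table from k-mer -> count, built from known repeat sequences."""
    table = defaultdict(int)
    for seq in seed_seqs:
        for km in kmers(seq, k):
            table[km] += 1
    return table

def filter_reads(reads, table, k=8, min_hits=3):
    """Keep reads sharing at least `min_hits` k-mers with the table."""
    return [r for r in reads
            if sum(km in table for km in kmers(r, k)) >= min_hits]

repeat = "GTTTTAGAGCTATGCTGTTTTG"      # a CRISPR-like repeat (illustrative)
reads = [
    "AAGTTTTAGAGCTATGCTGTTT",          # overlaps the repeat
    "CCCCGGGGAAAATTTTCCCCGG",          # unrelated read
]
table = build_kmer_table([repeat])
print(filter_reads(reads, table))      # -> only the first read survives
```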
- Title
- Fluid animation on deforming surface meshes
- Creator
- Wang, Xiaojun (Graduate of Michigan State University)
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
- "We explore methods for visually plausible fluid simulation on deforming surfaces with inhomogeneous diffusion properties. While there are methods for fluid simulation on surfaces, little research effort has focused on the influence of the motion of the underlying surface, in particular when it is not rigid, such as knitted or woven textiles in motion. Accounting for the non-inertial local frames typically used to describe the motion, and for the anisotropic effects in diffusion, absorption, and adsorption, makes the simulation challenging. Thus, our primary goal is to enable a fast and stable method for such scenarios. First, to prepare the material properties for the surface domain, we describe textiles with salient feature directions by bulk material property tensors in order to reduce the complexity, employing a 2D homogenization technique that effectively turns microscale inhomogeneous properties into homogeneous macroscale descriptions. We then use standard texture mapping techniques to map these tensors to triangles in the curved surface mesh, taking into account the alignment of each local tangent space with the correct feature directions of the macroscale tensor. We show that this homogenization tool is intuitive, flexible, and easily adjusted. Second, for an efficient description of the deforming surface, we offer a new geometry representation that uses solely angles instead of vertex coordinates, reducing the storage needed for the motion of the underlying surface. Since our simulation tool relies heavily on long sequences of 3D curved triangular meshes, it is worthwhile to explore such efficient representations to make our tool practical, reducing memory access during real-time simulations as well as file sizes. Inspired by angle-based representations for tetrahedral meshes, we use a spectral method to restore the curved surface from both the interior angles of the triangles and the dihedral angles between adjacent triangles in the mesh. Moreover, in many surface deformation sequences, it is often sufficient to update the dihedral angles while keeping the triangle interior angles fixed. Third, we propose a framework for simulating various effects of fluid flowing on deforming surfaces. We apply our simulator directly on curved surface meshes instead of in parameter domains, whereas many existing simulation methods require a parameterization of the surface. We further demonstrate that fictitious forces induced by the surface motion can be added to the surface-based simulation at a small additional cost. These fictitious forces can be decomposed into different components, of which only the rectilinear and Coriolis components are relevant to our choice of local frames. Other effects, such as diffusion, adsorption, absorption, and evaporation, are also incorporated for realistic stain simulation. Finally, we explore the extraction of the Lagrangian Coherent Structure (LCS), which is often referred to as the skeleton of fluid motion. LCS structures are often described by ridges of the finite-time Lyapunov exponent (FTLE) field, which describes the extremal stretching of fluid parcels following the flow. We propose a novel improvement to the ridge marching algorithm, which extracts such ridges robustly from the typically noisy FTLE estimates, even in well-defined fluid flows. Our results are potentially applicable to visualizing and controlling fluid trajectory patterns. In contrast to current methods for LCS calculation, which are only applicable to flat 2D or 3D domains and are sensitive to noise, our ridge extraction is readily applicable to curved surfaces even when they are deforming. The collection of these computational tools will facilitate the generation of realistic and easy-to-adjust surface fluid animation with various physically plausible effects on the surface."--Pages ii-iii.
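The FTLE field mentioned in the last contribution has a standard definition that is easy to sketch: the log of the largest singular value of the flow-map gradient, scaled by the integration time. Below is a minimal NumPy version on a flat 2D grid (the thesis works on deforming curved surfaces; this sketch only illustrates the quantity itself, with an invented saddle flow as input):

```python
import numpy as np

def ftle_2d(phi_x, phi_y, dx, dy, T):
    """FTLE from a flow map (phi_x, phi_y) sampled on a regular grid:
    log of the largest singular value of the flow-map gradient over |T|."""
    dphix_dy, dphix_dx = np.gradient(phi_x, dy, dx)
    dphiy_dy, dphiy_dx = np.gradient(phi_y, dy, dx)
    ftle = np.empty_like(phi_x)
    for i in range(phi_x.shape[0]):
        for j in range(phi_x.shape[1]):
            F = np.array([[dphix_dx[i, j], dphix_dy[i, j]],
                          [dphiy_dx[i, j], dphiy_dy[i, j]]])
            C = F.T @ F                         # Cauchy-Green strain tensor
            lam_max = np.linalg.eigvalsh(C)[-1]
            ftle[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
    return ftle

# Flow map of a simple saddle flow x' = x, y' = -y integrated over time T.
T, n = 1.0, 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
phi_x, phi_y = x * np.exp(T), y * np.exp(-T)
field = ftle_2d(phi_x, phi_y, x[0, 1] - x[0, 0], y[1, 0] - y[0, 0], T)
print(field.mean())   # ~1.0 everywhere for this linear flow
```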
- Title
- Reliable 5G system design and networking
- Creator
- Liang, Yuan (Graduate of Michigan State University)
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
- The upcoming fifth generation (5G) system is expected to support a variety of devices and applications, such as ultra-reliable and low-latency communications, the Internet of Things (IoT), and mobile cloud computing. Reliable and effective communications lie at the core of 5G system design. This dissertation is focused on the design and evaluation of robust 5G systems under both benign and malicious environments, with considerations on both the physical layer and higher layers. For the physical layer, we study secure and efficient 5G transceivers under hostile jamming. We propose a securely precoded OFDM (SP-OFDM) system for efficient and reliable transmission under disguised jamming, a serious threat to 5G in which the jammer intentionally confuses the receiver by mimicking the characteristics of the authorized signal, causing complete communication failure. We achieve a dynamic constellation by introducing secure randomness between the legitimate transmitter and receiver, and hence break the symmetry between the authorized signal and the disguised jamming. It is shown that, due to the secure randomness shared between the authorized transmitter and receiver, SP-OFDM can achieve a positive channel capacity under disguised jamming. The robustness of the proposed SP-OFDM scheme under disguised jamming is demonstrated through both theoretical and numerical analyses. We further address the problem of finding the worst jamming distribution, in terms of channel capacity, for the SP-OFDM system. We consider a practical communication scenario, where the transmitted symbols are uniformly distributed over a discrete and finite alphabet, and the jamming interference is subject to an average power constraint but may or may not have a peak power constraint. Using tools from functional analysis and complex analysis, we first prove the existence and uniqueness of the worst jamming distribution. Second, by analyzing the Kuhn-Tucker conditions for the worst jamming, we prove that the worst jamming distribution is discrete in amplitude with a finite number of mass points. For the higher layers, we start with the modeling of 5G high-density heterogeneous networks. We investigate the effect of relay randomness on the end-to-end throughput in multi-hop wireless networks using stochastic geometry. We model the nodes as Poisson point processes and calculate the spatial average of the throughput over all potential geometrical patterns of the nodes. More specifically, for problem tractability, we first consider the simple nearest-neighbor (NN) routing protocol and analyze its end-to-end throughput to obtain a performance benchmark. Next, noting that ideal equal-distance routing is generally not realizable due to the randomness in relay distribution, we propose a quasi-equal-distance (QED) routing protocol. We derive the range for the optimal hop distance, and analyze the end-to-end throughput both with and without intra-route resource reuse. It is shown that the proposed QED routing protocol achieves a significant performance gain over NN routing. Finally, we consider malicious link detection in multi-hop wireless sensor networks (WSNs), an important application of 5G multi-hop wireless networks. Existing work on malicious link detection generally requires that the detection process be performed at the intermediate nodes, leading to considerable overhead in system design, as well as unstable detection accuracy due to limited resources and uncertainty in the loyalty of the intermediate nodes themselves. We propose an efficient and robust malicious link detection scheme that exploits the statistics of packet delivery rates only at the base stations. More specifically, we first present a secure packet transmission protocol to ensure that, except for the base stations, no intermediate node on the route can access the contents or routing paths of the packets. Second, we design a malicious link detection algorithm that can effectively detect irregular dropout at every hop (or link) along the routing path, with a guaranteed false alarm rate and a low missed detection rate.
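The core SP-OFDM idea, shared secure randomness that a disguised jammer cannot reproduce, can be illustrated with a toy QPSK example (a sketch under simplifying assumptions, not the proposed transceiver; the seed and jamming power are made up):

```python
import numpy as np

def shared_phases(seed, n):
    """Both endpoints derive the same pseudo-random phase sequence from a
    shared secret; a jammer without the secret cannot reproduce it."""
    return np.random.default_rng(seed).uniform(0, 2 * np.pi, n)

secret, n = 20240917, 8                              # hypothetical shared seed
rng = np.random.default_rng(5)
bits = rng.integers(0, 4, n)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # plain QPSK symbols

tx = qpsk * np.exp(1j * shared_phases(secret, n))    # secure random rotation
# Disguised jamming: mimics the QPSK constellation but lacks the secret.
jam = 0.5 * np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))

rx = tx + jam
derot = rx * np.exp(-1j * shared_phases(secret, n))  # receiver undoes rotation
# The authorized symbols realign with the QPSK grid while the jamming is
# scrambled to a random phase: the signal/jamming symmetry is broken and the
# symbols remain decodable.
decoded = ((np.angle(derot) - np.pi / 4) / (np.pi / 2)).round().astype(int) % 4
print("sent:   ", bits.tolist())
print("decoded:", decoded.tolist())
```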
- Title
- Using Eventual Consistency to Improve the Performance of Distributed Graph Computation In Key-Value Stores
- Creator
- Nguyen, Duong Ngoc
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- Key-value stores have gained increasing popularity due to their fast performance and simple data model. A key-value store usually consists of multiple replicas located in different geographical regions to provide higher availability and fault tolerance, and a protocol is employed to ensure that data are consistent across the replicas. The CAP theorem states the impossibility of simultaneously achieving three desirable properties in a distributed system, namely consistency, availability, and network partition tolerance. Since failures are the norm in distributed systems, and the capability to maintain service at an acceptable level in the presence of failures is a critical dependability and business requirement of any system, partition tolerance is a necessity. Consequently, the trade-off between consistency and availability (performance) is inevitable: strong consistency is attained at the cost of slow performance, and fast performance is attained at the cost of weak consistency, resulting in a spectrum of consistency models suitable for different needs. Among these models, sequential consistency and eventual consistency are two common ones. The former is easier to program with but suffers from poor performance, whereas the latter provides higher performance but suffers from potential data anomalies. In this dissertation, we focus on the problem of what a designer should do when asked to solve a problem on a key-value store that provides eventual consistency. Specifically, we are interested in approaches that allow the designer to run applications on an eventually consistent key-value store and handle data anomalies if they occur during the computation. To that end, we investigate two options: (1) the detect-rollback approach, and (2) the stabilization approach. In the first option, the designer identifies a correctness predicate, say Φ, and continues to run the application as if it were running under sequential consistency, while our system monitors Φ. If Φ is violated (because the underlying key-value store provides eventual consistency), the system rolls back to a state where Φ holds and resumes the computation from there. In the second option, data anomalies are treated as state perturbations and handled by the convergence property of stabilizing algorithms. We choose LinkedIn's Voldemort key-value store as the example key-value store for our study. We run experiments with several graph-based applications on the Amazon AWS platform to evaluate the benefits of the two approaches. From the experimental results, we observe that, overall, both approaches provide benefits to the applications when compared to running them under sequential consistency. However, stabilization provides higher benefits, especially in the aggressive stabilization mode, which trades more perturbations for no locking overhead. The results suggest that while there is some cost associated with making an algorithm stabilizing, there may be a substantial benefit in revising an existing algorithm for the problem at hand to make it stabilizing, thereby reducing the overall runtime under eventual consistency. There are several directions for extension. For the detect-rollback approach, we are working to develop a more general rollback mechanism for the applications and to improve the efficiency and accuracy of the monitors. For the stabilization approach, we are working to develop an analytical model of the benefits of eventual consistency in stabilizing programs. Our current work focuses on silent stabilization, and we plan to extend our approach to other variations of stabilization.
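A minimal sketch of the detect-rollback loop may clarify the first option (hypothetical structure, not the dissertation's system): run steps optimistically, monitor the predicate Φ, and restore the last good checkpoint when it is violated.

```python
import copy

def run_with_detect_rollback(state, steps, phi, checkpoint_every=2):
    """Run `steps` (functions state -> state) as if consistency were strong,
    monitoring predicate `phi`; on a violation, roll back to the last
    checkpoint where `phi` held and resume from there."""
    checkpoint = copy.deepcopy(state)
    for i, step in enumerate(steps):
        state = step(state)
        if not phi(state):                 # anomaly from eventual consistency
            print(f"step {i}: phi violated, rolling back")
            state = copy.deepcopy(checkpoint)
        elif i % checkpoint_every == 0:
            checkpoint = copy.deepcopy(state)
    return state

# Toy example: the invariant is that the counter never drops below 0.
phi = lambda s: s["x"] >= 0
steps = [lambda s: {**s, "x": s["x"] + 1},
         lambda s: {**s, "x": s["x"] - 5},   # simulated stale-read anomaly
         lambda s: {**s, "x": s["x"] + 2}]
print(run_with_detect_rollback({"x": 0}, steps, phi))
```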
- Title
- Signal processing and machine learning approaches to enabling advanced sensing and networking capabilities in everyday infrastructure and electronics
- Creator
- Ali, Kamran (Scientist)
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- Mainstream commercial off-the-shelf (COTS) electronic devices of daily use are usually designed and manufactured to serve a very specific purpose. For example, WiFi routers and network interface cards (NICs) are designed for high-speed wireless communication, RFID readers and tags are designed to identify and track items in a supply chain, and smartphone vibrator motors are designed to provide haptic feedback (e.g., notifications in silent mode) to the users. This dissertation focuses on revisiting the physical layer of various such everyday COTS electronic devices, either to leverage the signals obtained from their physical layers to develop novel sensing applications, or to modify and improve their PHY/MAC layer protocols to enable even more useful deployment scenarios and networking applications, while keeping their original purpose intact, by introducing only software/firmware-level changes and completely avoiding any hardware-level changes. Adding such new functionality to existing everyday infrastructure and electronics has advantages both in cost and in convenience of use and deployment, as those devices (and their protocols) are already mainstream, easily available, and often already purchased and deployed to serve their mainstream purpose. In our work on WiFi-signal-based sensing, we propose signal processing and machine learning approaches to enable fine-grained gesture recognition and sleep monitoring using COTS WiFi devices. In our work on gesture recognition, we show for the first time that WiFi signals can be used to recognize small gestures with high accuracy. In our work on sleep monitoring, we propose for the first time a WiFi CSI based sleep quality monitoring scheme that can robustly track breathing and body/limb activity related vital signs during sleep throughout a night, in an individual- and environment-independent manner. In our work on RFID-signal-based sensing, we propose signal processing and machine learning approaches to effectively image customer activity in front of display items in places such as retail stores using COTS monostatic RFID devices (i.e., those which use a single antenna at a time for both transmitting and receiving RFID signals to and from the tags). The key novelty of this work is achieving multi-person activity tracking in front of display items by constructing coarse-grained images via robust, analytical model-driven, deep learning based RFID imaging. We implemented our scheme using a COTS RFID reader and tags. In our work on smartphone vibration based sensing, we propose a robust and practical vibration-based sensing scheme that works with smartphones with different hardware, can extract fine-grained vibration signatures of different surfaces, and is robust to environmental noise and hardware-based irregularities. A useful application of this sensing is symbolic localization/tagging, e.g., figuring out whether a user's device is in their hand, in a pocket, or on their bedroom table. Such symbolic tagging of locations can provide indirect information about user activities and intentions without any dedicated infrastructure, based on which we can enable useful services such as context-aware notifications and alarms. To make our scheme easily scalable and compatible with COTS smartphones, we design our signal processing and machine learning pipeline such that it relies only on the built-in vibration motor and microphone for sensing, and is robust to hardware irregularities and background environmental noise. We tested our scheme on two different Android smartphones. In our work on powerline communications (PLC), we propose a distributed spectrum sharing scheme for enterprise-level PLC mesh networks. This work is a major step towards using existing COTS PLC devices to connect different types of Internet of Things (IoT) devices for sensing- and control-related applications in large campuses such as enterprises. Our work is based on the identification of a key weakness of the existing HomePlug AV (HPAV) PLC protocol: it does not support spectrum sharing, i.e., currently each link operates over the whole available spectrum, and therefore only one link can operate at a time. Our proposed spectrum sharing scheme significantly boosts both aggregate and per-link throughput, by allowing multiple links to communicate concurrently while requiring only a few modifications to the existing HPAV protocol.
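As one concrete flavor of the CSI-based vital-sign tracking described above, breathing rate can be read off as a spectral peak in the respiratory band of a CSI amplitude stream. The sketch below simulates such a stream; the sampling rate, band limits, and signal model are illustrative assumptions, not the dissertation's pipeline.

```python
import numpy as np

def breathing_rate_bpm(csi_amplitude, fs):
    """Estimate breathing rate from a CSI amplitude stream by locating the
    dominant spectral peak in the 0.1-0.5 Hz respiratory band."""
    x = csi_amplitude - csi_amplitude.mean()
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(x))
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fs, seconds, true_bpm = 20.0, 60, 15     # 20 Hz CSI stream, 15 breaths/min
t = np.arange(int(fs * seconds)) / fs
rng = np.random.default_rng(2)
csi = (1.0 + 0.05 * np.sin(2 * np.pi * true_bpm / 60 * t)
       + 0.02 * rng.standard_normal(t.size))
print(f"estimated: {breathing_rate_bpm(csi, fs):.1f} bpm")
```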
- Title
- Distance-preserving graphs
- Creator
- Nussbaum, Ronald
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
- Let G be a simple graph on n vertices, where d_G(u,v) denotes the distance between vertices u and v in G. An induced subgraph H of G is isometric if d_H(u,v) = d_G(u,v) for all u,v in V(H). We say that G is a distance-preserving graph if G contains at least one isometric subgraph of order k for every k with 1 <= k <= n. A number of sufficient conditions exist for a graph to be distance-preserving. We show that all hypercubes, and all graphs with delta(G) >= 2n/3 - 1, are distance-preserving. Towards this end, we carefully examine the role of "forbidden" subgraphs. We discuss our observations, and provide some conjectures which we have computationally verified for small values of n. We say that a distance-preserving graph is sequentially distance-preserving if each subgraph in the set of isometric subgraphs is a superset of the previous one, and we consider this special case as well. There are a number of questions involving the construction of distance-preserving graphs. We show that it is always possible to add an edge to a non-complete sequentially distance-preserving graph such that the augmented graph is still sequentially distance-preserving, and we conjecture that the same is true of all distance-preserving graphs. We discuss our observations on making non-distance-preserving graphs into distance-preserving ones by adding edges. We show methods for constructing regular distance-preserving graphs, and consider constructing distance-preserving graphs for arbitrary degree sequences. As before, all conjectures here have been computationally verified for small values of n.
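The definitions above translate directly into a brute-force check that is feasible for small n (a sketch using networkx; exponential in n, so only for experimentation):

```python
import itertools
import networkx as nx

def is_isometric(G, nodes):
    """True if the subgraph induced by `nodes` preserves all G-distances."""
    H = G.subgraph(nodes)
    for u, v in itertools.combinations(nodes, 2):
        try:
            if nx.shortest_path_length(H, u, v) != nx.shortest_path_length(G, u, v):
                return False
        except nx.NetworkXNoPath:       # H disconnected: distance is infinite
            return False
    return True

def is_distance_preserving(G):
    """Brute force: an isometric induced subgraph of every order k exists."""
    n = G.number_of_nodes()
    return all(any(is_isometric(G, S)
                   for S in itertools.combinations(G.nodes, k))
               for k in range(1, n + 1))

print(is_distance_preserving(nx.hypercube_graph(3)))  # True, per the thesis
print(is_distance_preserving(nx.cycle_graph(6)))      # a cycle need not be
```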
- Title
- Towards Robust and Reliable Communication for Millimeter Wave Networks
- Creator
- Zarifneshat, Masoud
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- Future generations of wireless networks will benefit significantly from millimeter wave (mmW) technology, with frequencies ranging from about 30 GHz to 300 GHz. Specifically, the fifth generation of wireless networks has already adopted mmW technology, and the capacity requirements defined for 6G will also benefit from the mmW spectrum. Despite its attractions, the mmW spectrum has some inherent propagation properties that introduce challenges. The first is that free-space path loss in mmW is more severe than in the sub-6 GHz band. To make the mmW signal travel farther, communication systems need to use phased-array antennas to concentrate the signal power in a limited direction in space at each given time. Directional communication can incur high overhead on the system because it must probe the space to find signal paths: for efficient communication in the mmW spectrum, the transmitter and the receiver should align their beams on strong signal paths, which is a high-overhead task. The second is the low diffraction of the mmW spectrum. This low diffraction causes almost any object, including the human body, to easily block the mmW signal, degrading link quality. Avoiding and recovering from blockage in mmW communications, especially in dynamic environments, is particularly challenging because of the fast changes in the mmW channel. Due to these unique propagation characteristics, traditional user association methods perform poorly in the mmW spectrum. Therefore, we propose user association methods that consider the inherent propagation characteristics of the mmW signal. We first propose a method that collects the history of blockage incidents throughout the network and exploits these historical incidents to associate user equipment with the base station having a lower blockage probability. Simulation results show that our proposed algorithm improves link quality and the blockage rate in the network. User association based on only one objective, however, may deteriorate other objectives. Therefore, we formulate a biobjective optimization problem that considers the two objectives of load balance and blockage probability, and we conduct a Lagrangian dual analysis to decrease the time complexity. The results show that our solution to the biobjective optimization problem achieves a better outcome than optimizing each objective alone. After investigating the user association problem, we look into the problem of maintaining a robust link between a transmitter and a receiver. The directional propagation of the mmW signal creates the opportunity to exploit multipath for a robust link. The main causes of link quality degradation are blockage and link movement. We devise a learning-based prediction framework that efficiently and quickly classifies link blockage versus link movement using diffraction values, so that appropriate mitigating actions can be taken. Simulations show that the prediction framework can predict blockage with close to 90% accuracy, eliminating the need for time-consuming methods to discriminate between link movement and link blockage. After detecting the cause of link degradation, the system needs to perform beam alignment on the updated mmW signal paths, which is a high-overhead task. We propose using signaling in another frequency band to discover the paths surrounding a receiver operating in the mmW spectrum, so that the receiver does not have to do an expensive beam scan in the mmW band. Our experiments with off-the-shelf devices show that paths measured in a non-mmW frequency band can be used to align beams at mmW frequencies. In this dissertation, we thus provide solutions to fundamental problems in mmW communication: a user association method designed for mmW networks that accounts for the challenges of the mmW signal; a closed-form solution to a biobjective optimization problem that jointly optimizes blockage and network load balance; and a demonstration that out-of-band signals can efficiently exploit the multipath created in mmW communication. Future research directions include applying the methods proposed in this dissertation to classic wireless networking problems as they arise in the mmW spectrum.
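A toy version of blockage-history-aware association might look as follows (the scoring rule and numbers are invented for illustration; the dissertation's method is more involved): the user equipment trades signal strength against each base station's empirical blockage rate.

```python
# Toy user association trading signal strength against blockage history.
base_stations = {
    "bs1": {"rsrp_dbm": -70.0, "blockages": 35, "observations": 100},
    "bs2": {"rsrp_dbm": -78.0, "blockages": 2,  "observations": 100},
}

def association_score(bs, alpha=30.0):
    """Higher is better: strong signal, penalized by empirical blockage rate.
    `alpha` (made up here) weights blockage against dB of signal strength."""
    p_block = bs["blockages"] / bs["observations"]
    return bs["rsrp_dbm"] - alpha * p_block

best = max(base_stations, key=lambda k: association_score(base_stations[k]))
for name, bs in base_stations.items():
    print(name, round(association_score(bs), 1))
print("associate with:", best)   # the weaker but rarely blocked station wins
```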
- Title
- Hardware algorithms for high-speed packet processing
- Creator
- Norige, Eric
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
- The networking industry is facing enormous challenges in scaling devices to support the exponential growth of internet traffic as well as the increasing number of features being implemented inside the network. Algorithmic hardware improvements to networking components have largely been neglected due to the ease of leveraging increased clock frequency and compute power, and the risks of implementing complex hardware designs. As clock frequency slows its growth, algorithmic solutions become important to fill the gap between current-generation capability and next-generation requirements. This work presents algorithmic solutions to networking problems in three domains: deep packet inspection (DPI), firewall (and other) ruleset compression, and non-cryptographic hashing. The improvements in DPI are two-pronged. The first is in the area of application-level protocol field extraction, which allows security devices to precisely identify packet fields for targeted validity checks. By using counting automata, we achieve precise parsing of non-regular protocols with small, constant per-flow memory requirements, extracting at rates of up to 30 Gbps on real traffic in software while using only 112 bytes of state per flow. The second DPI improvement is on the long-standing regular expression matching problem, where we complete the HFA solution to the DFA state explosion problem with efficient construction algorithms and an optimized memory layout for hardware or software implementation. These methods construct, in seconds, automata too complex for previous methods, while being capable of 29 Gbps throughput with an ASIC implementation. Firewall ruleset compression enables more firewall entries to be stored in a fixed-capacity pattern matching engine, and can also be used to reorganize a firewall specification for higher-performance software matching. A novel recursive structure called TUF is given to unify the best known solutions to this problem and suggest future avenues of attack. These algorithms, with little tuning, achieve a 13.7% improvement in compression on large, real-life classifiers, and can achieve the same results as existing algorithms while running 20 times faster. Finally, non-cryptographic hash functions can be used for anything from hash tables that track network flows to packet sampling for traffic characterization. We give a novel approach to generating hardware hash functions between the extremes of expensive cryptographic hash functions and low-quality linear hash functions. To evaluate these mid-range hash functions properly, we develop new evaluation methods that better distinguish non-cryptographic hash function quality. The hash functions described here achieve low-latency, wide hashing with good avalanche and universality properties at a much lower cost than existing solutions.
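The avalanche criterion used to judge non-cryptographic hash quality is easy to state and test: flipping one input bit should flip about half of the output bits. Below is a small evaluation harness around an illustrative 32-bit mixer (not a hash function from the thesis):

```python
import random

def hash32(x: int) -> int:
    """A simple non-cryptographic 32-bit mixer (illustrative only)."""
    x = (x ^ (x >> 16)) * 0x45d9f3b & 0xFFFFFFFF
    x = (x ^ (x >> 16)) * 0x45d9f3b & 0xFFFFFFFF
    return (x ^ (x >> 16)) & 0xFFFFFFFF

def avalanche_score(h, trials=10_000, bits=32):
    """Mean fraction of output bits flipped when one input bit flips;
    an ideal hash scores close to 0.5."""
    rng = random.Random(0)
    flipped = 0
    for _ in range(trials):
        x = rng.getrandbits(bits)
        b = rng.randrange(bits)
        flipped += bin(h(x) ^ h(x ^ (1 << b))).count("1")
    return flipped / (trials * bits)

print(f"avalanche score: {avalanche_score(hash32):.3f}")   # ~0.5 is good
```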
- Title
- Achieving reliable distributed systems : through efficient run-time monitoring and predicate detection
- Creator
- Tekken Valapil, Vidhya
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- Runtime monitoring of distributed systems to perform predicate detection is a critical as well as challenging task. It is critical because it ensures the reliability of the system by detecting all possible violations of system requirements. It is challenging because guaranteeing the absence of violations requires analyzing every possible ordering of system events, which is expensive. In this report, we focus on ordering events in a system run using HLC (Hybrid Logical Clock) timestamps, which are O(1)-sized timestamps, and present some efficient algorithms to perform predicate detection using HLC. Since, with HLC, the runtime monitor cannot find all possible orderings of system events, we present a new type of clock called Biased Hybrid Logical Clocks (BHLC), which can find more possible orderings than HLC. We thus show that BHLC-based predicate detection can find more violations than HLC-based predicate detection. Since predicate detection based on either HLC or BHLC does not guarantee detection of all possible violations in a system run, we present an SMT (Satisfiability Modulo Theories) solver based predicate detection approach that does guarantee detection of all possible violations. While a runtime monitor that performs predicate detection using SMT solvers is accurate, the time taken by the solver to detect the presence or absence of a violation can be high. To reduce this time, we propose an efficient two-layered monitoring approach, where the first layer of the monitor is efficient but less accurate and the second layer is accurate but less efficient. Together, they drastically reduce the overall time taken to perform predicate detection while still guaranteeing detection of all possible violations.
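The HLC timestamps referred to above are O(1)-sized pairs (l, c) of a physical clock reading and a logical counter. A minimal sketch of the standard send/receive update rules (simplified; not the thesis's monitoring code):

```python
import time
from dataclasses import dataclass

@dataclass
class HLC:
    """Hybrid Logical Clock: an O(1) timestamp (l, c) combining physical
    time l with a logical counter c to capture causality."""
    l: int = 0
    c: int = 0

    def now(self) -> int:
        return int(time.time() * 1000)      # physical clock in milliseconds

    def send(self):
        """Advance the clock for a local or send event."""
        pt = self.now()
        if pt > self.l:
            self.l, self.c = pt, 0
        else:
            self.c += 1
        return (self.l, self.c)

    def recv(self, ml, mc):
        """Merge a received timestamp (ml, mc) into the local clock."""
        pt = self.now()
        new_l = max(self.l, ml, pt)
        if new_l == self.l == ml:
            self.c = max(self.c, mc) + 1
        elif new_l == self.l:
            self.c += 1
        elif new_l == ml:
            self.c = mc + 1
        else:
            self.c = 0
        self.l = new_l
        return (self.l, self.c)

a, b = HLC(), HLC()
t1 = a.send()          # event on node A
t2 = b.recv(*t1)       # node B receives A's message
assert t2 > t1         # timestamps respect causality
print(t1, t2)
```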
- Title
- Novel computational approaches to investigate microbial diversity
- Creator
- Zhang, Qingpeng
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
- Species diversity is an important measurement of ecological communities. Scientists believe that there is a strong relationship between species diversity and ecosystem processes. However, efforts to investigate microbial diversity using whole-genome shotgun read data are still scarce. With novel applications of data structures and the development of novel algorithms, we first developed an efficient k-mer counting approach, along with approaches enabling scalable streaming analysis of large and error-prone short-read shotgun data sets. Building on these efforts, we then developed a statistical framework allowing for scalable diversity analysis of large, complex metagenomes without the need for assembly or reference sequences. This method is evaluated on multiple large metagenomes from different environments, such as seawater, the human microbiome, and soil. Given the rapid growth of sequencing data, this method is promising for analyzing highly diverse samples with relatively low computational requirements. Further, as the method does not depend on reference genomes, it also provides opportunities to tackle the large amounts of unknowns found in metagenomic datasets.
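Memory-bounded k-mer counting of the kind described here is commonly built on probabilistic counters. A count-min sketch version, as a rough illustration (parameters and hashing scheme are arbitrary choices, not the thesis's implementation):

```python
import hashlib

class CountMinSketch:
    """Fixed-memory approximate counter: counts are overestimates only."""
    def __init__(self, width=1 << 16, depth=4):
        self.width, self.depth = width, depth
        self.tables = [[0] * width for _ in range(depth)]

    def _indexes(self, item: str):
        for d in range(self.depth):
            h = hashlib.blake2b(item.encode(), salt=bytes([d] * 8)).digest()
            yield d, int.from_bytes(h[:8], "big") % self.width

    def add(self, item):
        for d, i in self._indexes(item):
            self.tables[d][i] += 1

    def count(self, item):
        return min(self.tables[d][i] for d, i in self._indexes(item))

def count_kmers(reads, k, sketch):
    """Stream every k-mer of every read into the sketch."""
    for read in reads:
        for i in range(len(read) - k + 1):
            sketch.add(read[i:i + k])

sketch = CountMinSketch()
count_kmers(["ACGTACGTAC", "ACGTACGAAC"], k=4, sketch=sketch)
print(sketch.count("ACGT"))   # approximate abundance of one k-mer
```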
- Title
- On design and implementation of fast & secure network protocols for datacenters
- Creator
- Munir, Ali (Graduate of Michigan State University)
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
- My PhD work focuses on improving the performance and security of networked systems. For network performance, my research focuses on scheduling and transport in datacenter networks; for network security, it focuses on multipath TCP security. To improve the performance of datacenter transport, I proposed PASE, a near-optimal and deployment-friendly transport protocol. To this end, I first identified the underlying strategies used by existing datacenter transports. Next, I showed that these strategies are complementary to each other, rather than substitutes, as they have different strengths and can address each other's limitations. Unfortunately, prior datacenter transports use only one of these strategies, and as a result they achieve either near-optimal performance or deployment friendliness, but not both. Based on this insight, I designed a datacenter transport protocol called PASE, which carefully synthesizes these strategies by assigning a different transport responsibility to each strategy. To further improve the performance of datacenter transport in multi-tenant networks, I proposed Stacked Congestion Control (SCC), which achieves performance isolation and objective scheduling simultaneously. SCC is a distributed host-based bandwidth allocation framework, in which an underlay congestion control layer handles contention among tenants, and a private congestion control layer for each tenant optimizes its performance objective. To the best of my knowledge, no prior work supported performance isolation and objective scheduling simultaneously. To improve task scheduling performance in datacenters, I proposed NEAT, a task scheduling framework that leverages information from the underlying network scheduler to make task placement decisions. Existing datacenter schedulers optimize either the placement of tasks or the scheduling of network flows, and the inconsistent assumptions of the two schedulers can compromise overall application performance. The core of NEAT is a task completion time predictor that estimates the completion time of a task under a given network condition and a given network scheduling policy. A distributed task placement framework then leverages the predicted task completion times to make placement decisions and minimize the average completion time of active tasks. To improve multipath TCP (MPTCP) security, I reported vulnerabilities in MPTCP that arise from cross-path interactions between MPTCP subflows. MPTCP allows two endpoints to use multiple paths between them simultaneously. An attacker eavesdropping on one MPTCP subflow can infer the throughput of other subflows, and can also inject forged MPTCP packets to change the priority of any subflow. An attacker can exploit these vulnerabilities to launch connection hijack attacks on paths to which he has no access, or to divert traffic from one path to other paths. My proposed vulnerability fixes, as changes to the MPTCP specification, guarantee that MPTCP is at least as secure as both TCP and the original MPTCP, and they have been adopted by the IETF.
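NEAT's central component, a predictor of task completion time under the current network schedule that drives placement, can be caricatured in a few lines (the cost model and all numbers are invented for illustration):

```python
# Toy NEAT-style placement: choose the node whose predicted task completion
# time (network transfer + queueing + compute) is smallest.
def predict_completion(task_bytes, node):
    """Hypothetical predictor: transfer time under the network scheduler's
    currently available bandwidth, plus queued and compute time."""
    transfer = task_bytes / node["avail_bw_Bps"]
    return transfer + node["queue_s"] + node["compute_s"]

nodes = {
    "n1": {"avail_bw_Bps": 50e6,  "queue_s": 0.8, "compute_s": 1.0},
    "n2": {"avail_bw_Bps": 200e6, "queue_s": 2.5, "compute_s": 1.0},
}
task_bytes = 400e6
best = min(nodes, key=lambda n: predict_completion(task_bytes, nodes[n]))
for n in nodes:
    print(n, round(predict_completion(task_bytes, nodes[n]), 2))
print("place task on:", best)   # network-aware choice, not just least-queued
```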
- Title
- Capturing bluetooth traffic in the wild : practical systems and privacy implications
- Creator
- Albazrqaoe, Wahhab
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
-
"Bluetooth wireless technology is today present in billions of smartphones, mobile devices, and portable electronics. With the prevalence of personal Bluetooth devices, a practical Bluetooth traffic sniffer is of increasing interest due to the following. First, it has been reported that a traffic sniffer is an essential, day-to-day tool for Bluetooth engineers and applications developers [4] [14]; and second, as the communication between Bluetooth devices is privacy-sensitive in nature,...
Show more"Bluetooth wireless technology is today present in billions of smartphones, mobile devices, and portable electronics. With the prevalence of personal Bluetooth devices, a practical Bluetooth traffic sniffer is of increasing interest due to the following. First, it has been reported that a traffic sniffer is an essential, day-to-day tool for Bluetooth engineers and applications developers [4] [14]; and second, as the communication between Bluetooth devices is privacy-sensitive in nature, exploring the possibility of Bluetooth traffic sniffing in practical settings sheds lights into potential user privacy leakage. To date, sniffing Bluetooth traffic has been widely considered an extremely intricate task due to wideband spread spectrum of Bluetooth, pseudo-random frequency hopping adopted by Bluetooth at baseband, and the interference in the open 2.4 GHz band. This thesis addresses these challenges by introducing novel traffic sniffers that capture Bluetooth packets in practical environments. In particular, we present the following systems. (i) BlueEar, the first practical Bluetooth traffic sniffing system only using general, inexpensive wireless platforms. BlueEar features a novel dual-radio architecture where two inexpensive, Bluetooth-compliant radios coordinate with each other to eavesdrop on hopping subchannels in indiscoverable mode. Statistic models and lightweight machine learning tools are integrated to learn the adaptive hopping behavior of the target. Our results show that BlueEar maintains a packet capture rate higher than 90% consistently in dynamic settings. In addition, we discuss the implications of the BlueEar approach on Bluetooth LE sniffing and present a practical countermeasure that effectively reduces the packet capture rate of sniffer by 70%, which can be easily implemented on the Bluetooth master while requiring no modification to slave devices like keyboards and headsets. And (ii) BlueFunnel, the first low-power, wideband traffic sniffer that monitors Bluetooth spectrum in parallel and captures packet in realtime. BlueFunnel tackles the challenge of wideband spread spectrum based on low speed, low cost ADC (2 Msamples/sec) to subsample Bluetooth spectrum. Further, it leverages a suite of novel signal processing algorithms to demodulate Bluetooth signal in realtime. We implement BlueFunnel prototype based on USRP2 devices. Specifically, we employ two USRR2 devices, each is equipped with SBX daughterboard, to build a customized software radio platform. The customized SDR platform is interfaced to the controller, which implements the digital signal processing algorithms on a personal laptop. We evaluate the system performance based on packet capture rates in a variety of interference conditions, mainly introduce by the 802.11-based WLANs. BlueFunnel maintains good levels of packet capture rates in all settings. Further, we introduce two scenarios of attacks against Bluetooth, where BlueFunnel successfully reveals sensitive information about the target link."--Pages ii-iii.
- Title
- Measurement and modeling of large scale networks
- Creator
- Shafiq, Muhammad Zubair
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
The goal of this thesis is to identify measurement, modeling, and optimization opportunities for large scale networks -- with specific focus on cellular networks and online social networks. These networks are facing unprecedented operational challenges due to their very large scale.

Cellular networks have experienced an explosive increase in traffic volume over the last few years. This unprecedented increase is attributed to the growing subscriber base, improving network connection speeds, and improving hardware and software capabilities of modern smartphones. In contrast to traditional fixed IP networks, mobile network operators are faced with the constraint of a limited radio frequency spectrum at their disposal. As communication technologies evolve beyond 3G to Long Term Evolution (LTE), the competition for the limited radio frequency spectrum is becoming even more intense. Therefore, mobile network operators increasingly focus on optimizing different aspects of the network through customized design and management to improve key performance indicators (KPIs).

Online social networks are growing at a very rapid pace while trying to provide more content-rich and interactive services to their users. For instance, Facebook currently has more than 1.2 billion monthly active users and offers news feed, graph search, groups, photo sharing, and messaging services. The information for such a large user base cannot be efficiently and securely managed by traditional database systems. Social network service providers are deploying novel large scale infrastructure to cope with these scaling challenges.

In this thesis, I present novel approaches to tackle these challenges by revisiting the current practices for the design, deployment, and management of large scale network systems using a combination of theoretical and empirical methods. I take a data-driven approach in which the theoretical and empirical analyses are intertwined. First, I measure and analyze the trends in the data and then model the identified trends using suitable parametric models. Finally, I rigorously evaluate the developed models and the resulting system design prototypes using extensive simulations, realistic testbed environments, or real-world deployment. This methodology is used to address several problems related to cellular networks and online social networks.
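As a toy illustration of the measure-then-model step, the sketch below fits a simple parametric model (exponential growth) to a measured traffic-volume series. The synthetic data and the choice of model are assumptions for demonstration only, not figures or models from the thesis.

```python
# Fit a parametric growth model to a (synthetic) monthly traffic series.
import numpy as np
from scipy.optimize import curve_fit

def exp_growth(t, v0, r):
    """Traffic volume model: v(t) = v0 * exp(r * t)."""
    return v0 * np.exp(r * t)

months = np.arange(12)
volume = 50 * np.exp(0.08 * months) * (1 + 0.05 * np.random.randn(12))

(v0, r), _ = curve_fit(exp_growth, months, volume, p0=(volume[0], 0.1))
print(f"fitted monthly growth rate: {r:.3f}")
```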
- Title
- Near duplicate image search
- Creator
- Li, Fengjie
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
Information retrieval addresses the fundamental problem of how to identify the objects in a database that satisfy the information needs of users. Facing information overload, the major challenge in search algorithm design is to ensure that useful information can be found both accurately and efficiently in large databases.

To address this challenge, different indexing and retrieval methods have been proposed for different types of data, namely sparse data (e.g., documents), dense data (e.g., dense feature vectors), and bags of features (e.g., images represented by local features). For sparse data, inverted indexes and document retrieval models have proven very effective for large scale retrieval problems. For dense data and bag-of-features data, however, there are still open problems. For example, Locality Sensitive Hashing, a state-of-the-art method for searching high dimensional vectors, often fails to make a good tradeoff between precision and recall: it tends to achieve high precision with low recall, or vice versa. The bag-of-words model, a popular approach for searching objects represented as bags of features, has limited performance because of the information loss during the quantization procedure.

Since the general problem of searching objects represented as dense vectors and bags of features may be too challenging, in this dissertation we focus on near duplicate search, in which the matched objects are almost identical to the query. By effectively exploiting the statistical properties of near duplicates, we are able to design more effective indexing schemes and search algorithms. Thus, the focus of this dissertation is to design new indexing methods and retrieval algorithms for near duplicate search in large scale databases that accurately capture data similarity and deliver more accurate and efficient search. Below, we summarize the main contributions of this dissertation.

Our first contribution is a new algorithm for searching near duplicate bag-of-features data. The proposed algorithm, named random seeding quantization, is more efficient in generating bag-of-words representations for near duplicate images. The new scheme is motivated by approximating the optimal partial matching between bags of features, and thus produces a bag-of-words representation that captures the true similarities of the data, leading to more accurate and efficient retrieval of bag-of-features data.

Our second contribution, termed Random Projection Filtering (RPF), is a search algorithm designed for efficient near duplicate vector search. By explicitly exploiting the statistical properties of near duplicates, the algorithm projects high dimensional vectors into a lower dimensional space and filters out irrelevant items. This effective filtering procedure makes RPF more accurate and efficient at identifying near duplicate objects in databases.

Our third contribution is to develop and evaluate a new randomized range search algorithm for near duplicate vectors in high dimensional spaces, termed Random Projection Search. Unlike RPF, the algorithm presented in this chapter is suitable for a wider range of applications because it does not require sparsity constraints for high search accuracy. The key idea is to project both the data points and the query point into a one-dimensional space by a random projection, and to perform a one-dimensional range search, using binary search, to find the subset of data points that are within the range of a given query. We prove a theoretical guarantee for the proposed algorithm and evaluate its empirical performance on a dataset of 1.1 billion image features.
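A minimal sketch of the random projection range search idea described above: project the data and the query onto one random unit direction, binary-search the sorted projections for the window that must contain every true neighbor (for a unit vector a, ||x - q|| <= r implies |a.x - a.q| <= r), then verify candidates exactly. The class and names are illustrative assumptions, not the dissertation's code.

```python
import bisect
import numpy as np

class RandomProjectionIndex:
    def __init__(self, data: np.ndarray, seed: int = 0):
        rng = np.random.default_rng(seed)
        a = rng.standard_normal(data.shape[1])
        self.direction = a / np.linalg.norm(a)   # unit projection vector
        self.data = data
        proj = data @ self.direction             # 1-D projections
        self.order = np.argsort(proj)
        self.sorted_proj = proj[self.order]

    def range_search(self, query: np.ndarray, radius: float) -> list[int]:
        q = query @ self.direction
        # Binary search for the projection window [q - r, q + r].
        lo = bisect.bisect_left(self.sorted_proj, q - radius)
        hi = bisect.bisect_right(self.sorted_proj, q + radius)
        candidates = self.order[lo:hi]           # superset of true results
        # Verify candidates with exact distances.
        dists = np.linalg.norm(self.data[candidates] - query, axis=1)
        return candidates[dists <= radius].tolist()

data = np.random.default_rng(1).standard_normal((10000, 64))
index = RandomProjectionIndex(data)
print(index.range_search(data[42], radius=0.5))  # contains 42
```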
- Title
- The evolutionary potential of populations on complex fitness landscapes
- Creator
- Bryson, David Michael
- Date
- 2012
- Collection
- Electronic Theses & Dissertations
- Description
-
Evolution is a highly contingent process, where the quality of the solutions produced is affected by many factors. I explore and describe the contributions of three such aspects that influence overall evolutionary potential: the prior history of a population, the type and frequency of mutations that the organisms are subject to, and the composition of the underlying genetic hardware. I have systematically tested changes to a digital evolution system, Avida, measuring evolutionary potential in seven different computational environments that range in the complexity of their underlying fitness landscapes. I have examined the trends and general principles that these measurements demonstrate and used my results to optimize the evolutionary potential of the system, broadly enhancing performance. The results of this work show that history and mutation rate play significant roles in evolutionary potential, but the final fitness levels of populations are remarkably stable under substantial changes to the genetic hardware and a broad range of mutation types.
- Title
- Scheduling for CPU Packing and node shutdown to reduce the energy consumption of high performance computing centers
- Creator
- Vudayagiri, Srikanth Phani
- Date
- 2010
- Collection
- Electronic Theses & Dissertations
- Description
-
During the past decade, there has been tremendous growth in the high performance computing and data center arenas. The huge energy requirements in these sectors have prompted researchers to investigate possible ways to reduce their energy consumption. Reducing energy consumption is beneficial to an organization not only economically but also environmentally. In this thesis, we focus our attention on high performance scientific computing clusters. We first perform experiments with the CPU Packing feature available in Linux using programs from the SPEC CPU2000 suite. We then look at an energy-aware scheduling algorithm for the cluster that assumes CPU Packing is enabled on all nodes. Using simulations, we compare the scheduling done by this algorithm to that done by the existing, commercial Moab scheduler in the cluster. We experiment with the Moab Green Computing feature and, based on our observations, implement the shutdown mechanism used by Moab in our simulations. Our results show that Moab Green Computing can provide about a 13% energy savings on average for the HPC cluster without any noticeable decrease in job performance.
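An illustrative sketch of the packing-plus-shutdown idea: greedily place each job on an already-active node with enough free cores, consolidating load so that the remaining nodes can stay powered off. The node size, job model, and best-fit policy are assumptions for illustration, not the Moab scheduler or the thesis simulator.

```python
# Toy energy-aware scheduler: pack jobs onto as few nodes as possible and
# count how many nodes can remain shut down.
def schedule(jobs_cores: list[int], nodes: int, cores_per_node: int = 8):
    free = []                      # free cores on each *active* node
    placement = []
    for need in jobs_cores:
        # Best fit: prefer the active node that would be left fullest.
        best = min((i for i, f in enumerate(free) if f >= need),
                   key=lambda i: free[i] - need, default=None)
        if best is None:
            if len(free) == nodes:
                raise RuntimeError("cluster full")
            free.append(cores_per_node)          # power on another node
            best = len(free) - 1
        free[best] -= need
        placement.append(best)
    idle = nodes - len(free)       # nodes that can stay shut down
    return placement, idle

placement, idle = schedule([4, 2, 2, 6], nodes=4)
print(placement, idle)             # -> [0, 0, 0, 1] 2
```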
- Title
- Statistical and learning algorithms for the design, analysis, measurement, and modeling of networking and security systems
- Creator
- Shahzad, Muhammad (College teacher)
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
"The goal of this thesis is to develop statistical and learning algorithms for the design, analysis, measurement, and modeling of networking and security systems with specific focus on RFID systems, network performance metrics, user security, and software security. Next, I give a brief overview of these four areas of focus." -- Abstract.