Search results (1–4 of 4)
 Title
 Hardware algorithms for high-speed packet processing
 Creator
 Norige, Eric
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

The networking industry is facing enormous challenges of scaling devices to support the exponential growth of internet traffic as well as the increasing number of features being implemented inside the network. Algorithmic hardware improvements to networking components have largely been neglected due to the ease of leveraging increased clock frequency and compute power and the risks of implementing complex hardware designs. As clock frequency slows its growth, algorithmic solutions become important to fill the gap between current-generation capability and next-generation requirements. This paper presents algorithmic solutions to networking problems in three domains: Deep Packet Inspection (DPI), firewall (and other) ruleset compression, and non-cryptographic hashing. The improvements in DPI are two-pronged: first, in the area of application-level protocol field extraction, which allows security devices to precisely identify packet fields for targeted validity checks. By using counting automata, we achieve precise parsing of non-regular protocols with small, constant per-flow memory requirements, extracting at rates of up to 30 Gbps on real traffic in software while using only 112 bytes of state per flow. The second DPI improvement is on the long-standing regular expression matching problem, where we complete the HFA solution to the DFA state explosion problem with efficient construction algorithms and an optimized memory layout for hardware or software implementation. These methods construct, in seconds, automata too complex for previous methods, while being capable of 29 Gbps throughput with an ASIC implementation. Firewall ruleset compression enables more firewall entries to be stored in a fixed-capacity pattern matching engine, and can also be used to reorganize a firewall specification for higher-performance software matching. A novel recursive structure called TUF is given to unify the best known solutions to this problem and suggest future avenues of attack.
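The counting-automata idea above can be illustrated on a toy length-prefixed format (invented here, not a protocol from the thesis): a plain finite automaton cannot recognize `<length><payload>` messages, but attaching one counter to an automaton state handles them with constant-size per-flow state, even when messages are split across packets.

```python
# Toy counting-automaton parser: messages are <1-byte length><payload>,
# a non-regular language a plain finite automaton cannot parse.
# Per-flow state is just (mode, counter, field buffer) -- constant size.

def make_flow_state():
    return {"mode": "LEN", "count": 0, "field": bytearray()}

def feed(state, chunk: bytes, on_field):
    """Consume bytes as they arrive on the wire, possibly mid-message."""
    for b in chunk:
        if state["mode"] == "LEN":
            state["count"] = b            # counter loaded from the wire
            state["field"].clear()
            state["mode"] = "PAYLOAD" if b else "LEN"
        else:
            state["field"].append(b)
            state["count"] -= 1           # decrement until field complete
            if state["count"] == 0:
                on_field(bytes(state["field"]))
                state["mode"] = "LEN"

fields = []
st = make_flow_state()
feed(st, b"\x05hel", fields.append)       # message split across packets
feed(st, b"lo\x03abc", fields.append)
print(fields)                             # [b'hello', b'abc']
```

The parser resumes cleanly at packet boundaries because all parse progress lives in the tiny per-flow state, which is the property that keeps memory per flow small and constant.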
These algorithms, with little tuning, achieve a 13.7% improvement in compression on large, real-life classifiers, and can achieve the same results as existing algorithms while running 20 times faster. Finally, non-cryptographic hash functions can be used for anything from hash tables that track network flows to packet sampling for traffic characterization. We give a novel approach to generating hardware hash functions in between the extremes of expensive cryptographic hash functions and low-quality linear hash functions. To evaluate these mid-range hash functions properly, we develop new evaluation methods to better distinguish non-cryptographic hash function quality. The hash functions described in this paper achieve low-latency, wide hashing with good avalanche and universality properties at a much lower cost than existing solutions.
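The avalanche property mentioned above is straightforward to measure empirically: flipping one input bit should flip each output bit with probability near 1/2. A minimal sketch of such an evaluation, using a generic xorshift-multiply mixer as a stand-in for a hash under test (not one of the thesis's hash functions):

```python
import random

def toy_hash(x: int) -> int:
    """A simple 32-bit mixer (xorshift-multiply), stand-in for a hash under test."""
    x &= 0xFFFFFFFF
    x ^= x >> 16
    x = (x * 0x45d9f3b) & 0xFFFFFFFF
    x ^= x >> 16
    return x

def avalanche_bias(h, in_bits=32, out_bits=32, trials=2000, seed=0):
    """Worst-case deviation of any (input bit, output bit) pair's flip
    probability from the ideal 0.5; near 0 means good diffusion."""
    rng = random.Random(seed)
    flips = [[0] * out_bits for _ in range(in_bits)]
    for _ in range(trials):
        x = rng.getrandbits(in_bits)
        base = h(x)
        for i in range(in_bits):
            diff = base ^ h(x ^ (1 << i))
            for j in range(out_bits):
                flips[i][j] += (diff >> j) & 1
    probs = (flips[i][j] / trials for i in range(in_bits) for j in range(out_bits))
    return max(abs(p - 0.5) for p in probs)

bias = avalanche_bias(toy_hash)
print(f"max avalanche bias: {bias:.3f}")
```

The identity function scores the worst possible bias of exactly 0.5 (each input bit flips exactly one output bit, always), which is the kind of gap these evaluation methods are designed to expose.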
 Title
 MEASURING AND MODELING THE EFFECTS OF SEA LEVEL RISE ON NEAR-COASTAL RIVERINE REGIONS: A GEOSPATIAL COMPARISON OF THE SHATT AL-ARAB RIVER IN SOUTHERN IRAQ WITH THE MISSISSIPPI RIVER DELTA IN SOUTHERN LOUISIANA, USA.
 Creator
 Kadhim, Ameen Awad
 Date
 2018
 Collection
 Electronic Theses & Dissertations
 Description

There is a growing debate among scientists on how sea level rise (SLR) will impact coastal environments, particularly in countries where economic activities are sustained along these coasts. An important factor in this debate is how best to characterize coastal environmental impacts over time. This study investigates the measurement and modeling of SLR and its effects on near-coastal riverine regions, using a variety of data sources, including satellite imagery from 1975 to 2017, digital elevation data, and previous studies. The research focuses on two of these important regions: southern Iraq along the Shatt Al-Arab River (SAR) and the southern United States in Louisiana along the Mississippi River Delta (MRD). These sites are important both for their extensive low-lying land and for their significant coastal economic activities. The dissertation consists of six chapters. Chapter one introduces the topic. Chapter two compares and contrasts both regions and evaluates escalating SLR risk. Chapter three develops a coupled human and natural system (CHANS) perspective for the Shatt Al-Arab river region (SARR) to reveal multiple sources of environmental degradation in this region. Half a century ago, SARR was an important and productive region in Iraq that produced fruits like dates, as well as crops, vegetables, and fish. By 1975 the environment of this region had begun to deteriorate, and it is well documented that SARR has since suffered from human and natural problems. In this chapter, I use the CHANS perspective to identify the problems and which systems (human or natural) are especially responsible for environmental degradation in SARR. I use several measures of ecological, economic, and social systems to outline the problems identified through the CHANS framework.
SARR has experienced extreme weather changes from 1975 to 2017, resulting in lower precipitation (by 17 mm) and humidity (by 5.6%), higher temperatures (by 1.6 °C), and sea level rise, which are affecting the salinity of groundwater and of Shatt Al-Arab river water. At the same time, human systems in SARR experienced many problems, including eight years of war between Iraq and Iran, the first Gulf War, UN Security Council sanctions against Iraq, and the second Gulf War. I modeled and analyzed the region's land cover between 1975 and 2017 to understand how the environment has been affected, and found that, relative to the other factors, climate change is responsible for what happened in this region. Chapter four constructs and applies an error propagation model to elevation data in the Mississippi River Delta region (MRDR). This modeling both reduces and accounts for the effects of digital elevation model (DEM) error on a bathtub inundation model used to predict SLR risk in the region. Digital elevation data are essential for estimating coastal vulnerability to flooding due to sea level rise. Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global is considered the best free global digital elevation dataset available. However, inundation estimates from SRTM are subject to uncertainty due to inaccuracies in the elevation data. Small systematic errors in low, flat areas can generate large errors in inundation models, and SRTM is subject to positive bias in the presence of vegetation canopy, such as along channels and within marshes. In this study, I conduct an error assessment and develop statistical error modeling for SRTM to improve the quality of elevation data in these at-risk regions. Chapter five applies the MRDR-based model from chapter four to enhance the SRTM 1 Arc-Second Global DEM data in SARR. As such, it is the first study to account for data uncertainty in the evaluation of SLR risk in this sensitive region.
This study transfers an error propagation model from MRDR to the Shatt Al-Arab river region to understand the impact of DEM error on an inundation model in this sensitive region. The error propagation model involves three stages. First, a multiple regression model, parameterized from MRDR, is used to generate an expected DEM error surface for SARR. This surface is subtracted from the SRTM DEM for SARR to adjust it. Second, residuals from this model are simulated for SARR: these are mean-zero and spatially autocorrelated, with a Gaussian covariance model matching that observed in MRDR, generated by convolution filtering of random noise. More than 50 realizations of error were simulated to ensure a stable result. These realizations were subtracted from the adjusted SRTM to produce DEM realizations capturing potential variation. Third, the DEM realizations are each used in bathtub modeling to estimate the flooded area in the region under 1 m of sea level rise. The distribution of flooding estimates shows the impact of DEM error on uncertainty in inundation likelihood and on the magnitude of total flooding. Using the adjusted DEM realizations, 47 ± 2 percent of the region is predicted to flood, while using the raw SRTM DEM only 28% of the region is predicted to flood.
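The three-stage workflow described above (adjust the DEM by the expected error, simulate autocorrelated residual realizations by convolution filtering of noise, and run a bathtub model on each realization) can be sketched as follows; the synthetic grid, bias value, and smoothing kernel are illustrative placeholders, not the study's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative 200x200 low-lying coastal DEM in metres -- synthetic, not SRTM.
dem = rng.uniform(0.0, 3.0, size=(200, 200))

# Stage 1: subtract an expected-error surface (a constant positive bias here,
# standing in for the regression-predicted vegetation bias).
adjusted = dem - 0.5

# Stage 2: mean-zero, spatially autocorrelated residuals via convolution
# filtering of white noise (moving-average kernel as a covariance stand-in).
def simulate_residual(shape, std=0.4, k=9):
    noise = rng.standard_normal(shape)
    kern = np.ones(k) / k
    smooth = np.apply_along_axis(lambda v: np.convolve(v, kern, "same"), 0, noise)
    smooth = np.apply_along_axis(lambda v: np.convolve(v, kern, "same"), 1, smooth)
    return (smooth - smooth.mean()) * (std / smooth.std())

# Stage 3: bathtub model -- fraction of cells at or below the water level.
def flooded_fraction(elev, slr=1.0):
    return float(np.mean(elev <= slr))

fractions = [flooded_fraction(adjusted - simulate_residual(dem.shape))
             for _ in range(50)]
print(f"adjusted DEM realizations: {np.mean(fractions):.1%} "
      f"+/- {np.std(fractions):.1%} flooded")
print(f"raw DEM:                   {flooded_fraction(dem):.1%} flooded")
```

A real bathtub model would additionally enforce hydrologic connectivity to the sea; this sketch only thresholds elevation, which is enough to show how the spread of the 50 realizations quantifies the DEM-error contribution to inundation uncertainty.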
 Title
 Safe Control Design for Uncertain Systems
 Creator
 Marvi, Zahra
 Date
 2021
 Collection
 Electronic Theses & Dissertations
 Description

This dissertation investigates the problem of safe control design for systems under model and environmental uncertainty. Reinforcement learning (RL) provides an interactive learning framework in which the optimal controller is sequentially derived based on instantaneous reward. Although powerful, safety considerations are a barrier to the wide deployment of RL algorithms in practice. To overcome this problem, we propose an iterative safe off-policy RL algorithm. The cost function that encodes the designer's objectives is augmented with a control barrier function (CBF) to ensure safety and optimality. The proposed formulation provides look-ahead, proactive safety planning, in which safety is planned and optimized along with performance to minimize intervention with the optimal controller. Extensive safety and stability analysis is provided, and the proposed method is implemented using the off-policy algorithm without requiring complete knowledge of the system dynamics. This line of research is then extended to guarantee safety and stability even during the data collection and exploration phases, in which random noisy inputs are applied to the system. However, satisfying the safety of actions when little is known about the system dynamics is a daunting challenge. We present a novel RL scheme that ensures the safety and stability of linear systems during the exploration and exploitation phases. This is obtained through concurrent model learning and control, in which an efficient learning scheme is employed to prescribe the learning behavior. This characteristic is then employed to apply only safe and stabilizing controllers to the system. First, the prescribed errors are employed in a novel adaptive robustified control barrier function (ARCBF), which guarantees that the states of the system remain in the safe set even when the learning is incomplete.
Therefore, the noisy input in the exploratory data collection phase and the optimal controller in the exploitation phase are minimally altered such that the ARCBF criterion is satisfied and, therefore, safety is guaranteed in both phases. It is shown that under the proposed prescribed RL framework, the model learning error is a vanishing perturbation to the original system; therefore, a stability guarantee is also provided even during exploration, when noisy random inputs are applied to the system. A learning-enabled, barrier-certified safe controller for systems that operate in a shared and uncertain environment is then presented. A safety-aware loss function is defined and minimized to learn the uncertain and unknown behavior of external agents that affect the safety of the system. The loss function is defined based on the safe set error, instead of the system model error, and is minimized over both current samples and past samples stored in memory to ensure a fast and generalizable learning algorithm for approximating the safe set. The proposed model learning and CBF are then integrated to form a learning-enabled zeroing CBF (LZCBF), which employs the approximated trajectory information of the external agents provided by the learned model, but shrinks the safety boundary in case of an imminent safety violation using instantaneous sensory observations. It is shown that the proposed LZCBF assures safety during learning, even in the face of inaccurate or simplified approximation of external agents, which is crucial in highly interactive environments. Finally, the cooperative capability of agents in a multi-agent environment is investigated for the sake of safety guarantees. CBFs and information-gap theory are integrated to obtain robust safe controllers for multi-agent systems with different levels of measurement accuracy.
A cooperative framework for the construction of CBFs for every pair of agents is employed to maximize the horizon of uncertainty under which the safety of the overall system is satisfied. Information-gap theory is leveraged to determine the contribution and share of each agent in the construction of the CBFs. This results in the highest possible robustness against measurement uncertainty. By employing the proposed approach to constructing CBFs, a larger horizon of uncertainty can be safely tolerated, and even the failure of one agent to gather accurate local data can be compensated for by cooperation between agents. The effectiveness of the proposed methods is extensively examined in simulation results.
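The "minimally altered" controller idea that recurs throughout this abstract can be seen in the simplest zeroing-CBF setting: the CBF condition defines a half-space of safe controls, and the nominal control is projected onto it only when it would violate the condition. A minimal single-integrator sketch with a circular obstacle follows; the dynamics, obstacle, and gain are invented for illustration, and the thesis's ARCBF/LZCBF constructions are considerably more elaborate:

```python
import numpy as np

# Zeroing-CBF safety filter for a single integrator x_dot = u, keeping the
# state outside a circular obstacle. For one affine constraint the QP
# "minimally perturb u_nom so that h_dot >= -alpha*h" has a closed form.

X_OBS = np.array([0.0, 0.0])     # obstacle center (assumed)
R = 1.0                          # obstacle radius (assumed)
ALPHA = 1.0                      # class-K gain (assumed)

def h(x):
    """Barrier: positive outside the obstacle, zero on its boundary."""
    return float(np.dot(x - X_OBS, x - X_OBS) - R**2)

def safe_control(x, u_nom):
    """Project u_nom onto the half-space {u : grad_h(x) . u >= -ALPHA*h(x)}."""
    a = 2.0 * (x - X_OBS)        # gradient of h for the single integrator
    b = -ALPHA * h(x)
    if a @ u_nom >= b:           # nominal control already satisfies the CBF
        return u_nom
    return u_nom + (b - a @ u_nom) / (a @ a) * a

# Simulate: the nominal controller drives straight through the obstacle.
x = np.array([-3.0, 0.05])
goal = np.array([3.0, 0.05])
dt = 0.01
for _ in range(2000):
    u_nom = goal - x             # simple proportional nominal control
    x = x + dt * safe_control(x, u_nom)
    assert h(x) > 0.0            # safety maintained along the trajectory
print("final state:", np.round(x, 2), " h:", round(h(x), 2))
```

Because the filter only adds a component along the gradient of h, the tangential part of the nominal control is preserved and the state slides around the obstacle toward the goal; the nominal controller is untouched whenever it is already safe, which is the "minimal intervention" property the abstract emphasizes.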
 Title
 TENSOR LEARNING WITH STRUCTURE, GEOMETRY AND MULTIMODALITY
 Creator
 Sofuoglu, Seyyid Emre
 Date
 2022
 Collection
 Electronic Theses & Dissertations
 Description

With the advances in sensing and data acquisition technology, it is now possible to collect data from different modalities and sources simultaneously. Most of these data are multidimensional in nature and can be represented by multiway arrays known as tensors. For instance, a color image is a third-order tensor defined by two indices for the spatial variables and one index for the color mode. Other examples include color video, medical imaging such as EEG and fMRI, spatiotemporal data encountered in urban traffic monitoring, etc. In the past two decades, tensors have become ubiquitous in signal processing, statistics, and computer science. Traditional unsupervised and supervised learning methods developed for one-dimensional signals do not translate well to higher-order data structures, as they become computationally prohibitive with increasing dimensionality. Vectorizing high-dimensional inputs creates problems in nearly all machine learning tasks due to exponentially increasing dimensionality, distortion of the data structure, and the difficulty of obtaining a sufficiently large training sample size. In this thesis, we develop tensor-based approaches to various machine learning tasks. Existing tensor-based unsupervised and supervised learning algorithms extend many well-known algorithms, e.g. 2D component analysis, support vector machines, and linear discriminant analysis, with better performance and lower computational and memory costs. Most of these methods rely on Tucker decomposition, which has exponential storage complexity requirements; CANDECOMP/PARAFAC (CP) based methods, which might not have a solution; or Tensor Train (TT) based solutions, which suffer from exponentially increasing ranks. Many tensor-based methods have quadratic (w.r.t. the size of the data) or higher computational complexity, and similarly high memory complexity. Moreover, existing tensor-based methods are not always designed with the particular structure of the data in mind.
Many of the existing methods use purely algebraic measures as their objective, which might not capture the local relations within the data. Thus, there is a need to develop new models with better computational and memory efficiency, designed with the particular structure of the data and problem in mind. Finally, as tensors represent the data more faithfully to the original structure than vectorization, they also allow coupling of heterogeneous data sources where the underlying physical relationship is known. Still, most of the current work on coupled tensor decompositions does not explore supervised problems. In order to address the issues around the computational and storage complexity of tensor-based machine learning, in Chapter 2 we propose a new tensor train decomposition structure, which is a hybrid between the Tucker and Tensor Train decompositions. The proposed structure is used to implement Tensor Train based supervised and unsupervised learning frameworks: linear discriminant analysis (LDA) and graph regularized subspace learning. The algorithm is designed to solve extremal eigenvalue-eigenvector pair computation problems, and can be generalized to many other methods. The supervised framework, Tensor Train Discriminant Analysis (TTDA), is evaluated on a classification task at varying storage complexities, with respect to classification accuracy and training time, on four different datasets. The unsupervised approach, Graph Regularized TT, is evaluated on a clustering task with respect to clustering quality and training time at various storage complexities. Both frameworks are compared to discriminant analysis algorithms with similar objectives based on the Tucker and TT decompositions. In Chapter 3, we present an unsupervised anomaly detection algorithm for spatiotemporal tensor data.
The algorithm models the anomaly detection problem as a low-rank plus sparse tensor decomposition, where the normal activity is assumed to be low-rank and the anomalies are assumed to be sparse and temporally continuous. We present an extension of this algorithm in which we add a graph regularization term to the objective function to preserve the underlying geometry of the original data. Finally, we propose a computationally efficient implementation of this framework by approximating the nuclear norm using graph total variation minimization. The proposed approach is evaluated on simulated data with varying levels of anomaly strength, anomaly length, and number of missing entries in the observed tensor, as well as on urban traffic data. In Chapter 4, we propose a geometric tensor learning framework using product graph structures for the tensor completion problem. Instead of purely algebraic measures such as rank, we use graph smoothness constraints that utilize the geometric or topological relations within the data. We prove the equivalence of a Cartesian graph structure to a TT-based graph structure under some conditions, and show empirically that the relaxations introduced by these conditions do not deteriorate the recovery performance. We also outline a fully geometric learning method on product graphs for data completion. In Chapter 5, we introduce a supervised learning method for heterogeneous data sources such as simultaneous EEG and fMRI. The proposed two-stage method first extracts features that take the coupling across modalities into account and then introduces kernelized support tensor machines for classification. We illustrate the advantages of the proposed method on simulated and real classification tasks with a small number of high-dimensional training samples.