Search results
(1 - 15 of 15)
- Title
- A framework for verification of transaction level models in systemc
- Creator
- Hajisheykhi, Reza
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
Due to their increasing complexity, today's SoC (System on Chip) systems are subject to a variety of faults (e.g., single-event upsets and component crashes), making their verification a highly important task. However, verification is a complex task, in part due to the large scale of integration of SoC systems and the different levels of abstraction provided by modern system design languages such as SystemC.

To facilitate the verification of SoC systems, this dissertation proposes an approach for verifying inter-component communication protocols in SystemC Transaction Level Modeling (TLM) programs. SystemC is a widely accepted language and an IEEE standard. It includes a C++ library of abstractions and a run-time kernel that simulates the specified system, thereby enabling the early development of embedded software for the system being designed. To enable and facilitate the communication of different components in SystemC, the Open SystemC Initiative (OSCI) has proposed an interoperability layer (on top of SystemC), called Transaction Level Modeling (TLM), that enables transaction-based interactions between the components of a system.

To verify SystemC TLM programs, we propose a method with five main steps: defining formal semantics, model extraction, fault modeling, model slicing, and model checking. To extract a formal model from a given SystemC TLM program, we first specify the requirements of a formal semantics that can capture SystemC TLM programs while still benefiting from automated techniques for verification and/or synthesis. Based on this intuition, we utilize two model extraction approaches that also take the architecture of the given program into account. In the first approach, we propose a set of transformation rules that extracts a Promela model from the SystemC TLM program. In the second approach, we discuss how to extract a timed automata model from the given program.

Given the formal model, we model and inject several types of faults into the models extracted from the SystemC TLM programs. For injecting faults, we have developed a tool, called UFIT, that takes a formal model and a desired fault type, and injects the faults into the model accordingly.

The models extracted from a SystemC TLM program are usually very complex, and injecting faults makes them even more complex. Hence, we utilize a model slicing technique to slice the models in the presence or absence of faults. We have developed a tool, called USlicer, that takes a formal model along with a set of properties to be verified, and generates a sliced model based on the given properties. The results show that the verification time and memory usage of the sliced model are significantly smaller than those of the original model. Moreover, in some cases where verification of the original formal model is not even possible, our model slicing technique makes verification possible in reasonable time and space.

We demonstrate the proposed approach on several SystemC transaction level case studies. In each case study, we explain each step of our approach in detail and discuss the results and improvements.
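The fault-modeling step described above can be pictured on a toy example: a fault type is modeled as extra transitions added to a formal model of the protocol. The transition-relation encoding and fault names below are our own illustration, not UFIT's actual input format.

```python
# Illustration only: faults as added transitions in a tiny transition system.
# The model encoding below is hypothetical, not UFIT's real input format.

def inject_fault(transitions, fault_type):
    """Return a new transition relation with fault transitions added."""
    faulty = set(transitions)
    if fault_type == "message_loss":
        # Wherever a message could be received, it may instead be dropped.
        for (src, label, dst) in transitions:
            if label.startswith("recv"):
                faulty.add((src, "drop", src))  # message lost, state unchanged
    elif fault_type == "crash":
        # Any state may transition to a dedicated crashed state.
        states = {s for (s, _, _) in transitions} | {d for (_, _, d) in transitions}
        for s in states:
            faulty.add((s, "crash", "CRASHED"))
    return faulty

# A two-state request/response protocol model.
model = {("idle", "send_req", "waiting"), ("waiting", "recv_resp", "idle")}
faulty_model = inject_fault(model, "message_loss")
```

A model checker run on `faulty_model` would then explore the dropped-message behaviors alongside the normal ones.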
- Title
- Mobility and communication in wireless robot and sensor networks
- Creator
- Pei, Yuanteng
- Date
- 2011
- Collection
- Electronic Theses & Dissertations
- Description
-
Mobility is a primary goal of many wireless communication systems. In recent years, mobile multi-hop wireless networks, such as mobile wireless sensor networks and wireless robot networks, have attracted increased attention and have been extensively studied. However, most current research does not consider the interdependence of communication and mobility, and much of it assumes an obstacle-free environment in its problem modeling and solving process.

In this dissertation, we discuss several research topics relevant to these two issues of communication and mobility in wireless robot and sensor networks. First, we present multi-robot real-time exploration, which calls for the joint consideration of mobility and communication: it requires that video and audio streams of a newly explored area be transmitted to the base station in a timely fashion as robots explore the area. Simulations show that our mobility model achieves both improved communication quality and enhanced exploration efficiency.

Second, we further investigate the above problem under two critical, real-world network conditions: (1) heterogeneous transmission ranges and link capacities, and (2) the impact of interference. These conditions increase the model complexity but significantly influence the actual available bandwidth and the number of nodes required in placement. We jointly consider relay placement and routing under these two conditions.

Third, we introduce an online relay deployment paradigm to support remote sensing and control when mobile nodes migrate farther from the base station in a cost-effective system of mobile robots, static sensors, and relays. A novel multi-robot real-time search method called STAtic Relay aided Search (STARS) is presented to allow robots to search in a known environment. Its solution is based on our near-optimal solution to a new variation of the multi-traveling salesman problem: precedence-constrained two traveling salesman (PC2TSP).

Fourth, we propose a heterogeneous multi-robot exploration strategy with online relay deployment for an unknown environment, called Bandwidth-aware Exploration with a Steiner Traveler (BEST). In BEST, a relay-deployment node (RDN) tracks the FNs' movement and places relays when necessary to support the aggregation of video/audio streams to the base station. This problem inherits characteristics of both the Steiner minimum tree and traveling salesman problems. Extensive simulations show that BEST further enhances exploration efficiency.

While the first four topics deal with communication and mobility issues in powerful but expensive robotic systems, the fifth topic focuses on a special type of low-cost, limited-capability mobile sensors called hopping sensors, whose unique method of movement makes them suitable for rugged terrains. We present (1) a distributed message forwarding model called Binary Splitting Message Forwarding (BSMF) and (2) a grid-based movement model unique to these hopping sensors. Simulation shows that our scheme significantly reduces communication overhead and achieves relatively constant total energy consumption under varying amounts of obstruction.

Finally, we discuss future directions of this research. We believe that heterogeneous mobile platforms supporting real-time stream transmission by mobile robots, static sensors, and communication devices have great potential in various civilian and military applications where both communication quality of service and mobility efficiency are critically important.
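The online relay deployment idea admits a back-of-the-envelope sanity check: if a robot has moved distance d from the base station and every radio link must span at most range r, a straight-line relay chain needs ceil(d/r) - 1 intermediate relays. The uniform-range, obstacle-free setting here is our simplification, not the dissertation's model.

```python
import math

def relays_needed(distance, radio_range):
    """Minimum relays to keep a robot connected to the base station,
    assuming relays sit on the straight line between them and every
    link must be within radio_range (a simplification of the
    obstacle-aware placement studied in the dissertation)."""
    if distance <= radio_range:
        return 0
    return math.ceil(distance / radio_range) - 1

# A robot 250 m away with 100 m radios needs 2 intermediate relays.
print(relays_needed(250, 100))  # → 2
```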
- Title
- Collaboration based spectrum sharing algorithms in cognitive radio networks
- Creator
- Hyder, Chowdhury Sayeed
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
Radio spectrum assignment to wireless providers using the traditional fixed allocation policy will no longer be a viable technique to meet the growing spectrum demand of emerging wireless applications. This is because, while the available pool of unassigned radio spectrum is low, the spectrum already assigned to existing applications is also often underutilized in time, frequency, and location. With features like transmission flexibility and adaptability, cognitive radio (CR) provides a useful means of spectrum sharing among growing numbers of users as an alternative to the current fixed policy. The cognitive radio network (CRN), based on the functionality of CR, consists of two types of users: primary users (PUs) and secondary users (SUs). Primary users are licensed users who have exclusive access rights to a fixed spectrum range. Secondary users are unlicensed users who opportunistically exploit spectrum holes or negotiate with primary users to earn transmission access rights.

Efficient CRN spectrum sharing algorithms build on different forms of collaboration between the PUs and the SUs (inter-user collaboration) and among the SUs themselves (intra-user collaboration). In the sensing-based collaboration model, the SUs sense licensed spectrum and collaboratively decide on its availability based on the sensing results, without any involvement from the PUs. In the relay-based collaboration model, the SUs coordinate with the PUs directly, relaying primary packets in exchange for transmission opportunities, and thus build a win-win cooperative framework to attain mutual benefits. In the auction-based collaboration model, the SUs bid for temporary or permanent usage rights of unused licensed spectrum bands that the PUs put up for auction. Each of these collaboration models faces different sets of challenges towards achieving high spectrum utilization. In this dissertation, we address some of these challenges and present a set of efficient spectrum sharing algorithms based on these collaboration models.

The first work in this dissertation addresses the spectrum sensing data falsification (SSDF) attack in IEEE 802.22 wireless regional area networks (WRANs) under the sensing-based collaboration model. We discuss different strategies by which one or more malicious users may manipulate sensing reports, and how these manipulation strategies may affect spectrum utilization. To defend against such attacks, we present an adaptive reputation-based clustering algorithm. The algorithm combines a clustering technique with feedback-based reputation adjustment to prevent independent and collaborative SSDF attacks, and quarantines the attackers from the decision-making process.

Our next set of work falls under the relay-based collaboration model. We investigate the feasibility of this collaboration model for real-time applications. We quantify the impact of packet deadlines and cooperation overhead on system performance, and we discuss the impact of interference that secondary transmissions may cause. Based on this analysis, we develop an interference-aware, reliable cooperative framework that improves the packet reception rate of both user types with low overhead. We then extend our investigation of the relay-based collaboration model from a single hop to multiple hops in the form of cooperative routing. We formulate the routing problem as an overlapping coalition formation game in which each coalition represents a routing path between a primary source and destination, consisting of multiple SUs as intermediate relays. The proposed model allows SUs to participate in more than one coalition and creates more transmission opportunities for them, while achieving stable routing paths for the PUs.

Our final set of work deals with the challenges in the auction-based collaboration model. We consider an online setting of spectrum auctions where the participation and valuation of both bidders and sellers are stochastic. We analyze the behavior of bidders and sellers in such settings and develop auction mechanisms that are truthful with respect to bid and time, improving spectrum reuse, auction efficiency, and revenue. The findings from our research will help in understanding the underlying challenges in future networks, building a better spectrum ecosystem, and encouraging new spectrum sharing models in wireless broadband communications.
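The reputation-based defense against SSDF attacks can be sketched in miniature: fuse sensing reports by a reputation-weighted vote, and adjust each SU's reputation by whether its report agreed with the verified outcome. The update rule and constants are invented for illustration; the dissertation's algorithm additionally uses clustering, which is omitted here.

```python
# Simplified sketch of reputation-weighted cooperative sensing; the
# clustering component of the actual defense is omitted.

def fuse_reports(reports, reputation):
    """Reputation-weighted majority vote on 'channel busy' reports."""
    busy = sum(reputation[u] for u, r in reports.items() if r)
    idle = sum(reputation[u] for u, r in reports.items() if not r)
    return busy > idle

def update_reputation(reports, reputation, truth, step=0.1):
    """Reward SUs that agreed with the verified outcome; penalize others."""
    for u, r in reports.items():
        if r == truth:
            reputation[u] = min(1.0, reputation[u] + step)
        else:
            reputation[u] = max(0.0, reputation[u] - step)

rep = {"su1": 0.5, "su2": 0.5, "mal": 0.5}
# The malicious user persistently reports the opposite of the ground truth.
for truth in [True, False, True, True, False]:
    reports = {"su1": truth, "su2": truth, "mal": not truth}
    update_reputation(reports, rep, truth)
```

After a few rounds the attacker's weight decays toward zero, so its falsified reports no longer sway the fused decision.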
- Title
- Multi-Task Learning and Its Application to Geospatio-Temporal Data
- Creator
- Xu, Jianpeng
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
Multi-task learning (MTL) is a data mining and machine learning approach for modeling multiple prediction tasks simultaneously by exploiting the relatedness among the tasks. MTL has been successfully applied to various domains, including computer vision, healthcare, genomics, recommender systems, and natural language processing. The goals of this thesis are: (1) to investigate the feasibility of applying MTL to geospatio-temporal prediction problems, particularly those encountered in the climate and environmental science domains, and (2) to develop novel MTL frameworks that address the challenges of building effective predictive models from geospatio-temporal data.

The first contribution of this thesis is an online temporal MTL framework called ORION for ensemble forecasting problems. Ensemble forecasting uses numerical methods to simulate the evolution of nonlinear dynamic systems, such as climate and hydrological systems. ORION aims to effectively aggregate the forecasts generated by different ensemble members for a future time window, where each forecast is obtained by perturbing the starting condition of the computer model or by using a different model representation. ORION treats the prediction for each time point in the forecast window as a distinct prediction task, where task relatedness is enforced by imposing temporal smoothness and mean regularization constraints. A novel online update-with-restart strategy is proposed to handle missing observations in the training data. ORION can also be optimized for different objectives, such as ε-insensitive and quantile loss functions.

The second contribution of this thesis is an MTL framework named GSpartan that can perform inference at multiple locations simultaneously while allowing the local models for different locations to be jointly trained. GSpartan assumes that the local models share a common, low-rank representation and employs graph Laplacian regularization to enforce constraints due to the inherent spatial autocorrelation of the data. Sparsity and non-negativity constraints are also incorporated into the formulation to ensure interpretability of the models.

GSpartan considers only the spatial autocorrelation of the data. It is also a batch learning algorithm, which makes it difficult to scale up to global-scale data. To address these limitations, a new framework called WISDOM is proposed, which incorporates task relatedness across both space and time. WISDOM encodes the geospatio-temporal data as a tensor and performs supervised tensor decomposition to identify the latent factors that capture the inherent spatial and temporal variabilities of the data as well as the relationship between the predictor and target variables. The framework is unique in that it trains distinct spatial and temporal prediction models from the latent factors of the decomposed tensor and aggregates the outputs of these models to obtain the final prediction. WISDOM also employs an incremental learning algorithm that can systematically update the models when training examples become available for a new time period or a new location.

Finally, the geospatio-temporal data for many scientific applications are often available at varying spatial scales. For example, they can be generated by computer models simulated at different grid resolutions (e.g., the global and regional models used in climate modeling). A simple way to handle the predictor variables generated from such multi-scale data is to concatenate them into a single feature vector and train WISDOM on the concatenated vectors. However, this strategy may not be effective, as it ignores the inherent dependencies between variables at different scales. To overcome this limitation, this thesis presents an extension of WISDOM called MUSCAT for handling multi-scale geospatio-temporal data. MUSCAT enforces consistency among the latent factors extracted from the spatio-temporal tensors at different scales while inheriting the benefits of WISDOM. Given the massive size of the multi-scale spatio-temporal tensors, a novel supervised incremental multi-tensor decomposition algorithm is developed to efficiently learn the model parameters.
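The temporal-smoothness and mean-regularization constraints in an ORION-style objective can be written out concretely: one weight vector per forecast time step, a data-fit term, a penalty on differences between adjacent steps' weights, and a penalty pulling every step's weights toward their mean. The plain squared loss and notation below are our simplification; ORION's actual formulation and solver differ.

```python
# Illustrative ORION-style multi-task objective (simplified; squared loss
# stands in for the epsilon-insensitive / quantile losses in the thesis).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def orion_style_objective(W, X, y, lam_smooth=1.0, lam_mean=1.0):
    """W[t]: weight vector for forecast step t; X[t], y[t]: its features/target."""
    T = len(W)
    loss = sum((dot(W[t], X[t]) - y[t]) ** 2 for t in range(T))
    smooth = sum(sq_dist(W[t], W[t + 1]) for t in range(T - 1))  # temporal smoothness
    mean_w = [sum(w[j] for w in W) / T for j in range(len(W[0]))]
    mean_reg = sum(sq_dist(w, mean_w) for w in W)  # pull each task toward the mean
    return loss + lam_smooth * smooth + lam_mean * mean_reg

# Identical weights that fit the data perfectly incur zero total objective.
obj = orion_style_objective([[1.0, 0.0]] * 3, [[2.0, 5.0]] * 3, [2.0] * 3)
```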
- Title
- Hardware Algorithms for High-Speed Packet Processing
- Creator
- Norige, Eric
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
The networking industry faces enormous challenges in scaling devices to support the exponential growth of internet traffic as well as the increasing number of features being implemented inside the network. Algorithmic hardware improvements to networking components have largely been neglected due to the ease of leveraging increased clock frequency and compute power and the risks of implementing complex hardware designs. As clock frequency growth slows, algorithmic solutions become important to fill the gap between current-generation capability and next-generation requirements. This dissertation presents algorithmic solutions to networking problems in three domains: deep packet inspection (DPI), firewall (and other) ruleset compression, and non-cryptographic hashing.

The improvements in DPI are two-pronged. The first is in the area of application-level protocol field extraction, which allows security devices to precisely identify packet fields for targeted validity checks. By using counting automata, we achieve precise parsing of non-regular protocols with small, constant per-flow memory requirements, extracting at rates of up to 30 Gbps on real traffic in software while using only 112 bytes of state per flow. The second DPI improvement is on the longstanding regular expression matching problem, where we complete the HFA solution to the DFA state explosion problem with efficient construction algorithms and an optimized memory layout for hardware or software implementation. These methods construct, in seconds, automata too complex for previous methods, while being capable of 29 Gbps throughput with an ASIC implementation.

Firewall ruleset compression enables more firewall entries to be stored in a fixed-capacity pattern matching engine, and can also be used to reorganize a firewall specification for higher-performance software matching. A novel recursive structure called TUF is given to unify the best known solutions to this problem and suggest future avenues of attack. These algorithms, with little tuning, achieve a 13.7% improvement in compression on large, real-life classifiers, and can achieve the same results as existing algorithms while running 20 times faster.

Finally, non-cryptographic hash functions can be used for anything from hash tables that track network flows to packet sampling for traffic characterization. We give a novel approach to generating hardware hash functions between the extremes of expensive cryptographic hash functions and low-quality linear hash functions. To evaluate these mid-range hash functions properly, we develop new evaluation methods to better distinguish non-cryptographic hash function quality. The hash functions described here achieve low-latency, wide hashing with good avalanche and universality properties at a much lower cost than existing solutions.
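The avalanche property mentioned in the abstract is easy to state operationally: flipping one input bit should flip roughly half of the output bits on average. The sketch below measures this for a small xorshift-multiply mixer; the toy hash is for illustration only and is not one of the dissertation's hardware constructions.

```python
# A rough avalanche test: flipping one input bit should flip ~50% of the
# output bits. The toy hash is illustrative, not the dissertation's design.

def toy_hash(x):
    """A small 32-bit xorshift-multiply mixer."""
    x &= 0xFFFFFFFF
    x ^= x >> 16
    x = (x * 0x45D9F3B) & 0xFFFFFFFF
    x ^= x >> 16
    x = (x * 0x45D9F3B) & 0xFFFFFFFF
    x ^= x >> 16
    return x

def avalanche_score(h, samples=200):
    """Average fraction of output bits flipped per single-bit input flip."""
    total = flips = 0
    for i in range(samples):
        x = (i * 2654435761) & 0xFFFFFFFF  # cheap pseudo-random inputs
        for bit in range(32):
            diff = h(x) ^ h(x ^ (1 << bit))
            flips += bin(diff).count("1")
            total += 32
    return flips / total

score = avalanche_score(toy_hash)
```

A score near 0.5 indicates good diffusion; linear hashes of the kind the abstract contrasts against typically score much worse under such tests.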
- Title
- Multi-objective regression with application to the climate domain
- Creator
- Abraham, Zubin John
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
-
Regression-based approaches are widely used in climate research to derive the statistical, spatial, and temporal relationships among climate variables. Despite the extensive literature, existing approaches are insufficient to address the unique challenges arising from the data characteristics and requirements of this domain. For example, climate variables such as precipitation have zero-inflated distributions, which render ineffective any linear regression models constructed from the data. In addition, whereas traditional regression-based approaches emphasize minimizing the discrepancy between observed and predicted values, there is a growing demand for regression outputs that satisfy other domain-specific criteria. To address these challenges, this thesis presents multi-objective regression frameworks designed to extend current regression-based approaches to meet the needs of climate researchers. First, a framework called Integrated Classification and Regression (ICR) is developed to accurately capture both the timing of rain events and the magnitude of rain amounts in zero-inflated precipitation data. The second multi-objective regression framework focuses on modeling the extreme values of a distribution without degrading the overall accuracy of predicting non-extreme values. The third framework both minimizes the divergence between the regression output and observed data and maximizes the fit of their cumulative distribution functions. The fourth contribution extends this framework to a multi-output setting, ensuring that the joint distribution of the multiple regression outputs is realistic and consistent with true observations.
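The integrated classification-and-regression idea for zero-inflated data can be sketched in two stages: a classifier decides whether it rains at all, and a regression model supplies the amount only on predicted wet days. The threshold and the two stand-in component models below are hypothetical, not the dissertation's actual estimators.

```python
# Sketch of the ICR idea for zero-inflated precipitation (component models
# and threshold are invented stand-ins for illustration).

def icr_predict(features, rain_prob, rain_amount, threshold=0.5):
    """Predict 0 on predicted dry days, else the regressed rain amount."""
    return [amt if p >= threshold else 0.0
            for p, amt in zip(rain_prob(features), rain_amount(features))]

def prob_model(xs):
    """Hypothetical classifier score: rain probability rises with humidity."""
    return [min(1.0, h / 100.0) for h in xs]

def amount_model(xs):
    """Hypothetical regressor: rain amount scales with humidity."""
    return [0.3 * h for h in xs]

humidity = [10, 95, 60, 30]
preds = icr_predict(humidity, prob_model, amount_model)
```

This structure is why a single linear model fails on zero-inflated data: no one line can fit both the mass at zero and the conditional rain amounts.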
- Title
- Automatic verification and revision of multitolerant programs
- Creator
- Chen, Jingshu
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
-
The notion of multitolerance is based on the observation that modern programs are often subject to multiple faults, and that the requirements in the presence of these faults vary based on the nature of the faults, their severity, and the cost of providing fault-tolerance for them. Assurance of multitolerant systems is necessary because they are integral parts of our lives. This dissertation proposes to provide such assurance via automated verification and revision.

Regarding verification, we focus on verification of self-stabilization, which is the ability of a program to recover from arbitrary states. We consider verification of self-stabilization because several multitolerant systems are indeed stabilizing. Also, most of the literature on verification of fault-tolerance focuses on safety properties; our work complements it by considering liveness properties. Hence, we envision verification of multitolerant programs by using existing approaches for verifying safety and the results from this dissertation for verifying liveness. We propose a technique based on a bottleneck (fairness requirements) identified in existing work on verification of stabilization. Our approach uses the role of fairness along with symbolic model checking, and hence reduces the cost of verification substantially. We also propose a constraint-based approach that reduces the task of verifying self-stabilization to the well-studied problem of constraint solving, so that one can leverage existing highly optimized solutions (SAT/SMT solvers) to reduce the verification cost.

Regarding revision, we focus on revising existing programs to obtain corresponding multitolerant ones. Revising a program manually is expensive, since it requires additional verification steps to guarantee correctness, and manual revision may violate existing requirements. For these reasons, we propose an automatic approach to revise a given program to add multitolerance to the given class(es) of faults. Specifically, we characterize multitolerance in terms of strong multitolerance and weak multitolerance. Intuitively, strong multitolerance provides higher guarantees than weak multitolerance. However, there are scenarios where designing a strong multitolerant program is expensive or impossible although designing a weak multitolerant one is feasible. We investigate the complexity of automatic revision for adding multitolerance. In particular, we identify instances where adding weak multitolerance is NP-hard even though adding strong multitolerance in the same setting is in P. We also develop algorithms (and heuristics) for automatically revising existing programs to add multitolerance, and we implement these algorithms in a model repair tool. Additionally, we build a lightweight framework that utilizes our model repair tool for automatically revising UML state diagrams to add fault-tolerance. This framework has practical and methodological significance for the development of concurrent software. Specifically, it allows designers to revise an existing UML model to add fault-tolerance without detailed knowledge of the formalism behind the model repair algorithms.
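The self-stabilization property being verified can be illustrated with a brute-force check on a tiny system: from every state, execution must eventually reach the set of legitimate states. The dissertation's techniques use symbolic model checking and SAT/SMT encodings; this explicit-state, deterministic sketch only illustrates the property itself.

```python
# Explicit-state illustration of the stabilization property: every state
# must converge to a legitimate state. (Real tools work symbolically.)

def converges(states, step, legitimate):
    """True if every state eventually reaches a legitimate state."""
    for s in states:
        seen, cur = set(), s
        while cur not in legitimate:
            if cur in seen:          # cycle that never reaches legitimacy
                return False
            seen.add(cur)
            cur = step(cur)          # deterministic successor for simplicity
    return True

# Toy "self-stabilizing counter": any value in 0..7 converges to 0 by halving.
ok = converges(range(8), lambda v: v // 2, legitimate={0})
```

A system whose faulty states form a cycle outside the legitimate set (e.g., a counter stuck rotating through 1, 2, 3) would fail this check, which is exactly the liveness violation the verification targets.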
- Title
- Flexible spectrum use in channel bonding wireless networks
- Creator
- Yang, Xi (Software engineer)
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
Channel bonding, which assembles multiple narrow channels into one logical channel, can speed up data transmission and achieve better bandwidth utilization in wireless networks. Since its introduction in 802.11n, channel bonding has been extended continually to support wider channels, making low-lag, high-speed wireless communication possible. However, different radio technologies have different requirements on channel width, and devices that use different channel widths coexisting in a contention domain may cause inefficiency and unfairness issues. For example, narrowband devices obtain medium access opportunities more easily because they do not need to wait for the entire wide band to be idle. Therefore, although wideband devices can provide higher transmission speed, they are at a disadvantage when contending with narrowband devices.

To this end, we propose a flexible spectrum use channel bonding (FSUB) protocol in which a node is allowed to start a transmission whenever some narrow channels are idle and to gradually increase the channel width during the transmission. Because a node dynamically adjusts the channel width during a communication, we use a convolution method to achieve fast spectrum agreement between the transmitter and the receiver. To address contention between devices in a wide band of spectrum, we introduce a compound preamble that makes collisions detectable in the frequency domain, and use a parallel bitwise arbitration mechanism to quickly determine the winner. We implement and evaluate the proposed protocol through both the GNU Radio/USRP platform and ns-2 simulations. The results show that the proposed protocol addresses well the inefficiency and unfairness issues caused by heterogeneous radio coexistence. Channel bonding devices using FSUB gain more medium access opportunities and can aggregate wider channels than with other channel bonding protocols in the presence of narrowband interference. FSUB enables a device to always benefit from channel bonding, regardless of the level of narrowband interference.
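The width-adaptation step in a flexible-bonding scheme like FSUB can be sketched as a search for the widest contiguous run of idle narrow channels; spectrum agreement, compound preambles, and arbitration are all omitted, and the busy-map encoding is our own.

```python
# Sketch of the channel-width decision only; the protocol machinery
# (agreement, preambles, arbitration) is omitted.

def widest_idle_block(busy):
    """Return (start, width) of the longest run of idle channels."""
    best = (0, 0)
    start = width = 0
    for i, b in enumerate(busy + [True]):   # sentinel closes the last run
        if not b:
            if width == 0:
                start = i
            width += 1
        else:
            if width > best[1]:
                best = (start, width)
            width = 0
    return best

# Channels marked True are busy. Idle runs: width 2 at 1, width 4 at 4.
print(widest_idle_block([True, False, False, True, False, False, False, False]))  # → (4, 4)
```

A narrowband-friendly node could start transmitting on any single idle channel immediately and widen to this block as it becomes free, which is the behavior the abstract describes.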
- Title
- Optimization of environmental flow to preserve/improve ecological function
- Creator
- Herman, Matthew Ryan
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
Freshwater is vital for all life, and with the growth of the human population, the need for this limited resource has increased. However, human activities have significant impacts on freshwater ecosystems, leading to their degradation. To ensure that freshwater resources remain sustainable for future generations, it is critical to understand how to evaluate stream health and mitigate degradation. To address these issues, the following research objectives were developed: 1) assess current methods used to evaluate stream health, in particular macroinvertebrate and fish stream health indices, and 2) introduce a new strategy to improve stream health to a desirable condition at the lowest cost by optimizing a best management practice (BMP) implementation plan. Analysis of over 85 macroinvertebrate and fish stream health indices indicated that the most commonly used macroinvertebrate and fish indices are the Benthic Index of Biotic Integrity (B-IBI), the Ephemeroptera Plecoptera Trichoptera (EPT) index, the Hilsenhoff Biotic Index (HBI), and the Index of Biological Integrity (IBI). These indices are often modified to take into account local ecosystem characteristics. To address objective two, several hydrological models, including the Soil and Water Assessment Tool and the Hydrologic Integrity Tool, were integrated, and the results were used to develop stream health predictor models. All of the models were guided by a genetic algorithm to design the watershed-scale management strategies. The coupled system successfully identified eight BMP implementation plans that resulted in excellent stream health conditions according to the IBI score.
Show less
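The genetic-algorithm-guided search described above can be illustrated with a toy sketch: candidate BMP plans are bit strings (1 = install a BMP at that site), scored by a stand-in health predictor, and evolved by selection, crossover, and mutation. The cost vector, the health predictor, and all parameters here are invented for illustration and are not from the dissertation.

```python
import random

random.seed(1)
SITES, COST = 8, [3, 1, 4, 1, 5, 9, 2, 6]

def health(plan):                      # stand-in stream-health predictor
    return sum(plan)                   # more BMPs -> healthier (toy model)

def fitness(plan):                     # reward health, penalize total cost
    return health(plan) * 10 - sum(c for c, p in zip(COST, plan) if p)

def evolve(pop_size=30, gens=40):
    pop = [[random.randint(0, 1) for _ in range(SITES)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, SITES)
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < 0.1:
                child[random.randrange(SITES)] ^= 1  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The real system plugs SWAT-derived stream health predictors into the fitness function in place of the toy model here.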
- Title
- Using Evolutionary Approach to Optimize and Model Multi-Scenario, Multi-Objective Fault-Tolerant Problems
- Creator
- Zhu, Ling
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
Fault-tolerant design involves different scenarios, such as scenarios with no fault in the system, with faults occurring randomly, with different operating conditions, and with different loading conditions. For each scenario, there can be multiple requirements (objectives). To assess the performance of a design (solution), it must be evaluated over a number of different scenarios, each containing its own requirements. We consider this a multi-scenario, multi-objective (MSMO) problem. Despite its practical importance and prevalence in engineering applications, few studies have systematically addressed the MSMO problem. In this dissertation, we focus on optimizing and modeling MSMO problems and propose various approaches to solve different types of MSMO optimization problems, especially multi-objective fault-tolerant problems. We classify MSMO optimization problems into two categories: scenario-dependent and scenario-independent. For the scenario-dependent MSMO problem, we review existing methodologies and suggest two evolutionary methods for handling multiple scenarios and objectives: an aggregated method and an integrated method. The effectiveness of both methods is demonstrated on several case studies, including numerical problems and engineering design problems; the engineering problems include cantilever-type welded beam design, truss bridge design, and four-bar truss design. The experimental results show that both methods can find a set of widely distributed solutions that are compromises among the respective objective values under all scenarios. We also model fault-tolerant programs using the aggregated method. We synthesize three fault-tolerant distributed programs: a Byzantine agreement program, a token ring circulation program, and a consensus program with failure detector S. The results show that the evolutionary-based MSMO approach, as a generic method, can effectively model fault-tolerant programs.
For the scenario-independent MSMO problem, we apply an evolutionary multi-objective approach. As a case study, we optimize a probabilistic self-stabilizing program, a special type of fault-tolerant program, and obtain several interesting, counter-intuitive observations under different scenarios.
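The aggregated evaluation described above can be sketched minimally: each candidate design is scored in every scenario, and the per-scenario values of each objective are aggregated (here, worst case) into a single multi-objective vector that a standard evolutionary algorithm can then rank. The scenario set, objective functions, and worst-case aggregation rule are illustrative assumptions, not the dissertation's actual formulation.

```python
def evaluate_aggregated(design, scenarios, objectives, aggregate=max):
    """Return one aggregated value per objective across all scenarios."""
    vector = []
    for obj in objectives:
        # Evaluate this objective in every scenario, then aggregate.
        per_scenario = [obj(design, s) for s in scenarios]
        vector.append(aggregate(per_scenario))
    return vector

# Toy example: a 1-D "design" x, two loading scenarios, two objectives.
scenarios = [1.0, 2.5]                  # e.g., loading factors
objectives = [
    lambda x, s: s * x * x,             # "cost" grows with load
    lambda x, s: abs(s - x),            # "deviation" from the load
]
print(evaluate_aggregated(1.5, scenarios, objectives))  # -> [5.625, 1.0]
```

Worst-case aggregation makes the resulting Pareto front conservative across scenarios; other aggregation rules (mean, weighted sum) trade that robustness for average-case performance.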
- Title
- Energy Conservation in Heterogeneous Smartphone Ad Hoc Networks
- Creator
- Mariani, James
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
-
In recent years, mobile computing has expanded rapidly, to the point that there are now more devices than there are people. While it was once common for every household to have one PC, it is now common for every person to have a mobile device. With the increased use of smartphones, there has also been an increase in the need for mobile ad hoc networks, in which phones connect directly to each other without the need for an intermediate router. Most modern smartphones are equipped with both Bluetooth and Wifi Direct, where Wifi Direct has a better transmission range and rate and Bluetooth is more energy efficient; however, only one or the other is used in a smartphone ad hoc network. We propose HSNet, a Heterogeneous Smartphone Ad Hoc Network framework that enables automatic switching between Wifi Direct and Bluetooth, emphasizing minimal energy consumption while still maintaining an efficient network. We develop an application to evaluate the HSNet framework, which shows significant energy savings when our switching algorithm sends messages via a less energy-intensive technology in situations where energy conservation is desired. We discuss additional features of HSNet, such as load balancing, which helps increase the lifetime of the network by distributing slave nodes more evenly among connected master nodes. Finally, we show that the throughput of our system is not affected by technology switching in most scenarios. Future work on this project includes exploring energy-efficient routing as well as simulation and scale testing for larger and more diverse smartphone ad hoc networks.
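The kind of switching policy the description suggests can be sketched as follows: prefer Bluetooth (lower energy) when it can deliver the message, and fall back to Wifi Direct (longer range, higher rate) otherwise. The range threshold, message-size cutoff, and relative energy costs below are invented assumptions, not HSNet's actual parameters or algorithm.

```python
BT_RANGE_M = 10        # assumed usable Bluetooth range (meters)
BT_COST_PER_KB = 1.0   # assumed relative energy cost units
WIFI_COST_PER_KB = 3.0

def choose_technology(distance_m, message_kb, battery_low):
    """Pick the radio for one message; returns (technology, energy_cost)."""
    # Use Bluetooth only when the peer is in range AND energy matters
    # (low battery) or the payload is small enough not to need Wifi's rate.
    if distance_m <= BT_RANGE_M and (battery_low or message_kb < 64):
        return "bluetooth", BT_COST_PER_KB * message_kb
    return "wifi-direct", WIFI_COST_PER_KB * message_kb

print(choose_technology(5, 16, battery_low=True))   # -> ('bluetooth', 16.0)
print(choose_technology(40, 16, battery_low=True))  # -> ('wifi-direct', 48.0)
```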
- Title
- IMPROVING SPECTRUM EFFICIENCY IN HETEROGENEOUS WIRELESS NETWORKS
- Creator
- Liu, Chin-Jung
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
-
Over the past decades, bandwidth-intensive applications previously confined to wired networks have been migrating to wireless networks. This trend has brought unprecedented demand for wireless bandwidth. Wireless traffic is destined to dominate Internet traffic in the future, but many of the popular wireless spectrum bands, especially the cellular and ISM bands, are already congested. On the other hand, some other bands, such as the TV bands, often do not fully utilize their spectrum. However, spectrum allocation is tightly regulated by the authorities, and adjusting the allocation is extremely difficult. The uneven utilization and the rigid regulation have led to the proposal of heterogeneous wireless networks, including cognitive radio networks (CRNs) and heterogeneous cellular networks (HetNets). CRNs, which usually operate on technologies different from the spectrum owner's, attempt to reuse the owner's idle spectrum (i.e., white space), while HetNets attempt to improve spectrum utilization through smallcells. This dissertation addresses some of the challenging problems in these heterogeneous wireless networks.

In CRNs, secondary users (SUs) are allowed to access the white spaces opportunistically as long as they do not interfere with the primary users (PUs, i.e., the spectrum owners). CRNs provide a promising means to improve spectral efficiency, but they also introduce a set of new research challenges. We identify and discuss two problems in CRNs, namely non-contiguous control channel establishment and k-protected routing protocol design. The first problem deals with the SUs' need for a channel over which to transfer control information. Most existing approaches are channel-hopping (CH) based, which is inapplicable to NC-OFDM. We propose an efficient method for guaranteed NC-OFDM-based control channel establishment by utilizing short pulses on OFDM subcarriers. The results show that the time needed to establish a control channel is lower than that of CH-based approaches. The second problem deals with the interruption of a routing path in a CRN when a PU becomes active again. Existing reactive approaches, which seek an alternative route after a PU returns, suffer from potentially long delays and possible interruption if no alternative can be found. We propose a k-protected routing protocol that builds routing paths with preassigned backups, guaranteed to withstand k returning PUs without being interrupted. Our results show that k-protected routing paths are never interrupted even when k PUs return, and they have significantly shorter backup activation delays.

HetNets, formed by macrocells and smallcells with different coverage sizes, have been proposed to satisfy increased bandwidth demand with the limited and crowded wireless spectrum. Since the smallcells and macrocells operate on the same frequency, interference becomes a critical issue; detecting and mitigating interference are two of the challenges introduced by HetNets. We first study the interference identification problem. Existing interference identification approaches often regard more cells as interferers than necessary. We propose to identify interference by analyzing the received patterns observed by the mobile stations. The results show that our approach identifies all true interferers and excludes most non-interfering cells. The second research problem in HetNets is to provide effective solutions to mitigate the interference. The interference mitigation approaches in the literature mainly try to avoid interference, either through resource isolation, which leads to significantly fewer resources, or through power control, which sacrifices signal quality and coverage. Instead of conservatively avoiding interference, we propose to mitigate it by precanceling the interfering signals from known interferers. With precancellation, the same set of resources can be shared between cells, and thus throughput is improved.

This dissertation addresses several challenges in heterogeneous wireless networks, including CRNs and HetNets. The proposed non-contiguous control channel protocol and k-protected routing protocol for CRNs can significantly improve the feasibility of CRNs in future wireless network applications. The proposed interference identification and interference precancellation approaches can effectively mitigate interference and improve throughput and spectrum utilization in HetNets. This dissertation aims at breaking the barriers to supporting heterogeneous wireless networks, in order to improve the utilization of the precious and limited wireless spectrum.
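The precancellation idea described above can be shown with a small numeric sketch: if the interfering cell's transmitted symbols and channel gain are known, the interference contribution can be subtracted from the received signal rather than avoided. The flat real-valued channel gain and the symbol values are simplifying assumptions for illustration; the dissertation's actual signal model is not reproduced here.

```python
def precancel(received, interferer_symbols, interferer_gain):
    """Subtract the known interference contribution from each sample."""
    return [r - interferer_gain * s
            for r, s in zip(received, interferer_symbols)]

desired = [1.0, -1.0, 1.0]            # symbols the serving cell sent
interference = [0.5, 0.5, -0.5]       # known symbols from the interferer
gain = 0.8                            # assumed known channel gain

# Simulate reception: desired signal plus scaled interference.
received = [d + gain * s for d, s in zip(desired, interference)]

cleaned = precancel(received, interference, gain)
print(cleaned)  # recovers the desired symbols (up to float rounding)
```

The practical difficulty, which the dissertation targets, is obtaining the interferer's symbols and channel estimate reliably enough for the subtraction to help rather than hurt.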
- Title
- Defending against browser based data exfiltration attacks
- Creator
- Sood, Aditya
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
-
The global nature of the Internet has revolutionized cultural and commercial interactions while at the same time providing opportunities for cyber criminals. Crimeware services now exist that have transformed the nature of cyber crime by making it more automated and robust. Furthermore, these crimeware services are sold as part of a growing underground economy, which has provided a financial incentive to create and market more sophisticated crimeware. Botnets have evolved to become the primary, automated crimeware. The current, third generation of botnets targets online financial institutions across the globe. Willie Sutton, the bank robber, when asked why he robbed banks, is credited with replying: "That is where the money is." Today, financial institutions are online, so "that is where the money is," and criminals are swarming. Because the browser is most people's window to the Internet, it has become the primary target of crimeware, bots in particular. A common task is to steal credentials for financial institutions, such as accounts and passwords.

Our goal is to prevent browser-based data exfiltration attacks. Currently, bots use a variant of the Man-in-the-Middle attack, known as the Man-in-the-Browser attack, for data exfiltration. The two most widely deployed browser-based data exfiltration attacks are Form-grabbing and Web Injects. Form-grabbing is used to steal data such as credentials in web forms, while the Web Injects attack is used to coerce the user into providing supplemental information such as a Social Security Number (SSN). Current security techniques emphasize detection of malware. We take the opposite approach: we assume that clients are infected with malware and then work to thwart the attack. This thesis makes the following contributions. We introduce WPSeal, a method that a financial institution can use to discover that a Web Injects attack is happening, so an account can be shut down before any damage occurs; this technique operates entirely on the server side (i.e., the financial institution's side). We develop a technique to encrypt form data, rendering it useless for theft; this technique is also controlled from the server side, and using WPSeal, we can detect whether the encryption scheme has been tampered with. We present an argument that the current hooking-based capabilities of bots cannot circumvent WPSeal (or the encryption that WPSeal protects); that is, criminals will have to devise a totally different class of attack. In both cases, we do not prevent the attack. Instead, we detect it before damage can be done, rendering it harmless.
- Title
- Continuous user authentication and identification using user interface interactions on mobile devices
- Creator
- Sharma, Vaibhav Bhushan
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
We investigate whether a mobile application can continuously and unobtrusively authenticate and identify its users based only on their interactions with the application's User Interface. A unique advantage this modality provides over currently explored implicit modalities on mobile devices is that every user of the application is automatically enrolled in the classification system. Every user must interact with the User Interface of an application in order to use it, and therefore this modality is always guaranteed to have a sufficient number of inputs for training and testing purposes. Using the different types of input controls available on the Android platform, we collected interactions from 42 users over five sessions. We created base classifiers from each type of input control and combined them into an ensemble classifier to authenticate and identify users. We find that a Support Vector Machine-based ensemble classifier achieves a mean equal error rate of 5% for authentication and a mean accuracy of 90% for identification, and that Support Vector Machine-based ensemble classifiers outperform other techniques in both cases. While the ensemble classifier's performance for authentication and identification is not sufficient for it to replace the primary authentication mechanisms currently used in mobile applications, its truly continuous nature motivates its use in combination with primary mechanisms.
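The ensemble scheme described above can be sketched structurally: one base classifier per type of UI input control, combined by majority vote into a single user decision. The base classifiers below are trivial threshold stand-ins (the dissertation uses SVM-based classifiers), and the feature names and user labels are invented, purely to show how per-control votes combine.

```python
from collections import Counter

def ensemble_identify(interaction, base_classifiers):
    """Each base classifier votes for a user id; the majority wins."""
    votes = [clf(interaction) for clf in base_classifiers]
    return Counter(votes).most_common(1)[0][0]

# Toy base classifiers keyed on different interaction features
# (hypothetical: press duration for buttons, drag speed for sliders).
button_clf = lambda x: "alice" if x["press_ms"] < 120 else "bob"
slider_clf = lambda x: "alice" if x["drag_px_s"] > 300 else "bob"
text_clf   = lambda x: "bob"   # always disagrees, to exercise the vote

sample = {"press_ms": 95, "drag_px_s": 410}
print(ensemble_identify(sample, [button_clf, slider_clf, text_clf]))  # -> alice
```

In practice the vote would typically be weighted by each base classifier's validation accuracy rather than counted equally, which is one natural way such ensembles are tuned.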
- Title
- Surface matching and chemical scoring to detect unrelated proteins binding similar small molecules
- Creator
- Van Voorst, Jeffrey Ryan
- Date
- 2011
- Collection
- Electronic Theses & Dissertations
- Description
-
How can one deduce whether two clefts or pockets in different protein structures bind the same small molecule when there is no significant sequence or structural similarity between the proteins? Human pattern recognition, based on extensive structural biology or ligand design experience, is the best choice when the number of sites is small. However, scaling to the thousands of structures in structural databases requires implementing that experience as a computational method. The primary advantage of such a computational tool is the ability to focus human expertise on a much smaller, enriched set of binding sites.

Although a number of tools have been developed for this purpose by many groups [61, 51, 86, 88, 91], to our knowledge a basic hypothesis remains untested: two proteins that bind the same small molecule have binding sites with similar chemical and shape features, even when the proteins do not share significant sequence or structural similarity. A computational method to compare protein small-molecule binding sites based on surface and chemical complementarity is proposed and implemented as a software package named SimSite3D. This method is protein-structure based, does not rely on explicit protein sequence or main chain similarities, and does not require the alignment of atomic centers. It has been engineered to provide a detailed search of one fragment site versus a dataset of about 13,000 full ligand sites in 2–4 hours (on one processor core).

Several contributions are presented in this dissertation. First, several examples are presented in which SimSite3D finds significant matches between binding sites that have similar ligand fragments bound but are unrelated in sequence or structure. Second, including the complementarity of binding site molecular surfaces helps to distinguish between sites that share a similar chemical motif but do not necessarily bind the same molecule. Third, a number of clear examples are provided to illustrate the challenges in comparing binding sites, which should be addressed in order for a binding site comparison method to gain widespread acceptance similar to that enjoyed by BLAST [3, 4]. Finally, an optimization method for addressing protein (and small molecule) flexibility in the context of binding site comparisons is presented, prototyped, and tested.

Throughout the work, computational models were chosen to strike a delicate balance between achieving sufficient accuracy of alignments, discriminating between accurate and poor alignments, and discriminating between similar and dissimilar sites. Each of these criteria is important; due to the nature of the binding site comparison problem, each presents a separate challenge and may require compromises to achieve acceptable performance in all three categories.

At present, the problem of addressing flexibility when comparing binding site surfaces has not been presented or published by any other research group. In fact, the problem of modeling flexibility to determine correspondences between binding sites is an untouched problem of great importance. Therefore, the final goal of this dissertation is to prototype and evaluate a method that uses inverse kinematics and gradient-based optimization to optimize a given objective function subject to allowed protein motions encoded as stereochemical constraints. In particular, we seek to simultaneously maximize the surface and chemical complementarity of two closely aligned sites subject to directed changes in side-chain dihedral angles.
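The constrained optimization idea in the final paragraph can be reduced to a minimal sketch: adjust a single side-chain dihedral angle by gradient ascent to maximize a complementarity-like score, clamping the angle to an allowed range as a stand-in for stereochemical constraints. The objective function, the allowed range, and the single-angle setting are all invented simplifications; the dissertation's method handles many coupled angles via inverse kinematics.

```python
import math

def optimize_dihedral(score, angle, lo, hi, steps=200, lr=0.05, h=1e-4):
    """Gradient ascent on score(angle) with a box constraint [lo, hi]."""
    for _ in range(steps):
        grad = (score(angle + h) - score(angle - h)) / (2 * h)  # numeric gradient
        angle = min(hi, max(lo, angle + lr * grad))             # project to range
    return angle

# Toy "complementarity" peaked when the angle is 60 degrees.
target = math.radians(60)
score = lambda a: math.cos(a - target)

best = optimize_dihedral(score, angle=0.0, lo=-math.pi, hi=math.pi)
print(round(math.degrees(best)))  # climbs toward the 60-degree optimum
```

A real implementation would replace the scalar angle with the vector of rotatable dihedrals and the box constraint with the stereochemically allowed motions the text describes.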