You are here
Search results
(1  20 of 36)
Pages
 Title
 Kernel methods for biosensing applications
 Creator
 Khan, Hassan Aqeel
 Date
 2015
 Collection
 Electronic Theses & Dissertations
 Description

This thesis examines the design noise robust information retrieval techniques basedon kernel methods. Algorithms are presented for two biosensing applications: (1)High throughput protein arrays and (2) Noninvasive respiratory signal estimation.Our primary objective in protein array design is to maximize the throughput byenabling detection of an extremely large number of protein targets while using aminimal number of receptor spots. This is accomplished by viewing the proteinarray as a...
Show moreThis thesis examines the design noise robust information retrieval techniques basedon kernel methods. Algorithms are presented for two biosensing applications: (1)High throughput protein arrays and (2) Noninvasive respiratory signal estimation.Our primary objective in protein array design is to maximize the throughput byenabling detection of an extremely large number of protein targets while using aminimal number of receptor spots. This is accomplished by viewing the proteinarray as a communication channel and evaluating its information transmission capacity as a function of its receptor probes. In this framework, the channel capacitycan be used as a tool to optimize probe design; the optimal probes being the onesthat maximize capacity. The information capacity is first evaluated for a small scaleprotein array, with only a few protein targets. We believe this is the first effort toevaluate the capacity of a protein array channel. For this purpose models of theproteomic channel's noise characteristics and receptor nonidealities, based on experimental prototypes, are constructed. Kernel methods are employed to extend thecapacity evaluation to larger sized protein arrays that can potentially have thousandsof distinct protein targets. A specially designed kernel which we call the ProteomicKernel is also proposed. This kernel incorporates knowledge about the biophysicsof target and receptor interactions into the cost function employed for evaluation of channel capacity.For respiratory estimation this thesis investigates estimation of breathingrateand lungvolume using multiple noninvasive sensors under motion artifact and highnoise conditions. A spirometer signal is used as the gold standard for evaluation oferrors. A novel algorithm called the segregated envelope and carrier (SEC) estimation is proposed. This algorithm approximates the spirometer signal by an amplitudemodulated signal and segregates the estimation of the frequency and amplitude information. 
Results demonstrate that this approach enables effective estimation ofboth breathing rate and lung volume. An adaptive algorithm based on a combination of Gini kernel machines and wavelet filltering is also proposed. This algorithm is titledthe waveletadaptive Gini (or WAGini) algorithm, it employs a novel wavelet transform based feature extraction frontend to classify the subject's underlying respiratorystate. This information is then employed to select the parameters of the adaptive kernel machine based on the subject's respiratory state. Results demonstrate significantimprovement in breathing rate estimation when compared to traditional respiratoryestimation techniques.
Show less
 Title
 Assessment of functional connectivity in the human brain : multivariate and graph signal processing methods
 Creator
 VillafañeDelgado, Marisel
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

"Advances in neurophysiological recording have provided a noninvasive way of inferring cognitive processes. Recent studies have shown that cognition relies on the functional integration or connectivity of segregated specialized regions in the brain. Functional connectivity quantifies the statistical relationships among different regions in the brain. However, current functional connectivity measures have certain limitations in the quantification of global integration and characterization of...
Show more"Advances in neurophysiological recording have provided a noninvasive way of inferring cognitive processes. Recent studies have shown that cognition relies on the functional integration or connectivity of segregated specialized regions in the brain. Functional connectivity quantifies the statistical relationships among different regions in the brain. However, current functional connectivity measures have certain limitations in the quantification of global integration and characterization of network structure. These limitations include the bivariate nature of most functional connectivity measures, the computational complexity of multivariate measures, and graph theoretic measures that are not robust to network size and degree distribution. Therefore, there is a need of computationally efficient and novel measures that can quantify the functional integration across brain regions and characterize the structure of these networks. This thesis makes contributions in three different areas for the assessment of multivariate functional connectivity. First, we present a novel multivariate phase synchrony measure for quantifying the common functional connectivity within different brain regions. This measure overcomes the drawbacks of bivariate functional connectivity measures and provides insights into the mechanisms of cognitive control not accountable by bivariate measures. Following the assessment of functional connectivity from a graph theoretic perspective, we propose a graph to signal transformation for both binary and weighted networks. This provides the means for characterizing the network structure and quantifying information in the graph by overcoming some drawbacks of traditional graph based measures. Finally, we introduce a new approach to studying dynamic functional connectivity networks through signals defined over networks. 
In this area, we define a dynamic graph Fourier transform in which a common subspace is found from the networks over time based on the tensor decomposition of the graph Laplacian over time."Pages iiiii.
Show less
 Title
 Dynamic network analysis with applications to functional neural connectivity
 Creator
 Golibagh Mahyari, Arash
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

"Contemporary neuroimaging techniques provide neural activity recordings with increasing spatial and temporal resolution yielding rich multichannel datasets that can be exploited for detailed description of anatomical and functional connectivity patterns in the brain. Studies indicate that the changes in functional connectivity patterns across spatial and temporal scales play an important role in a wide range of cognitive and executive processes such as memory and attention as well as in the...
Show more"Contemporary neuroimaging techniques provide neural activity recordings with increasing spatial and temporal resolution yielding rich multichannel datasets that can be exploited for detailed description of anatomical and functional connectivity patterns in the brain. Studies indicate that the changes in functional connectivity patterns across spatial and temporal scales play an important role in a wide range of cognitive and executive processes such as memory and attention as well as in the understanding the causes of many neural diseases and psychopathologies such as epilepsy, Alzheimers, Parkinsons and schizophrenia. Early work in the area was limited to the analysis of static brain networks obtained through averaging longterm functional connectivity, thus neglecting possible timevarying connections. There is growing evidence that functional networks dynamically reorganize and coordinate on millisecond scale for the execution of mental processes. Functional networks consist of distinct network states, where each state is defined as a period of time during which the network topology is quasistationary. For this reason, there has been an interest in characterizing the dynamics of functional networks using high temporal resolution electroencephalogram recordings. In this thesis, dynamic functional connectivity networks are represented by multiway arrays, tensors, which are able to capture the complete topological structure of the networks. This thesis proposes new methods for both tracking the changes in these dynamic networks and characterizing or summarizing the network states. In order to achieve this goal, a Tucker decomposition based approach is introduced for detecting the change points for taskbased electroencephalogram (EEG) functional connectivity networks through calculating the subspace distance between consecutive time steps. This is followed by a tensormatrix projection based approach for summarizing multiple networks within a time interval. 
Tensor based summarization approaches do not necessarily result in sparse network and succinct states. Moreover, subspace based summarizations tend to capture the background brain activity more than the low energy sparse activations. For this reason, we propose utilizing the sparse common component and innovations (SCCI) model which simultaneously finds the sparse common component of multiple signals. However, as the number of signals in the model increases, this becomes computationally prohibitive. In this thesis, a hierarchical algorithm to recover the common component in the SCCI model is proposed for large number of signals. The hierarchical recovery of SCCI model solves the time and memory limitations at the expense of a slight decrease in the accuracy. This hierarchical model is used to separate the common and innovation components of functional connectivity networks across time. The innovation components are tracked over time to detect the change points, and the common component of the detected network states are used to obtain the network summarization. SCCI recovery algorithm finds the sparse representation of the common and innovation components of signals with respect to predetermined dictionaries. However, input signals are not always wellrepresented by predetermined dictionaries. In this thesis, a structured dictionary learning algorithm for SCCI model is developed. The proposed method is applied to EEG data collected during a study of error monitoring where two different types of brain responses are elicited in response to the stimulus. The learned dictionaries can discriminate between the response types and extract the errorrelated potentials (ERP) corresponding to the two responses."Pages iiiii.
Show less
 Title
 Harnessing lowpass filter defects for improving wireless link performance : measurements and applications
 Creator
 Renani, Alireza Ameli
 Date
 2018
 Collection
 Electronic Theses & Dissertations
 Description

"The design tradeoffs of transceiver hardware are crucial to the performance of wireless systems. The effect of such tradeoffs on individual analog and digital components are vigorously studied, but their systemic impacts beyond componentlevel remain largely unexplored. In this dissertation, we present an indepth study to characterize the surprisingly notable systemic impacts of lowpass filter design, which is a small yet indispensable component used for shaping spectrum and rejecting...
Show more"The design tradeoffs of transceiver hardware are crucial to the performance of wireless systems. The effect of such tradeoffs on individual analog and digital components are vigorously studied, but their systemic impacts beyond componentlevel remain largely unexplored. In this dissertation, we present an indepth study to characterize the surprisingly notable systemic impacts of lowpass filter design, which is a small yet indispensable component used for shaping spectrum and rejecting interference. Using a bottomup approach, we examine how signallevel distortions caused by the tradeoffs of lowpass filter design propagate to the upperlayers of wireless communication, reshaping bit error patterns and degrading link performance of today's 802.11 systems. Moreover, we propose a novel unequal error protection algorithm that harnesses lowpass filter defects for improving wireless LAN throughput, particularly to be used in forward error correction, channel coding, and applications such as video streaming. Lastly, we conduct experiments to evaluate the unequal error protection algorithm in video streaming, and we present substantial enhancements of video quality in mobile environments."Page ii.
Show less
 Title
 Smartphonebased sensing systems for dataintensive applications
 Creator
 Moazzami, MohammadMahdi
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

"Supported by advanced sensing capabilities, increasing computational resources and the advances in Artificial Intelligence, smartphones have become our virtual companions in our daily life. An average modern smartphone is capable of handling a wide range of tasks including navigation, advanced image processing, speech processing, cross app data processing and etc. The key facet that is common in all of these applications is the data intensive computation. In this dissertation we have taken...
Show more"Supported by advanced sensing capabilities, increasing computational resources and the advances in Artificial Intelligence, smartphones have become our virtual companions in our daily life. An average modern smartphone is capable of handling a wide range of tasks including navigation, advanced image processing, speech processing, cross app data processing and etc. The key facet that is common in all of these applications is the data intensive computation. In this dissertation we have taken steps towards the realization of the vision that makes the smartphone truly a platform for data intensive computations by proposing frameworks, applications and algorithmic solutions. We followed a datadriven approach to the system design. To this end, several challenges must be addressed before smartphones can be used as a system platform for dataintensive applications. The major challenge addressed in this dissertation include high power consumption, high computation cost in advance machine learning algorithms, lack of realtime functionalities, lack of embedded programming support, heterogeneity in the apps, communication interfaces and lack of customized data processing libraries. The contribution of this dissertation can be summarized as follows. We present the design, implementation and evaluation of the ORBIT framework, which represents the first system that combines the design requirements of a machine learning system and sensing system together at the same time. We ported for the first time offtheshelf machine learning algorithms for realtime sensor data processing to smartphone devices. We highlighted how machine learning on smartphones comes with severe costs that need to be mitigated in order to make smartphones capable of realtime dataintensive processing. From application perspective we present SPOT. SPOT aims to address some of the challenges discovered in mobilebased smarthome systems. 
These challenges prevent us from achieving the promises of smarthomes due to heterogeneity in different aspects of smart devices and the underlining systems. We face the following major heterogeneities in building smarthomes:: (i) Diverse appliance control apps (ii) Communication interface, (iii) Programming abstraction. SPOT makes the heterogeneous characteristics of smart appliances transparent, and by that it minimizes the burden of home automation application developers and the efforts of users who would otherwise have to deal with appliancespecific apps and control interfaces. From algorithmic perspective we introduce two systems in the smartphonebased deep learning area: DeepCrowdLabel and DeepPartition. Deep neural models are both computationally and memory intensive, making them difficult to deploy on mobile applications with limited hardware resources. On the other hand, they are the most advanced machine learning algorithms suitable for realtime sensing applications used in the wild. DeepPartition is an optimizationbased partitioning metaalgorithm featuring a tiered architecture for smartphone and the backend cloud. DeepPartition provides a profilebased model partitioning allowing it to intelligently execute the Deep Learning algorithms among the tiers to minimize the smartphone power consumption by minimizing the deep models feedforward latency. DeepCrowdLabel is prototyped for semantically labeling user's location. It is a crowdassisted algorithm that uses crowdsourcing in both training and inference time. It builds deep convolutional neural models using crowdsensed images to detect the context (label) of indoor locations. It features domain adaptation and model extension via transfer learning to efficiently build deep models for image labeling. 
The work presented in this dissertation covers three major facets of datadriven and computeintensive smartphonebased systems: platforms, applications and algorithms; and helps to spurs new areas of research and opens up new directions in mobile computing research."Pages iiiii.
Show less
 Title
 Higherorder data reduction through clustering, subspace analysis and compression for applications in functional connectivity brain networks
 Creator
 Ozdemir, Alp
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

"With the recent advances in information technology, collection and storage of higherorder datasets such as multidimensional data across multiple modalities or variables have become much easier and cheaper than ever before. Tensors, also known as multiway arrays, provide natural representations for higherorder datasets and provide a way to analyze them by preserving the multilinear relations in these large datasets. These higherorder datasets usually contain large amount of redundant...
Show more"With the recent advances in information technology, collection and storage of higherorder datasets such as multidimensional data across multiple modalities or variables have become much easier and cheaper than ever before. Tensors, also known as multiway arrays, provide natural representations for higherorder datasets and provide a way to analyze them by preserving the multilinear relations in these large datasets. These higherorder datasets usually contain large amount of redundant information and summarizing them in a succinct manner is essential for better inference. However, existing data reduction approaches are limited to vectortype data and cannot be applied directly to tensors without vectorizing. Developing more advanced approaches to analyze tensors effectively without corrupting their intrinsic structure is an important challenge facing Big Data applications. This thesis addresses the issue of data reduction for tensors with a particular focus on providing a better understanding of dynamic functional connectivity networks (dFCNs) of the brain. Functional connectivity describes the relationship between spatially separated neuronal groups and analysis of dFCNs plays a key role for interpreting complex brain dynamics in different cognitive and emotional processes. Recently, graph theoretic methods have been used to characterize the brain functionality where bivariate relationships between neuronal populations are represented as graphs or networks. In this thesis, the changes in these networks across time and subjects will be studied through tensor representations. In Chapter 2, we address a multigraph clustering problem which can be thought as a tensor partitioning problem. We introduce a hierarchical consensus spectral clustering approach to identify the community structure underlying the functional connectivity brain networks across subjects. New informationtheoretic criteria are introduced for selecting the optimal community structure. 
Effectiveness of the proposed algorithms are evaluated through a set of simulations comparing with the existing methods as well as on FCNs across subjects. In Chapter 3, we address the online tensor data reduction problem through a subspace tracking perspective. We introduce a robust lowrank+sparse structure learning algorithm for tensors to separate the lowrank community structure of connectivity networks from sparse outliers. The proposed framework is used to both identify change points, where the lowrank community structure changes significantly, and summarize this community structure within each time interval. Finally, in Chapter 4, we introduce a new multiscale tensor decomposition technique to efficiently encode nonlinearities due to rotation or translation in tensor type data. In particular, we develop a multiscale higherorder singular value decomposition (MSHoSVD) approach where a given tensor is first permuted and then partitioned into several subtensors each of which can be represented as a lowrank tensor increasing the efficiency of the representation. We derive a theoretical error bound for the proposed approach as well as provide analysis of memory cost and computational complexity. Performance of the proposed approach is evaluated on both data reduction and classification of various higherorder datasets."Pages iiiii.
Show less
 Title
 Adaptive independent component analysis : theoretical formulations and application to CDMA communication system with electronics implementation
 Creator
 Albataineh, Zaid
 Date
 2014
 Collection
 Electronic Theses & Dissertations
 Description

Blind Source Separation (BSS) is a vital unsupervised stochastic area that seeks to estimate the underlying source signals from their mixtures with minimal assumptions about the source signals and/or the mixing environment. BSS has been an active area of research and in recent years has been applied to numerous domains including biomedical engineering, image processing, wireless communications, speech enhancement, remote sensing, etc. Most recently, Independent Component Analysis (ICA) has...
Show moreBlind Source Separation (BSS) is a vital unsupervised stochastic area that seeks to estimate the underlying source signals from their mixtures with minimal assumptions about the source signals and/or the mixing environment. BSS has been an active area of research and in recent years has been applied to numerous domains including biomedical engineering, image processing, wireless communications, speech enhancement, remote sensing, etc. Most recently, Independent Component Analysis (ICA) has become a vital analytical approach in BSS. In spite of active research in BSS, however, many foundational issues still remain in regards to convergence speed, performance quality and robustness in realistic or adverse environments. Furthermore, some of the developed BSS methods are computationally expensive, sensitive to additive and background noise, and not suitable for a real4time or real world implementation. In this thesis, we first formulate new effective ICA4based measures and their corresponding robust adaptive algorithms for the BSS in dynamic "convolutive mixture" environments. We demonstrate their superior performance to present competing algorithms. Then we tailor their application within wireless (CDMA) communication systems and Acoustic Separation Systems. We finally explore a system realization of one of the developed algorithms among ASIC or FPGA platforms in terms of real time speed, effectiveness, cost, and economics of scale. Firstly, we propose a new class of divergence measures for Independent Component Analysis (ICA) for estimating sources from mixtures. The Convex Cauchy4Schwarz Divergence (CCS4DIV) is formed by integrating convex functions into the Cauchy4Schwarz inequality. The new measure is symmetric and convex with respect to the joint probability, where the degree of convexity can be tuned by a (convexity) parameter. 
A non4parametric (ICA) algorithm generated from the proposed divergence is developed exploiting convexity parameters and employing the Parzen window4based distribution estimates. The new contrast function results in effective parametric and non4parametric ICA4based computational algorithms. Moreover, two pairwise iterative schemes are proposed to tackle the high dimensionality of sources. Secondly, a new blind detection algorithm, based on fourth order cumulant matrices, is presented and applied to the multi4user symbol estimation problem in Direct Sequence Code Division Multiple Access (DS4CDMA) systems. In addition, we propose three new blind receiver schemes, which are based on the state space structures. These so4called blind state4space receivers (BSSR) do not require knowledge of the propagation parameters or spreading code sequences of the users but relies on the statistical independence assumption among the source signals. Lastly, system realization of one of the developed algorithms has been explored among ASIC or FPGA platforms in terms of cost, effectiveness, and economics of scale. Based on our findings of current stat4of4the4art electronics, programmable FPGA designs are deemed to be the most effective technology to be used for ICA hardware implementation at this time.In this thesis, we first formulate new effective ICAbased measures and their corresponding robust adaptive algorithms for the BSS in dynamic "convolutive mixture" environments. We demonstrate their superior performance to present competing algorithms. Then we tailor their application within wireless (CDMA) communication systems and Acoustic Separation Systems. 
We finally explore a system realization of one of the developed algorithms among ASIC or FPGA platforms in terms of real time speed, effectiveness, cost, and economics of scale.We firstly investigate several measures which are more suitable for extracting different source types from different mixing environments in the learning system. ICA for instantaneous mixtures has been studied here as an introduction to the more realistic convolutive mixture environments. Convolutive mixtures have been investigated in the time/frequency domains and we demonstrate that our approaches succeed in resolving the standing problem of scaling and permutation ambiguities in present research. We propose a new class of divergence measures for Independent Component Analysis (ICA) for estimating sources from mixtures. The Convex CauchySchwarz Divergence (CCSDIV) is formed by integrating convex functions into the CauchySchwarz inequality. The new measure is symmetric and convex with respect to the joint probability, where the degree of convexity can be tuned by a (convexity) parameter. A nonparametric (ICA) algorithm generated from the proposed divergence is developed exploiting convexity parameters and employing the Parzen windowbased distribution estimates. The new contrast function results in effective parametric and nonparametric ICAbased computational algorithms. Moreover, two pairwise iterative schemes are proposed to tackle the high dimensionality of sources. These wo pairwise nonparametric ICA algorithms are based on the new highperformance Convex CauchySchwarz Divergence (CCSDIV). These two schemes enable fast and efficient demixing of sources in realworld applications where the dimensionality of the sources is higher than two.Secondly, the more challenging problem in communication signal processing is to estimate the source signals and their channels in the presence of other cochannel signals and noise without the use of a training set. 
Blind techniques are promising to integrate and optimize the wireless communication designs i.e. equalizers/ filters/ combiners through its potential in suppressing the intersymbol interference (ISI), adjacent channel interference, cochannel and the multi access interference MAI. Therefore, a new blind detection algorithm, based on fourth order cumulant matrices, is presented and applied to the multiuser symbol estimation problem in Direct Sequence Code Division Multiple Access (DSCDMA) systems. The blind detection is to estimate multiple symbol sequences in the downlink of a DSCDMA communication system using only the received wireless data and without any knowledge of the user spreading codes. The proposed algorithm takes advantage of higher cumulant matrix properties to reduce the computational load and enhance performance. In addition, we address the problem of blind multiuser equalization in the wideband CDMA system, in the noisy multipath propagation environment. Herein, we propose three new blind receiver schemes, which are based on the state space structures. These socalled blind statespace receivers (BSSR) do not require knowledge of the propagation parameters or spreading code sequences of the users but relies on the statistical independence assumption among the source signals. We then develop and derive three updatelaws in order to enhance the performance of the blind detector. Also, we upgrade three semiblind adaptive detectors based on the incorporation of the RAKE receiver and the stochastic gradient algorithms which are used in several blind adaptive signal processing algorithms, namely FastICA, RobustICA, and principle component analysis PCA. 
Through simulation evidence, we verify the significant bit error rate (BER) and computational speed improvements achieved by these algorithms in comparison to other leading algorithms.Lastly, system realization of one of the developed algorithms has been explored among ASIC or FPGA platforms in terms of cost, effectiveness, and economics of scale. Based on our findings of current statoftheart electronics, programmable FPGA designs are deemed to be the most effective technology to be used for ICA hardware implementation at this time.
Show less
 Title
 Unconstrained 3D face reconstruction from photo collections
 Creator
 Roth, Joseph (Software engineer)
 Date
 2016
 Collection
 Electronic Theses & Dissertations
 Description

This thesis presents a novel approach for 3D face reconstruction from unconstrained photo collections. An unconstrained photo collection is a set of face images captured under an unknown and diverse variation of poses, expressions, and illuminations. The output of the proposed algorithm is a true 3D face surface model represented as a watertight triangulated surface with albedo data colloquially referred to as texture information. Reconstructing a 3D understanding of a face based on 2D input...
Show moreThis thesis presents a novel approach for 3D face reconstruction from unconstrained photo collections. An unconstrained photo collection is a set of face images captured under an unknown and diverse variation of poses, expressions, and illuminations. The output of the proposed algorithm is a true 3D face surface model represented as a watertight triangulated surface with albedo data colloquially referred to as texture information. Reconstructing a 3D understanding of a face based on 2D input is a longstanding computer vision problem. Traditional photometric stereobased reconstruction techniques work on aligned 2D images and produce a 2.5D depth map reconstruction. We extend face reconstruction to work with a true 3D model, allowing us to enjoy the benefits of using images from all poses, up to and including profiles. To use a 3D model, we propose a novel normal fieldbased Laplace editing technique which allows us to deform a triangulated mesh to match the observed surface normals. Unlike prior work that require large photo collections, we formulate an approach to adapt to photo collections with few images of potentially poor quality. We achieve this through incorporating prior knowledge about face shape by fitting a 3D Morphable Model to form a personalized template before using a novel analysisbysynthesis photometric stereo formulation to complete the fine face details. A structural similaritybased quality measure allows evaluation in the absence of ground truth 3D scans. Superior largescale experimental results are reported on Internet, synthetic, and personal photo collections.
 Title
 Stochastic modeling of routing protocols for cognitive radio networks
 Creator
 Soltani, Soroor
 Date
 2013
 Collection
 Electronic Theses & Dissertations
 Description

Cognitive radios are expected to revolutionize wireless networking because of their ability to sense, manage, and share the mobile available spectrum. Efficient utilization of the available spectrum could be significantly improved by incorporating different cognitive radio based networks. Challenges are involved in utilizing cognitive radios in a network, most of which arise from the dynamic nature of spectrum availability that is not present in traditional wireless networks. The set of available spectrum blocks (channels) changes randomly with the arrival and departure of the users licensed to a specific spectrum band, known as primary users. If a band is used by a primary user, the cognitive radio alters its transmission power level or modulation scheme to change its transmission range and switches to another channel.

In traditional wireless networks, a link is stable if it is less prone to interference. In cognitive radio networks, however, a link that is interference free might break due to the arrival of its primary user. Therefore, link stability forms a stochastic process with OFF and ON states; ON, if the primary user is absent. Evidently, traditional network protocols fail in this environment, and new protocols are needed in each layer to cope with the stochastic dynamics of cognitive radio networks.

In this dissertation we present a comprehensive stochastic framework and a decision theory based model for the problem of routing packets from a source to a destination in a cognitive radio network. We begin by introducing two probability distributions, called ArgMax and ArgMin, for probabilistic channel selection mechanisms, routing, and MAC protocols. The ArgMax probability distribution locates the most stable link from a set of available links; conversely, ArgMin identifies the least stable link. Together, ArgMax and ArgMin provide valuable information on the diversity of the stability of available links in a spectrum band.

Next, considering the stochastic arrival of primary users, we model the transition of packets from one hop to the next by a semi-Markov process and develop a Primary Spread Aware Routing Protocol (PSARP) that learns the dynamics of the environment and adapts its routing decisions accordingly. Further, we use a decision theory framework: a utility function is designed to capture the effect of spectrum measurement, fluctuation of bandwidth availability, and path quality. A node cognitively chooses the best candidate among its neighbors by utilizing a decision tree. Each branch of the tree is quantified by the utility function and a posterior probability distribution, constructed using the ArgMax probability distribution, which predicts the suitability of available neighbors. In DTCR (Decision Tree Cognitive Routing), nodes learn their operational environment and adapt their decision making accordingly. We extend the decision tree modeling to translate video routing in a dynamic cognitive radio network into a decision theory problem; terminal analysis backward induction is then used to produce a routing scheme that improves the peak signal-to-noise ratio of the received video.

We show throughout this dissertation that by acknowledging the stochastic nature of the cognitive radio network environment and constructing strategies with the statistical and mathematical tools that deal with such uncertainties, the utilization of these networks greatly improves.
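The idea of an ArgMax/ArgMin distribution over link stability can be illustrated with a Monte Carlo sketch. The exponential idle-time model and the mean values below are assumptions for illustration, not the dissertation's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: link i's residual primary-user-free (ON) time is
# exponential with mean mu[i]. The ArgMax distribution is the probability
# that link i offers the longest ON time, i.e. is the most stable link;
# ArgMin is the probability that it is the least stable.
mu = np.array([2.0, 5.0, 1.0])          # assumed mean idle times per link
samples = rng.exponential(mu, size=(200_000, 3))
argmax_dist = np.bincount(samples.argmax(axis=1), minlength=3) / len(samples)
argmin_dist = np.bincount(samples.argmin(axis=1), minlength=3) / len(samples)
```

A routing or MAC layer could then bias channel selection toward links with high ArgMax probability rather than committing deterministically to one link.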
 Title
 Safe Control Design for Uncertain Systems
 Creator
 Marvi, Zahra
 Date
 2021
 Collection
 Electronic Theses & Dissertations
 Description

This dissertation investigates the problem of safe control design for systems under model and environmental uncertainty. Reinforcement learning (RL) provides an interactive learning framework in which the optimal controller is sequentially derived based on instantaneous reward. Although powerful, safety considerations are a barrier to the wide deployment of RL algorithms in practice. To overcome this problem, we propose an iterative safe off-policy RL algorithm. The cost function that encodes the designer's objectives is augmented with a control barrier function (CBF) to ensure safety and optimality. The proposed formulation provides look-ahead and proactive safety planning, in which safety is planned and optimized along with performance to minimize intervention with the optimal controller. Extensive safety and stability analysis is provided, and the proposed method is implemented using an off-policy algorithm without requiring complete knowledge of the system dynamics.

This line of research is then extended to guarantee safety and stability even during the data collection and exploration phases, in which random noisy inputs are applied to the system. Satisfying the safety of actions when little is known about the system dynamics is a daunting challenge. We present a novel RL scheme that ensures the safety and stability of linear systems during both the exploration and exploitation phases. This is obtained through concurrent model learning and control, in which an efficient learning scheme is employed to prescribe the learning behavior. This characteristic is then employed to apply only safe and stabilizing controllers to the system. First, the prescribed errors are employed in a novel adaptive robustified control barrier function (AR-CBF), which guarantees that the states of the system remain in the safe set even when learning is incomplete. The noisy input in the exploratory data-collection phase and the optimal controller in the exploitation phase are therefore minimally altered such that the AR-CBF criterion is satisfied and safety is guaranteed in both phases. It is shown that under the proposed prescribed RL framework, the model-learning error is a vanishing perturbation to the original system; a stability guarantee is therefore provided even during exploration, when noisy random inputs are applied to the system.

Learning-enabled barrier-certified safe controllers for systems that operate in a shared and uncertain environment are then presented. A safety-aware loss function is defined and minimized to learn the uncertain and unknown behavior of external agents that affect the safety of the system. The loss function is defined based on the safe-set error, instead of the system-model error, and is minimized over both current samples and past samples stored in memory to assure a fast and generalizable learning algorithm for approximating the safe set. The proposed model learning and CBF are then integrated to form a learning-enabled zeroing CBF (L-ZCBF), which employs the approximated trajectory information of the external agents provided by the learned model but shrinks the safety boundary in case of an imminent safety violation using instantaneous sensory observations. It is shown that the proposed L-ZCBF assures safety during learning, even in the face of inaccurate or simplified approximations of external agents, which is crucial in highly interactive environments.

Finally, the cooperative capability of agents in a multi-agent environment is investigated for the sake of safety guarantees. CBFs and information-gap theory are integrated to obtain robust safe controllers for multi-agent systems with different levels of measurement accuracy. A cooperative framework for the construction of CBFs for every pair of agents is employed to maximize the horizon of uncertainty under which the safety of the overall system is satisfied. Information-gap theory is leveraged to determine the contribution and share of each agent in the construction of the CBFs, resulting in the highest possible robustness against measurement uncertainty. By employing the proposed approach in constructing CBFs, a larger horizon of uncertainty can be safely tolerated, and even the failure of one agent in gathering accurate local data can be compensated for by cooperation between agents. The effectiveness of the proposed methods is extensively examined in simulation results.
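The core mechanism of minimally altering a nominal controller so that a CBF condition holds can be shown on a toy system. This is a sketch of the standard zeroing-CBF safety filter for a scalar single integrator, under assumed dynamics and parameters; it is not the dissertation's AR-CBF or L-ZCBF construction.

```python
def cbf_filter(x, u_nom, x_max=1.0, alpha=2.0):
    """Minimally modify u_nom so the CBF condition dh/dt >= -alpha * h holds.

    Toy single integrator x' = u with safe set h(x) = x_max - x >= 0.
    Since dh/dt = -u, the condition reduces to u <= alpha * h(x), so the
    safe input closest to u_nom is simply the clipped value.
    """
    h = x_max - x
    return min(u_nom, alpha * h)

# Simulate: an aggressive nominal controller pushes toward the boundary,
# but the filter keeps the state inside the safe set at all times.
x, dt = 0.0, 0.01
for _ in range(1000):
    x += dt * cbf_filter(x, u_nom=5.0)
```

Note the filter only intervenes near the boundary: while `u_nom <= alpha * h(x)` the nominal controller passes through unchanged, which is the "minimal intervention" property the abstract describes.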
 Title
 TENSOR LEARNING WITH STRUCTURE, GEOMETRY AND MULTIMODALITY
 Creator
 Sofuoglu, Seyyid Emre
 Date
 2022
 Collection
 Electronic Theses & Dissertations
 Description

With the advances in sensing and data acquisition technology, it is now possible to collect data from different modalities and sources simultaneously. Most of these data are multidimensional in nature and can be represented by multiway arrays known as tensors. For instance, a color image is a third-order tensor defined by two indices for the spatial variables and one index for the color mode. Other examples include color video, medical imaging such as EEG and fMRI, and spatiotemporal data encountered in urban traffic monitoring.

In the past two decades, tensors have become ubiquitous in signal processing, statistics, and computer science. Traditional unsupervised and supervised learning methods developed for one-dimensional signals do not translate well to higher-order data structures, as they become computationally prohibitive with increasing dimensionality. Vectorizing high-dimensional inputs creates problems in nearly all machine learning tasks due to exponentially increasing dimensionality, distortion of the data structure, and the difficulty of obtaining a sufficiently large training sample size.

In this thesis, we develop tensor-based approaches to various machine learning tasks. Existing tensor-based unsupervised and supervised learning algorithms extend many well-known algorithms, e.g., 2D component analysis, support vector machines, and linear discriminant analysis, with better performance and lower computational and memory costs. Most of these methods rely on the Tucker decomposition, which has exponential storage complexity; CANDECOMP/PARAFAC (CP) based methods, which might not have a solution; or Tensor Train (TT) based solutions, which suffer from exponentially increasing ranks. Many tensor-based methods have quadratic (with respect to the size of the data) or higher computational complexity, and similarly high memory complexity. Moreover, existing tensor-based methods are not always designed with the particular structure of the data in mind. Many of them use purely algebraic measures as their objective, which might not capture the local relations within the data. Thus, there is a need to develop new models with better computational and memory efficiency, designed with the particular structure of the data and problem in mind. Finally, as tensors represent data with more faithfulness to the original structure than vectorization, they also allow coupling of heterogeneous data sources where the underlying physical relationship is known; still, most current work on coupled tensor decompositions does not explore supervised problems.

To address the computational and storage complexity of tensor-based machine learning, in Chapter 2 we propose a new tensor train decomposition structure, a hybrid between the Tucker and Tensor Train decompositions. The proposed structure is used to implement Tensor Train based supervised and unsupervised learning frameworks: linear discriminant analysis (LDA) and graph regularized subspace learning. The algorithm is designed to solve extremal eigenvalue-eigenvector pair computation problems, and can be generalized to many other methods. The supervised framework, Tensor Train Discriminant Analysis (TTDA), is evaluated on a classification task at varying storage complexities with respect to classification accuracy and training time on four different datasets. The unsupervised approach, Graph Regularized TT, is evaluated on a clustering task with respect to clustering quality and training time at various storage complexities. Both frameworks are compared to discriminant analysis algorithms with similar objectives based on the Tucker and TT decompositions.

In Chapter 3, we present an unsupervised anomaly detection algorithm for spatiotemporal tensor data. The algorithm models anomaly detection as a low-rank plus sparse tensor decomposition problem, where normal activity is assumed to be low-rank and anomalies are assumed to be sparse and temporally continuous. We present an extension of this algorithm that adds a graph regularization term to the objective function to preserve the underlying geometry of the original data. Finally, we propose a computationally efficient implementation of this framework by approximating the nuclear norm using graph total variation minimization. The proposed approach is evaluated on simulated data with varying levels of anomaly strength, anomaly length, and number of missing entries in the observed tensor, as well as on urban traffic data.

In Chapter 4, we propose a geometric tensor learning framework that uses product graph structures for the tensor completion problem. Instead of purely algebraic measures such as rank, we use graph smoothness constraints that utilize the geometric or topological relations within the data. We prove the equivalence of a Cartesian graph structure to a TT-based graph structure under some conditions, and show empirically that the relaxations introduced by these conditions do not deteriorate recovery performance. We also outline a fully geometric learning method on product graphs for data completion.

In Chapter 5, we introduce a supervised learning method for heterogeneous data sources such as simultaneous EEG and fMRI. The proposed two-stage method first extracts features taking the coupling across modalities into account and then introduces kernelized support tensor machines for classification. We illustrate the advantages of the proposed method on simulated and real classification tasks with a small number of high-dimensional training samples.
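The Tucker machinery these methods build on rests on mode-n unfolding and per-mode factor extraction. The following is a minimal truncated-HOSVD sketch of that primitive (a generic textbook construction with assumed sizes and ranks, not the thesis's hybrid TT structure).

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front, flatten the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))   # assumed third-order data tensor

# Truncated HOSVD: the leading left singular vectors of each mode-n
# unfolding give the Tucker factors; the core is T contracted with
# the transposed factors along every mode.
ranks = (2, 2, 2)
factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
           for m, r in enumerate(ranks)]
core = T
for m, U in enumerate(factors):
    core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
```

Storing the core plus factors needs prod(ranks) + sum(dim_i * rank_i) numbers, which is the exponential core-storage cost (in the number of modes) that motivates the hybrid structure of Chapter 2.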
 Title
 ASSESSMENT OF CROSSFREQUENCY PHASEAMPLITUDE COUPLING IN NEURONAL OSCILLATIONS
 Creator
 Munia, Tamanna Tabassum Khan
 Date
 2021
 Collection
 Electronic Theses & Dissertations
 Description

Oscillatory activity in the brain has been associated with a wide variety of cognitive processes including decision making, feedback processing, and working memory control. The high temporal resolution provided by electroencephalography (EEG) enables the study of variations in oscillatory power and coupling across time. Various forms of neural synchrony across frequency bands have been suggested as the mechanism underlying neural binding. Recently, a considerable amount of work has focused on phase-amplitude coupling (PAC), a form of cross-frequency coupling in which the amplitude of a high-frequency signal is modulated by the phase of low-frequency oscillations.

The existing methods for assessing PAC have certain limitations that can influence the final PAC estimates and the subsequent neuroscientific findings. These limitations include low frequency resolution, a narrowband assumption, and an inherent requirement of bandpass filtering. These methods are also limited to quantifying univariate PAC and cannot capture inter-areal cross-frequency coupling between different brain regions. Given the availability of multichannel recordings, a multivariate analysis of phase-amplitude coupling is needed to accurately quantify the coupling across multiple frequencies and brain regions. Moreover, the existing PAC measures are usually stationary in nature, focusing on phase-amplitude modulations within a particular time window or over arbitrary sliding short time windows. Therefore, there is a need for computationally efficient measures that can quantify PAC with high frequency resolution, track the variation of PAC with time in both bivariate and multivariate settings, and provide better insight into the spatially distributed dynamic brain networks across different frequency bands.

In this thesis, we introduce a PAC computation technique that aims to overcome some of these drawbacks and extend it to multichannel settings for quantifying dynamic cross-frequency coupling in the brain. The main contributions of the thesis are threefold. First, we present a novel time-frequency based PAC (tf-PAC) measure based on a high-resolution complex time-frequency distribution known as the Reduced Interference Distribution (RID)-Rihaczek. This tf-PAC measure overcomes the drawbacks associated with filtering by extracting instantaneous phase and amplitude components directly from the time-frequency distribution, and thus provides high-resolution PAC estimates. Second, we extend this measure to multichannel settings to quantify inter-areal PAC across multiple frequency bands and brain regions. We propose a tensor-based representation of multichannel PAC based on Higher Order Robust PCA (HoRPCA). The proposed method can identify the significantly coupled brain regions, along with the frequency bands involved in the observed couplings, while accurately discarding non-significant or spurious couplings. Finally, we introduce a matching pursuit based dynamic PAC (MP-dPAC) measure that computes PAC from the time- and frequency-localized atoms that best describe the signal, and thus captures the temporal variation of PAC using a data-driven approach. We evaluate the performance of the proposed methods on both synthesized and real EEG data collected during a cognitive control-related error processing study. Based on our results, we posit that the proposed multivariate and dynamic PAC measures provide better insight into the spatial, spectral, and temporal dynamics of cross-frequency phase-amplitude coupling in the brain.
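For contrast with the tf-PAC approach, the conventional filter-and-Hilbert PAC pipeline the thesis improves upon can be sketched with the mean-vector-length measure on a synthetic coupled signal. The frequencies and signal model below are assumptions for illustration.

```python
import numpy as np

def analytic(x):
    """Analytic signal via the frequency-domain Hilbert construction (even length)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:len(x) // 2] = 2.0
    h[len(x) // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 500.0
t = np.arange(0, 10, 1 / fs)
phase_lo = 2 * np.pi * 6 * t                                   # 6 Hz driver
coupled = (1 + np.cos(phase_lo)) * np.sin(2 * np.pi * 80 * t)  # 80 Hz, AM-coupled

# Mean-vector-length PAC: |mean(A_high * exp(i * phi_low))| is large when the
# high-frequency amplitude depends systematically on the low-frequency phase.
phi = np.angle(analytic(np.cos(phase_lo)))
amp = np.abs(analytic(coupled))
mvl_coupled = np.abs(np.mean(amp * np.exp(1j * phi)))

uncoupled = np.sin(2 * np.pi * 80 * t)                         # constant amplitude
mvl_flat = np.abs(np.mean(np.abs(analytic(uncoupled)) * np.exp(1j * phi)))
```

The coupled signal yields a mean vector length near 0.5 while the uncoupled one is near zero; the bandpass filtering implicit in this pipeline is exactly the step the tf-PAC measure avoids.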
 Title
 Reducing the number of ultrasound array elements with the matrix pencil method
 Creator
 Sales, Kirk L.
 Date
 2012
 Collection
 Electronic Theses & Dissertations
 Description

Phased arrays are applied in diverse areas including biomedical imaging and therapy, nondestructive testing, radar, and sonar. In this thesis, the matrix pencil method is employed to reduce the number of elements in a linear ultrasound phased array. The non-iterative, linear method begins with a specified pressure beam pattern, reduces the dimensionality of the problem, then calculates the element locations and apodization of a reduced array. Computer simulations demonstrate a close match between the initial and reduced array beam patterns for four different linear arrays. The number of elements in a broadside-steered linear array is shown to decrease by approximately 50%, with the reduced array beam pattern closely approximating the initial array beam pattern in the far field. While the method returns slightly tapered spacing between elements for the arrays considered, replacing the tapered spacing with a suitably selected uniform spacing produces very little change in the main beam and low-angle side lobes.
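The numerical core of the matrix pencil method is recovering the poles of a sum of exponentials from uniform samples; in array reduction those poles encode the reduced element locations. Below is a minimal one-dimensional sketch of that pole-estimation step, with assumed test poles and amplitudes (not the thesis's array data).

```python
import numpy as np

def matrix_pencil_poles(y, M, p=None):
    """Estimate M exponential poles z_k from samples y[n] = sum_k a_k * z_k**n."""
    p = p if p is not None else len(y) // 2                     # pencil parameter
    H = np.array([y[i:i + p + 1] for i in range(len(y) - p)])   # Hankel matrix
    Y1, Y2 = H[:, :-1], H[:, 1:]                                # shifted pencil pair
    # Rank-M truncation via SVD, then the nonzero generalized eigenvalues
    # of the pencil (Y2, Y1) are the poles z_k.
    U, s, Vh = np.linalg.svd(Y1, full_matrices=False)
    A = (U[:, :M].conj().T @ Y2 @ Vh[:M].conj().T) / s[:M, None]
    return np.linalg.eigvals(A)

# Noiseless demo: two assumed damped-exponential poles, recovered exactly.
n = np.arange(40)
z_true = np.array([0.9 * np.exp(0.4j), 0.7 * np.exp(-1.1j)])
y = 2.0 * z_true[0] ** n + z_true[1] ** n
poles = matrix_pencil_poles(y, M=2)
```

The SVD truncation is what gives the method its noise robustness and, in the array application, its dimensionality reduction: only the M dominant singular directions are kept.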
 Title
 Network reachability : quantification, verification, troubleshooting, and optimization
 Creator
 Khakpour, Amir Reza
 Date
 2012
 Collection
 Electronic Theses & Dissertations
 Description

Quantifying, verifying, troubleshooting, and optimizing network reachability is essential for network management and network security monitoring, as well as for various aspects of network auditing, maintenance, and design. Although attempts to model network reachability have been made, feasible solutions for computing, maintaining, and optimally designing network reachability have remained unknown. Network reachability control is critical because, on one hand, reachability errors can cause network security breaches or service outages, leading to millions of dollars of revenue loss for an enterprise network; on the other hand, network operators lack tools that thoroughly examine network access control configurations and audit them to avoid such errors. Finding reachability errors is by no means easy: the access control rules by which network reachability is restricted are often very complex, and troubleshooting them manually is extremely difficult. Hence, a tool that finds reachability errors and fixes them automatically can be very useful. Furthermore, flawed network reachability design and deployment can degrade network performance significantly. Thus, it is crucial to have a tool that designs network configurations so that they have the least performance impact on the enterprise network.

In this dissertation, we first present a network reachability model that considers connectionless and connection-oriented transport protocols, stateless and stateful routers/firewalls, static and dynamic NAT, PAT, IP tunneling, etc. We then propose a suite of algorithms for quantifying reachability based on network configurations (mainly access control lists (ACLs)), as well as solutions for querying network reachability. We further extend our algorithms and data structures to detect reachability errors, pinpoint faulty access control lists, and fix them automatically and efficiently. Finally, we propose algorithms to place rules on network devices optimally so that they satisfy the network's central access policies. To this end, we define correctness and performance criteria for rule placement and propose cost-based algorithms with parameters adjustable by network operators to place rules such that the correctness and performance criteria are satisfied.

We implemented the algorithms in our network reachability tool, Quarnet, and conducted experiments on a university network. Experimental results show that the offline computation of reachability matrices takes a few hours and the online processing of a reachability query takes 75 milliseconds on average. We also examined our reachability error detection and correction algorithms on a few real-life networks to measure their performance and ensure that Quarnet is efficient enough to be practically useful. The results indicate that we can find reachability errors on the order of minutes and fix them on the order of seconds, depending on the size of the network and the number of ACLs. Finally, we added the rule placement suite of algorithms to Quarnet, which can design an ACL placement based on the network's central policies on the order of tens of minutes for an enterprise network. We compare it with Purdue ACL placement, the state-of-the-art access policy design technique, and explain its pros and cons.
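The primitive underlying all of this analysis is first-match ACL semantics. The toy rule list and fields below are hypothetical, and a per-packet lookup is only an illustration: Quarnet's algorithms compute whole reachability sets over such rules rather than testing packets one at a time.

```python
# First-match ACL semantics: the first rule whose ranges contain the packet
# decides its fate; an implicit deny ends the list.
ACL = [
    # (src_lo, src_hi, dport_lo, dport_hi, action) -- hypothetical fields
    (0x0A000000, 0x0AFFFFFF, 80, 80, "permit"),    # 10.0.0.0/8 to port 80
    (0x0A000000, 0x0AFFFFFF, 0, 65535, "deny"),    # rest of 10/8 denied
    (0x00000000, 0xFFFFFFFF, 0, 65535, "permit"),  # everyone else permitted
]

def evaluate(acl, src, dport):
    """Return the action of the first matching rule (implicit deny if none)."""
    for s_lo, s_hi, p_lo, p_hi, action in acl:
        if s_lo <= src <= s_hi and p_lo <= dport <= p_hi:
            return action
    return "deny"
```

Because later rules are shadowed by earlier ones, the reachable set of an ACL is a union of rule hyperrectangles minus everything captured earlier, which is why reachability computation calls for set-based data structures rather than enumeration.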
 Title
 Hardware algorithms for highspeed packet processing
 Creator
 Norige, Eric
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

The networking industry faces enormous challenges in scaling devices to support the exponential growth of internet traffic as well as the increasing number of features being implemented inside the network. Algorithmic hardware improvements to networking components have largely been neglected due to the ease of leveraging increased clock frequency and compute power and the risks of implementing complex hardware designs. As clock frequency growth slows, algorithmic solutions become important to fill the gap between current-generation capability and next-generation requirements. This work presents algorithmic solutions to networking problems in three domains: deep packet inspection (DPI), firewall (and other) ruleset compression, and non-cryptographic hashing.

The improvements in DPI are two-pronged. The first is in the area of application-level protocol field extraction, which allows security devices to precisely identify packet fields for targeted validity checks. By using counting automata, we achieve precise parsing of non-regular protocols with small, constant per-flow memory requirements, extracting at rates of up to 30 Gbps on real traffic in software while using only 112 bytes of state per flow. The second DPI improvement is on the long-standing regular expression matching problem, where we complete the HFA solution to the DFA state explosion problem with efficient construction algorithms and an optimized memory layout for hardware or software implementation. These methods construct, in seconds, automata too complex for previous methods, while being capable of 29 Gbps throughput with an ASIC implementation.

Firewall ruleset compression enables more firewall entries to be stored in a fixed-capacity pattern matching engine, and can also be used to reorganize a firewall specification for higher-performance software matching. A novel recursive structure called TUF is given to unify the best known solutions to this problem and suggest future avenues of attack. These algorithms, with little tuning, achieve a 13.7% improvement in compression on large, real-life classifiers, and can achieve the same results as existing algorithms while running 20 times faster.

Finally, non-cryptographic hash functions can be used for anything from hash tables that track network flows to packet sampling for traffic characterization. We give a novel approach to generating hardware hash functions in between the extremes of expensive cryptographic hash functions and low-quality linear hash functions. To evaluate these mid-range hash functions properly, we develop new evaluation methods that better distinguish non-cryptographic hash function quality. The hash functions described here achieve low-latency, wide hashing with good avalanche and universality properties at a much lower cost than existing solutions.
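One standard way to quantify hash quality of the kind discussed above is an avalanche test: flipping one input bit should flip each output bit with probability about one half. The sketch below applies this to a splitmix64-style finalizer as a stand-in mixer; it is illustrative only and is not one of the hash functions from this work.

```python
def mix64(x):
    """A splitmix64-style 64-bit finalizer (illustrative stand-in mixer)."""
    mask = (1 << 64) - 1
    x &= mask
    x ^= x >> 30
    x = (x * 0xBF58476D1CE4E5B9) & mask
    x ^= x >> 27
    x = (x * 0x94D049BB133111EB) & mask
    return x ^ (x >> 31)

def avalanche_bias(h, trials=200):
    """Mean output-bit flip probability over single-bit input flips (ideal: 0.5)."""
    flips = 0
    for x in range(trials):
        base = h(x)
        for b in range(64):
            flips += bin(base ^ h(x ^ (1 << b))).count("1")
    return flips / (trials * 64 * 64)
```

A linear hash (e.g. a plain XOR-fold) would score far from 0.5 on this test, which is the sense in which the mid-range functions above sit between linear and cryptographic quality.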
 Title
 Signal Processing Based Distortion Mitigation in Interferometric Radar Angular Velocity Estimation
 Creator
 Klinefelter, Eric
 Date
 2021
 Collection
 Electronic Theses & Dissertations
 Description

Show moreInterferometric angular velocity estimation is a relatively recent radar technique which uses a pair of widely spaced antenna elements and a correlation receiver to directly measure the angular velocity of a target. Traditional radar systems measure range, radial velocity (Doppler), and angle, while angular velocity is typically derived as the timerate change of the angle measurements. The noise associated with the derived angular velocity estimate is statistically correlated with the angle measurements, and thus provides no additional information to traditional state space trackers. Interferometric angular velocity estimation, on the other hand, provides an independent measurement, thus forming a basis in R2 for both position and velocity.While promising results have been presented for single target interferometric angular velocity estimation, there is a known issue which arises when multiple targets are present. The ideal interferometric response with multiple targets would contain only the mixing product between like targets across the antenna responses, yet instead, the mixing product between all targets is generated, resulting in unwanted `crossterms' or intermodulation distortion. To date, various hardware based methods have been presented, which are effective, though they tend to require an increased number of antenna elements, a larger physical system baseline, or signals with wide bandwidths. Presented here are novel methods for signal processing based interferometric angular velocity estimation distortion mitigation, which can be performed with only a single antenna pair and traditional continuouswave or frequencymodulated continuous wave signals.In this work, two classes of distortion mitigation methods are described: modelbased and response decomposition. 
Modelbased methods use a learned or analytic model with traditional nonlinear optimization techniques to arrive at angular velocity estimates based on the complete interferometric signal response. Response decomposition methods, on the other hand, aim to decompose the individual antenna responses into separate responses pertaining to each target, then associate like targets between antenna responses. By performing the correlation in this manner, the crossterms, which typically corrupt the interferometric response, are mitigated. It was found that due to the quadratic scaling of distortion terms, modelbased methods become exceedingly difficult as the number of targets grows large. Thus, the method of response decomposition is selected and results on measured radar signals are presented. For this, a custom singleboard millimeterwave interferometric radar was developed, and angular velocity measurements were performed in an enclosed environment consisting of two robotic targets. A set of experiments was designed to highlight easy, medium, and difficult cases for the response decomposition algorithm, and results are presented herein.
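The origin of the cross-terms can be shown numerically: correlating two antenna responses that each contain two target phasors produces the desired like-target terms plus mixing products between unlike targets. The per-target frequencies and phase offsets below are assumed values for illustration.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
w = 2 * np.pi * np.array([40.0, 90.0])   # assumed per-target Doppler frequencies (Hz)
dphi = np.array([0.3, -0.5])             # assumed interferometric phase offsets

s1 = sum(np.exp(1j * wk * t) for wk in w)
s2 = sum(np.exp(1j * (wk * t + p)) for wk, p in zip(w, dphi))

# Correlating the full responses: like-target products (j == k) land at DC
# carrying the desired interferometric phases, but unlike-target pairs
# (j != k) produce cross-terms at the difference frequencies +/-50 Hz.
r = s1 * np.conj(s2)
spec = np.abs(np.fft.fft(r)) / len(t)
freqs = np.fft.fftfreq(len(t), 1 / fs)
```

Response decomposition amounts to correlating each target's component separately before summing, which keeps only the DC like-target terms and removes the ±50 Hz cross-terms seen in `spec`.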
 Title
 Novel Depth Representations for Depth Completion with Application in 3D Object Detection
 Creator
 Imran, Saif Muhammad
 Date
 2022
 Collection
 Electronic Theses & Dissertations
 Description

Depth completion refers to interpolating a dense, regular depth grid from sparse and irregularly sampled depth values, often guided by high-resolution color imagery. The primary goal of depth completion is to estimate depth; in practice, methods are trained by minimizing an error between predicted dense depth and ground-truth depth, and are evaluated by how well they minimize this error. Here we identify a second goal: to avoid smearing depth across depth discontinuities. This second goal is important because it can improve downstream applications of depth completion such as object detection and pose estimation. However, we also show that the goal of minimizing error can conflict with the goal of eliminating depth smearing. In this thesis, we propose two novel representations of depth that can encode depth discontinuities across object surfaces by allowing multiple depth estimates in the spatial domain. In order to learn these new representations, we propose carefully designed loss functions and show their effectiveness in deep neural network learning. We show how our representations can avoid inter-object depth mixing and also beat state-of-the-art metrics for depth completion. The quality of ground-truth depth in real-world depth completion problems is another key challenge for learning and accurate evaluation of methods. Ground-truth depth created by semi-automatic methods suffers from sparse sampling and errors at object boundaries. We show that the combination of these errors and the commonly used evaluation measure has promoted solutions that mix depths across boundaries in current methods. The thesis proposes alternative depth completion performance measures that reduce the preference for mixed depths and promote sharp boundaries. The thesis also investigates whether additional points from depth completion methods can help in a challenging, high-level perception problem: 3D object detection.
It shows the effect of different depth noise, originating from the depth estimates, on detection performance, and proposes effective ways to reduce noise in the estimates and to overcome architecture limitations. The method is demonstrated on both real-world and synthetic datasets.
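The smearing-versus-error tension can be seen in a toy 1D example. This is an illustrative stand-in, not the thesis's representation or loss: linear interpolation plays the role of an error-minimizing completion, and nearest-neighbor assignment plays the role of a discontinuity-preserving estimate. All values are invented.

```python
import numpy as np

# Two surfaces: foreground at 2 m (left), background at 10 m (right),
# with sparse depth samples on either side of the discontinuity.
xs = np.array([2.0, 4.0, 6.0, 8.0])    # sample positions
ds = np.array([2.0, 2.0, 10.0, 10.0])  # sampled depths (m)

query = np.arange(0.0, 10.0, 0.5)      # dense grid to complete

# Linear interpolation invents intermediate depths (4, 6, 8 m) between
# x=4 and x=6 -- points floating in free space between the two surfaces.
smeared = np.interp(query, xs, ds)

# Nearest-neighbor assignment snaps every pixel to one surface or the
# other, keeping the boundary sharp (at the cost of boundary placement).
nearest = ds[np.argmin(np.abs(query[:, None] - xs[None, :]), axis=1)]

mixed = int(np.sum((smeared != 2.0) & (smeared != 10.0)))   # smeared pixels
sharp = int(np.sum((nearest != 2.0) & (nearest != 10.0)))   # none here
```

A representation that carries multiple depth hypotheses per pixel, as the abstract describes, keeps both surfaces available at the boundary instead of committing to a single blended value.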
 Title
 LIDAR AND CAMERA CALIBRATION USING A MOUNTED SPHERE
 Creator
 Li, Jiajia
 Date
 2020
 Collection
 Electronic Theses & Dissertations
 Description

Extrinsic calibration between lidar and camera sensors is needed for multimodal sensor data fusion. However, obtaining precise extrinsic calibration can be tedious, computationally expensive, or involve elaborate apparatus. This thesis proposes a simple, fast, and robust method for performing extrinsic calibration between a camera and a lidar. The only required calibration target is a hand-held colored sphere mounted on a whiteboard. Convolutional neural networks are developed to automatically localize the sphere relative to the camera and the lidar. Then, using the localization covariance models, the relative pose between the camera and the lidar is derived. To evaluate the accuracy of our method, we record image and lidar data of a sphere at a set of known grid positions using two rails mounted on a wall. Accurate calibration results are demonstrated by projecting the grid centers onto the camera image plane and measuring the error between these points and the hand-labeled sphere centers.
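The evaluation step described above, projecting known 3D grid centers into the image and measuring pixel error against labeled centers, can be sketched with a standard pinhole model. The intrinsic matrix, extrinsics, grid positions, and "labeled" centers below are all invented for illustration; they are not the thesis's calibration values.

```python
import numpy as np

# Assumed camera intrinsics and camera-from-lidar extrinsics (made up).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # rotation (identity for simplicity)
t = np.array([0.05, 0.0, 0.0])   # 5 cm translation along x

def project(K, R, t, X):
    """Project 3D points X (N x 3, lidar frame) to pixel coordinates (N x 2)."""
    Xc = X @ R.T + t             # transform into the camera frame
    uv = Xc @ K.T                # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

# Known grid positions of the sphere center, 2 m in front of the camera.
grid = np.array([[0.0, 0.0, 2.0],
                 [0.2, 0.0, 2.0],
                 [0.0, 0.2, 2.0]])
proj = project(K, R, t, grid)

# Hypothetical hand-labeled centers, offset by sub-pixel amounts.
labeled = proj + np.array([[0.5, -0.3], [0.2, 0.4], [-0.1, 0.1]])
err = np.linalg.norm(proj - labeled, axis=1)   # per-point pixel error
```

With a good extrinsic estimate, the per-point error stays at the sub-pixel level set by the labeling noise; a miscalibrated (R, t) shows up directly as a systematic shift in `err`.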
 Title
 Efficient and secure system design in wireless communications
 Creator
 Song, Tianlong
 Date
 2016
 Collection
 Electronic Theses & Dissertations
 Description

Efficient and secure information transmission lies at the core of wireless system design and networking. Compared with its wired counterpart, in wireless communications the total available spectrum has to be shared by different services. Moreover, wireless transmission is more vulnerable to unauthorized detection, eavesdropping, and hostile jamming due to the lack of a protective physical boundary. Today, the two most representative highly efficient communication systems are CDMA (used in 3G) and OFDM (used in 4G), with OFDM regarded as the more efficient of the two. This dissertation focuses on two topics: (1) exploring more spectrally efficient system designs based on the 4G OFDM scheme, and (2) investigating robust wireless system design and conducting capacity analysis under different jamming scenarios. The main results are outlined as follows. First, we develop two spectrally efficient OFDM-based multi-carrier transmission schemes: one with message-driven idle subcarriers (MC-MDIS), and the other with message-driven strengthened subcarriers (MC-MDSS). The basic idea in MC-MDIS is to carry part of the information, named carrier bits, through idle-subcarrier selection while transmitting the ordinary bits regularly on all the other subcarriers. When the number of subcarriers is much larger than the adopted constellation size, higher spectral and power efficiency can be achieved compared with OFDM. In MC-MDSS, the idle subcarriers are replaced by strengthened ones, which, unlike idle ones, can carry both carrier bits and ordinary bits. Therefore, MC-MDSS achieves even higher spectral efficiency than MC-MDIS. Second, we consider jamming-resistant OFDM system design under full-band disguised jamming, where the jamming symbols are taken from the same constellation as the information symbols on each subcarrier.
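The idle-subcarrier idea can be sketched as a small index-modulation example. This is a minimal illustration of the concept only, assuming a noise-free channel, one idle subcarrier per block, and QPSK ordinary symbols; the actual MC-MDIS frame structure and detection are as defined in the dissertation, not here.

```python
import numpy as np

N = 8   # subcarriers per block -> log2(8) = 3 carrier bits per block
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def modulate(carrier_bits, ordinary_syms):
    """carrier_bits (3 bits) choose the idle index; N-1 ordinary symbols fill the rest."""
    idle = int("".join(map(str, carrier_bits)), 2)
    frame = np.zeros(N, dtype=complex)
    frame[[i for i in range(N) if i != idle]] = ordinary_syms
    return frame

def demodulate(frame):
    """Recover carrier bits from the minimum-energy subcarrier, then the ordinary symbols."""
    idle = int(np.argmin(np.abs(frame)))
    bits = [int(b) for b in format(idle, "03b")]
    return bits, frame[np.arange(N) != idle]

rng = np.random.default_rng(0)
syms = rng.choice(qpsk, N - 1)
frame = modulate([1, 0, 1], syms)
bits, recovered = demodulate(frame)   # bits and symbols both come back intact
```

The extra log2(N) bits per block cost no transmit power, which is the source of the efficiency gain when N is much larger than the constellation size.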
It is shown that, due to the symmetry between the authorized signal and the jamming, the BER of the traditional OFDM system is lower-bounded by a modulation-specific constant. We develop an optimal precoding scheme, which minimizes the BER of OFDM systems under full-band disguised jamming. It is shown that the most efficient way to combat full-band disguised jamming is to concentrate the total available power and distribute it uniformly over a particular number of subcarriers instead of the entire spectrum. The precoding scheme is further randomized to reinforce the system's jamming resistance. Third, we consider jamming mitigation for CDMA systems under disguised jamming, where the jammer generates a fake signal using the same spreading code, constellation, and pulse-shaping filter as the authorized signal. Again, due to the symmetry between the authorized signal and the jamming, the receiver cannot distinguish the authorized signal from the jamming, leading to complete communication failure. In this research, instead of using conventional scrambling codes, we apply the Advanced Encryption Standard (AES) to generate security-enhanced scrambling codes. Theoretical analysis shows that the capacity of conventional CDMA systems without secure scrambling under disguised jamming is actually zero, while the capacity can be significantly increased by secure scrambling. Finally, we consider a game between a power-limited authorized user and a power-limited jammer, who operate independently over the same spectrum consisting of multiple bands. The strategic decision-making is modeled as a two-party zero-sum game, where the payoff function is the capacity that can be achieved by the authorized user in the presence of the jammer. We first investigate the game under AWGN channels.
It is found that, whether the authorized user aims to maximize its capacity or the jammer aims to minimize the capacity of the authorized user, the best strategy for each is to distribute its power uniformly over all of the available spectrum. Then, we consider fading channels. We characterize the dynamic relationship between the optimal signal power allocation and the optimal jamming power allocation, and propose an efficient two-step water-pouring algorithm to calculate them.
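The AWGN result, that uniform power allocation is the best response for both players, can be checked numerically with toy values (these numbers are illustrative, not from the dissertation). Per-band capacity is log2(1 + p_k / (N0 + j_k)), summed over K bands.

```python
import numpy as np

def capacity(p, j, N0=1.0):
    """Capacity (bits) over K bands: signal powers p, jamming powers j, noise N0."""
    return float(np.sum(np.log2(1.0 + p / (N0 + j))))

K, P, J = 4, 4.0, 4.0   # bands, total signal power, total jamming power
uniform = np.full(K, P / K)
concentrated = np.array([P, 0.0, 0.0, 0.0])
j_uniform = np.full(K, J / K)
j_concentrated = np.array([J, 0.0, 0.0, 0.0])

# User side: against a uniform jammer, spreading power beats concentrating it.
c_spread = capacity(uniform, j_uniform)       # 4 * log2(1.5) ~ 2.34 bits
c_focus = capacity(concentrated, j_uniform)   # log2(3)       ~ 1.58 bits

# Jammer side: concentrating the jamming leaves the user MORE capacity
# than jamming uniformly, so uniform jamming is the better attack.
c_jam_uniform = capacity(uniform, j_uniform)
c_jam_focus = capacity(uniform, j_concentrated)
```

The concavity of log2(1 + x) drives both directions: the user gains by averaging its power across bands, and the jammer minimizes the user's gain by equalizing the effective noise floor everywhere.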
 Title
 High-dimensional learning from random projections of data through regularization and diversification
 Creator
 Aghagolzadeh, Mohammad
 Date
 2015
 Collection
 Electronic Theses & Dissertations
 Description

Random signal measurement, in the form of random projections of signal vectors, extends the traditional pointwise and periodic schemes for signal sampling. In particular, the well-known problem of sensing sparse signals from linear measurements, also known as Compressed Sensing (CS), has promoted the utility of random projections. Meanwhile, many signal processing and learning problems that involve parametric estimation do not contain sparsity constraints in their original forms. With the increasing popularity of random measurements, it is crucial to study the generic estimation performance under the random measurement model. In this thesis, we consider two specific learning problems (named below) and present two generic approaches for improving the estimation accuracy: 1) adding relevant constraints to the parameter vectors, and 2) diversifying the random measurements to achieve fast-decaying tail bounds for the empirical risk function. The first problem we consider is Dictionary Learning (DL). Dictionaries are extensions of vector bases that are specifically tailored for sparse signal representation. DL has become increasingly popular for sparse modeling of natural images as well as sound and biological signals, to name a few. Empirical studies have shown that typical DL algorithms for imaging applications are relatively robust with respect to missing pixels in the training data. However, DL from random projections of data corresponds to an ill-posed problem and is not well studied. Existing efforts are limited to learning structured dictionaries, or dictionaries for structured sparse representations, to make the problem tractable. The main motivation for considering this problem is to create an adaptive framework for CS of signals that are not sparse in the signal domain. In fact, this problem has been referred to as 'blind CS', since the optimal basis is subject to estimation during CS recovery.
Our initial approach, similar to some of the existing efforts, involves adding structural constraints on the dictionary to incorporate sparse and autoregressive models. More importantly, our results and analysis reveal that DL from random projections of data, in its unconstrained form, can still be accurate, provided that the measurements satisfy the diversity constraints defined later. The second problem we consider is high-dimensional signal classification. Prior efforts have shown that projecting high-dimensional and redundant signal vectors onto random low-dimensional subspaces presents an efficient alternative to traditional feature extraction tools such as principal component analysis. Hence, aside from the CS application, random measurements present an efficient sampling method for learning classifiers, eliminating the need for recording and processing high-dimensional signals, most of which would be discarded during feature extraction. We work with Support Vector Machine (SVM) classifiers that are learned in the high-dimensional ambient signal space using random projections of the training data. Our results indicate that classifier accuracy can be significantly improved by diversification of the random measurements.
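The reason random projections are a viable front end for classification can be sketched with the standard Johnson-Lindenstrauss argument: a scaled Gaussian projection approximately preserves pairwise distances, so margins between classes largely survive the dimension reduction. This is a generic illustration with made-up dimensions, not the thesis's diversified measurement design.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 1000, 200, 50              # ambient dim, projected dim, num points
X = rng.standard_normal((n, d))      # toy high-dimensional data
A = rng.standard_normal((m, d)) / np.sqrt(m)   # scaled Gaussian projection
Y = X @ A.T                          # random low-dimensional measurements

# Distance between one pair of points, before and after projection:
# the ratio concentrates around 1, with distortion shrinking as m grows.
ratio = np.linalg.norm(Y[0] - Y[1]) / np.linalg.norm(X[0] - X[1])
```

Training the SVM then proceeds on the m-dimensional measurements Y rather than the full d-dimensional signals, which is the sampling-efficiency argument the abstract makes; the thesis's contribution is in choosing the measurements (diversification) so the empirical-risk tail bounds decay faster than for a plain i.i.d. Gaussian A.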