Search results
(1 - 20 of 36)
- Title
- Kernel methods for biosensing applications
- Creator
- Khan, Hassan Aqeel
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
This thesis examines the design of noise-robust information retrieval techniques based on kernel methods. Algorithms are presented for two biosensing applications: (1) high-throughput protein arrays and (2) non-invasive respiratory signal estimation. Our primary objective in protein array design is to maximize throughput by enabling detection of an extremely large number of protein targets while using a minimal number of receptor spots. This is accomplished by viewing the protein array as a communication channel and evaluating its information transmission capacity as a function of its receptor probes. In this framework, the channel capacity can be used as a tool to optimize probe design; the optimal probes are the ones that maximize capacity. The information capacity is first evaluated for a small-scale protein array with only a few protein targets. We believe this is the first effort to evaluate the capacity of a protein array channel. For this purpose, models of the proteomic channel's noise characteristics and receptor non-idealities, based on experimental prototypes, are constructed. Kernel methods are employed to extend the capacity evaluation to larger protein arrays that can potentially have thousands of distinct protein targets. A specially designed kernel, which we call the Proteomic Kernel, is also proposed. This kernel incorporates knowledge about the biophysics of target and receptor interactions into the cost function employed for evaluation of channel capacity.
For respiratory estimation, this thesis investigates estimation of breathing rate and lung volume using multiple non-invasive sensors under motion artifact and high-noise conditions. A spirometer signal is used as the gold standard for evaluation of errors. A novel algorithm called segregated envelope and carrier (SEC) estimation is proposed. This algorithm approximates the spirometer signal by an amplitude-modulated signal and segregates the estimation of the frequency and amplitude information. Results demonstrate that this approach enables effective estimation of both breathing rate and lung volume. An adaptive algorithm based on a combination of Gini kernel machines and wavelet filtering is also proposed. This algorithm, titled the wavelet-adaptive Gini (or WAGini) algorithm, employs a novel wavelet-transform-based feature extraction frontend to classify the subject's underlying respiratory state. This information is then employed to select the parameters of the adaptive kernel machine based on the subject's respiratory state. Results demonstrate significant improvement in breathing rate estimation when compared to traditional respiratory estimation techniques.
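The capacity-as-design-tool idea can be illustrated with the standard Blahut-Arimoto iteration on a discrete channel model. This is a generic sketch, not the kernel-based evaluation developed in the thesis; the transition matrix below is purely hypothetical.

```python
import numpy as np

def blahut_arimoto(P, tol=1e-9, max_iter=1000):
    """Capacity (bits/use) of a discrete memoryless channel.

    P[x, y] = probability of observing readout y given target x.
    """
    n_x = P.shape[0]
    r = np.full(n_x, 1.0 / n_x)                      # input distribution, start uniform
    for _ in range(max_iter):
        q = r[:, None] * P                           # joint p(x, y)
        q /= q.sum(axis=0, keepdims=True) + 1e-300   # posterior q(x | y)
        # update: r(x) proportional to exp( sum_y P(y|x) log q(x|y) )
        log_r = np.sum(P * np.log(q + 1e-300), axis=1)
        r_new = np.exp(log_r - log_r.max())
        r_new /= r_new.sum()
        if np.max(np.abs(r_new - r)) < tol:
            r = r_new
            break
        r = r_new
    # mutual information under the optimizing input distribution
    q = r[:, None] * P
    py = q.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = q * np.log2(P / py[None, :])
    return np.nansum(terms), r

# Hypothetical 3-target, 4-readout "proteomic channel" with cross-reactive probes.
P = np.array([[0.80, 0.10, 0.05, 0.05],
              [0.15, 0.70, 0.10, 0.05],
              [0.05, 0.10, 0.75, 0.10]])
C, p_opt = blahut_arimoto(P)
print(f"capacity ~ {C:.3f} bits per use")
```

Comparing such capacities across candidate probe sets is the sense in which capacity can rank probe designs; the thesis's kernel machinery is what makes this tractable for thousands of targets.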
- Title
- Assessment of functional connectivity in the human brain : multivariate and graph signal processing methods
- Creator
- Villafañe-Delgado, Marisel
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
"Advances in neurophysiological recording have provided a noninvasive way of inferring cognitive processes. Recent studies have shown that cognition relies on the functional integration or connectivity of segregated specialized regions in the brain. Functional connectivity quantifies the statistical relationships among different regions in the brain. However, current functional connectivity measures have certain limitations in the quantification of global integration and characterization of...
Show more"Advances in neurophysiological recording have provided a noninvasive way of inferring cognitive processes. Recent studies have shown that cognition relies on the functional integration or connectivity of segregated specialized regions in the brain. Functional connectivity quantifies the statistical relationships among different regions in the brain. However, current functional connectivity measures have certain limitations in the quantification of global integration and characterization of network structure. These limitations include the bivariate nature of most functional connectivity measures, the computational complexity of multivariate measures, and graph theoretic measures that are not robust to network size and degree distribution. Therefore, there is a need of computationally efficient and novel measures that can quantify the functional integration across brain regions and characterize the structure of these networks. This thesis makes contributions in three different areas for the assessment of multivariate functional connectivity. First, we present a novel multivariate phase synchrony measure for quantifying the common functional connectivity within different brain regions. This measure overcomes the drawbacks of bivariate functional connectivity measures and provides insights into the mechanisms of cognitive control not accountable by bivariate measures. Following the assessment of functional connectivity from a graph theoretic perspective, we propose a graph to signal transformation for both binary and weighted networks. This provides the means for characterizing the network structure and quantifying information in the graph by overcoming some drawbacks of traditional graph based measures. Finally, we introduce a new approach to studying dynamic functional connectivity networks through signals defined over networks. In this area, we define a dynamic graph Fourier transform in which a common subspace is found from the networks over time based on the tensor decomposition of the graph Laplacian over time."--Pages ii-iii.
Show less
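The graph-to-signal viewpoint rests on the graph Fourier transform. Below is a minimal sketch of the standard (static) GFT via eigendecomposition of the combinatorial graph Laplacian, assuming a small symmetric adjacency matrix; the dynamic, tensor-based extension described in the abstract is not reproduced here.

```python
import numpy as np

def graph_fourier_transform(A, x):
    """Project a signal x defined on graph nodes onto the Laplacian eigenbasis.

    A : symmetric adjacency (e.g., functional connectivity) matrix, shape (n, n)
    x : graph signal, shape (n,)
    """
    L = np.diag(A.sum(axis=1)) - A       # combinatorial graph Laplacian
    evals, evecs = np.linalg.eigh(L)     # eigenvalues act as graph frequencies
    x_hat = evecs.T @ x                  # GFT coefficients
    return evals, x_hat

# Toy 4-node connectivity graph with hypothetical binary weights.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
freqs, coeffs = graph_fourier_transform(A, np.array([1.0, 0.8, 0.9, 0.1]))
print(freqs, coeffs)
```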
- Title
- Dynamic network analysis with applications to functional neural connectivity
- Creator
- Golibagh Mahyari, Arash
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
"Contemporary neuroimaging techniques provide neural activity recordings with increasing spatial and temporal resolution yielding rich multichannel datasets that can be exploited for detailed description of anatomical and functional connectivity patterns in the brain. Studies indicate that the changes in functional connectivity patterns across spatial and temporal scales play an important role in a wide range of cognitive and executive processes such as memory and attention as well as in the...
Show more"Contemporary neuroimaging techniques provide neural activity recordings with increasing spatial and temporal resolution yielding rich multichannel datasets that can be exploited for detailed description of anatomical and functional connectivity patterns in the brain. Studies indicate that the changes in functional connectivity patterns across spatial and temporal scales play an important role in a wide range of cognitive and executive processes such as memory and attention as well as in the understanding the causes of many neural diseases and psychopathologies such as epilepsy, Alzheimers, Parkinsons and schizophrenia. Early work in the area was limited to the analysis of static brain networks obtained through averaging long-term functional connectivity, thus neglecting possible time-varying connections. There is growing evidence that functional networks dynamically reorganize and coordinate on millisecond scale for the execution of mental processes. Functional networks consist of distinct network states, where each state is defined as a period of time during which the network topology is quasi-stationary. For this reason, there has been an interest in characterizing the dynamics of functional networks using high temporal resolution electroencephalogram recordings. In this thesis, dynamic functional connectivity networks are represented by multiway arrays, tensors, which are able to capture the complete topological structure of the networks. This thesis proposes new methods for both tracking the changes in these dynamic networks and characterizing or summarizing the network states. In order to achieve this goal, a Tucker decomposition based approach is introduced for detecting the change points for task-based electroencephalogram (EEG) functional connectivity networks through calculating the subspace distance between consecutive time steps. This is followed by a tensor-matrix projection based approach for summarizing multiple networks within a time interval. Tensor based summarization approaches do not necessarily result in sparse network and succinct states. Moreover, subspace based summarizations tend to capture the background brain activity more than the low energy sparse activations. For this reason, we propose utilizing the sparse common component and innovations (SCCI) model which simultaneously finds the sparse common component of multiple signals. However, as the number of signals in the model increases, this becomes computationally prohibitive. In this thesis, a hierarchical algorithm to recover the common component in the SCCI model is proposed for large number of signals. The hierarchical recovery of SCCI model solves the time and memory limitations at the expense of a slight decrease in the accuracy. This hierarchical model is used to separate the common and innovation components of functional connectivity networks across time. The innovation components are tracked over time to detect the change points, and the common component of the detected network states are used to obtain the network summarization. SCCI recovery algorithm finds the sparse representation of the common and innovation components of signals with respect to pre-determined dictionaries. However, input signals are not always well-represented by pre-determined dictionaries. In this thesis, a structured dictionary learning algorithm for SCCI model is developed. 
The proposed method is applied to EEG data collected during a study of error monitoring where two different types of brain responses are elicited in response to the stimulus. The learned dictionaries can discriminate between the response types and extract the error-related potentials (ERP) corresponding to the two responses."--Pages ii-iii.
Show less
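One way to picture the change-point step is to track the dominant subspace of each time step's connectivity matrix and flag times where consecutive subspaces drift apart. The sketch below uses a plain SVD per time step and a projection-based subspace distance; the thesis's actual method operates on a Tucker decomposition of the full connectivity tensor, which is not reproduced here, and the data are synthetic.

```python
import numpy as np

def subspace_distance(U1, U2):
    """Distance between column spaces via the projection (chordal) metric."""
    P1, P2 = U1 @ U1.T, U2 @ U2.T
    return np.linalg.norm(P1 - P2, ord="fro") / np.sqrt(2)

def detect_change_points(C, rank=3, threshold=0.5):
    """C: dynamic connectivity tensor, shape (time, nodes, nodes)."""
    change_points, prev_U = [], None
    for t, G in enumerate(C):
        U = np.linalg.svd(G)[0][:, :rank]          # dominant subspace at time t
        if prev_U is not None and subspace_distance(prev_U, U) > threshold:
            change_points.append(t)
        prev_U = U
    return change_points

# Two synthetic network "states", each held for 10 time steps plus small noise.
rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((16, 3)), rng.standard_normal((16, 3))
states = [A1 @ A1.T] * 10 + [A2 @ A2.T] * 10
C = np.array([S + 0.01 * rng.standard_normal((16, 16)) for S in states])
print(detect_change_points(C))   # expect a single change point near t = 10
```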
- Title
- Harnessing low-pass filter defects for improving wireless link performance : measurements and applications
- Creator
- Renani, Alireza Ameli
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
-
"The design trade-offs of transceiver hardware are crucial to the performance of wireless systems. The effect of such trade-offs on individual analog and digital components are vigorously studied, but their systemic impacts beyond component-level remain largely unexplored. In this dissertation, we present an in-depth study to characterize the surprisingly notable systemic impacts of low-pass filter design, which is a small yet indispensable component used for shaping spectrum and rejecting...
Show more"The design trade-offs of transceiver hardware are crucial to the performance of wireless systems. The effect of such trade-offs on individual analog and digital components are vigorously studied, but their systemic impacts beyond component-level remain largely unexplored. In this dissertation, we present an in-depth study to characterize the surprisingly notable systemic impacts of low-pass filter design, which is a small yet indispensable component used for shaping spectrum and rejecting interference. Using a bottom-up approach, we examine how signal-level distortions caused by the trade-offs of low-pass filter design propagate to the upper-layers of wireless communication, reshaping bit error patterns and degrading link performance of today's 802.11 systems. Moreover, we propose a novel unequal error protection algorithm that harnesses low-pass filter defects for improving wireless LAN throughput, particularly to be used in forward error correction, channel coding, and applications such as video streaming. Lastly, we conduct experiments to evaluate the unequal error protection algorithm in video streaming, and we present substantial enhancements of video quality in mobile environments."--Page ii.
Show less
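The general idea of unequal error protection, spending more redundancy on the bit positions that the hardware makes less reliable, can be pictured with a toy allocation rule. This is only an illustrative sketch; the per-position error rates and the rate classes below are hypothetical and are not the algorithm or measurements from the dissertation.

```python
# Toy unequal error protection: group a packet's bit positions by measured
# reliability and assign a stronger (lower-rate) code to the weaker groups.
positions_ber = [0.001, 0.001, 0.02, 0.05]   # hypothetical BER per position group

def allocate_code_rates(bers, rates=(0.9, 0.75, 0.5)):
    """Map each position group's error rate to a code rate (lower = more FEC)."""
    out = []
    for ber in bers:
        if ber < 0.005:
            out.append(rates[0])      # light protection for reliable positions
        elif ber < 0.03:
            out.append(rates[1])
        else:
            out.append(rates[2])      # heavy protection for error-prone positions
    return out

print(allocate_code_rates(positions_ber))    # [0.9, 0.9, 0.75, 0.5]
```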
- Title
- Smartphone-based sensing systems for data-intensive applications
- Creator
- Moazzami, Mohammad-Mahdi
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
"Supported by advanced sensing capabilities, increasing computational resources and the advances in Artificial Intelligence, smartphones have become our virtual companions in our daily life. An average modern smartphone is capable of handling a wide range of tasks including navigation, advanced image processing, speech processing, cross app data processing and etc. The key facet that is common in all of these applications is the data intensive computation. In this dissertation we have taken...
Show more"Supported by advanced sensing capabilities, increasing computational resources and the advances in Artificial Intelligence, smartphones have become our virtual companions in our daily life. An average modern smartphone is capable of handling a wide range of tasks including navigation, advanced image processing, speech processing, cross app data processing and etc. The key facet that is common in all of these applications is the data intensive computation. In this dissertation we have taken steps towards the realization of the vision that makes the smartphone truly a platform for data intensive computations by proposing frameworks, applications and algorithmic solutions. We followed a data-driven approach to the system design. To this end, several challenges must be addressed before smartphones can be used as a system platform for data-intensive applications. The major challenge addressed in this dissertation include high power consumption, high computation cost in advance machine learning algorithms, lack of real-time functionalities, lack of embedded programming support, heterogeneity in the apps, communication interfaces and lack of customized data processing libraries. The contribution of this dissertation can be summarized as follows. We present the design, implementation and evaluation of the ORBIT framework, which represents the first system that combines the design requirements of a machine learning system and sensing system together at the same time. We ported for the first time off-the-shelf machine learning algorithms for real-time sensor data processing to smartphone devices. We highlighted how machine learning on smartphones comes with severe costs that need to be mitigated in order to make smartphones capable of real-time data-intensive processing. From application perspective we present SPOT. SPOT aims to address some of the challenges discovered in mobile-based smart-home systems. These challenges prevent us from achieving the promises of smart-homes due to heterogeneity in different aspects of smart devices and the underlining systems. We face the following major heterogeneities in building smart-homes:: (i) Diverse appliance control apps (ii) Communication interface, (iii) Programming abstraction. SPOT makes the heterogeneous characteristics of smart appliances transparent, and by that it minimizes the burden of home automation application developers and the efforts of users who would otherwise have to deal with appliance-specific apps and control interfaces. From algorithmic perspective we introduce two systems in the smartphone-based deep learning area: Deep-Crowd-Label and Deep-Partition. Deep neural models are both computationally and memory intensive, making them difficult to deploy on mobile applications with limited hardware resources. On the other hand, they are the most advanced machine learning algorithms suitable for real-time sensing applications used in the wild. Deep-Partition is an optimization-based partitioning meta-algorithm featuring a tiered architecture for smartphone and the back-end cloud. Deep-Partition provides a profile-based model partitioning allowing it to intelligently execute the Deep Learning algorithms among the tiers to minimize the smartphone power consumption by minimizing the deep models feed-forward latency. Deep-Crowd-Label is prototyped for semantically labeling user's location. It is a crowd-assisted algorithm that uses crowd-sourcing in both training and inference time. 
It builds deep convolutional neural models using crowd-sensed images to detect the context (label) of indoor locations. It features domain adaptation and model extension via transfer learning to efficiently build deep models for image labeling. The work presented in this dissertation covers three major facets of data-driven and compute-intensive smartphone-based systems: platforms, applications and algorithms; and helps to spurs new areas of research and opens up new directions in mobile computing research."--Pages ii-iii.
Show less
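The partitioning idea behind Deep-Partition can be illustrated with a brute-force search over layer cut points that trades on-device compute against upload latency. The cost model and all numbers below are hypothetical; this is not the profile-based optimizer described in the abstract.

```python
# Choose the layer after which inference is handed off to the cloud, minimizing
# total latency = on-device compute + upload of the intermediate activation
# + cloud compute for the remaining layers.
device_ms = [12, 30, 55, 40, 8]      # hypothetical per-layer latency on the phone
cloud_ms = [1, 3, 6, 4, 1]           # hypothetical per-layer latency in the cloud
upload_ms = [25, 18, 9, 4, 2, 1]     # upload cost of the activation after layer k
                                     # (index 0 = send raw input, last = final output)

def best_partition(device_ms, cloud_ms, upload_ms):
    n = len(device_ms)
    costs = [sum(device_ms[:k]) + upload_ms[k] + sum(cloud_ms[k:]) for k in range(n + 1)]
    k_best = min(range(n + 1), key=lambda k: costs[k])
    return k_best, costs[k_best]

k, latency = best_partition(device_ms, cloud_ms, upload_ms)
print(f"run layers 0..{k - 1} on the phone, rest in the cloud: {latency} ms")
```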
- Title
- Higher-order data reduction through clustering, subspace analysis and compression for applications in functional connectivity brain networks
- Creator
- Ozdemir, Alp
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
"With the recent advances in information technology, collection and storage of higher-order datasets such as multidimensional data across multiple modalities or variables have become much easier and cheaper than ever before. Tensors, also known as multiway arrays, provide natural representations for higher-order datasets and provide a way to analyze them by preserving the multilinear relations in these large datasets. These higher-order datasets usually contain large amount of redundant...
Show more"With the recent advances in information technology, collection and storage of higher-order datasets such as multidimensional data across multiple modalities or variables have become much easier and cheaper than ever before. Tensors, also known as multiway arrays, provide natural representations for higher-order datasets and provide a way to analyze them by preserving the multilinear relations in these large datasets. These higher-order datasets usually contain large amount of redundant information and summarizing them in a succinct manner is essential for better inference. However, existing data reduction approaches are limited to vector-type data and cannot be applied directly to tensors without vectorizing. Developing more advanced approaches to analyze tensors effectively without corrupting their intrinsic structure is an important challenge facing Big Data applications. This thesis addresses the issue of data reduction for tensors with a particular focus on providing a better understanding of dynamic functional connectivity networks (dFCNs) of the brain. Functional connectivity describes the relationship between spatially separated neuronal groups and analysis of dFCNs plays a key role for interpreting complex brain dynamics in different cognitive and emotional processes. Recently, graph theoretic methods have been used to characterize the brain functionality where bivariate relationships between neuronal populations are represented as graphs or networks. In this thesis, the changes in these networks across time and subjects will be studied through tensor representations. In Chapter 2, we address a multi-graph clustering problem which can be thought as a tensor partitioning problem. We introduce a hierarchical consensus spectral clustering approach to identify the community structure underlying the functional connectivity brain networks across subjects. New information-theoretic criteria are introduced for selecting the optimal community structure. Effectiveness of the proposed algorithms are evaluated through a set of simulations comparing with the existing methods as well as on FCNs across subjects. In Chapter 3, we address the online tensor data reduction problem through a subspace tracking perspective. We introduce a robust low-rank+sparse structure learning algorithm for tensors to separate the low-rank community structure of connectivity networks from sparse outliers. The proposed framework is used to both identify change points, where the low-rank community structure changes significantly, and summarize this community structure within each time interval. Finally, in Chapter 4, we introduce a new multi-scale tensor decomposition technique to efficiently encode nonlinearities due to rotation or translation in tensor type data. In particular, we develop a multi-scale higher-order singular value decomposition (MS-HoSVD) approach where a given tensor is first permuted and then partitioned into several sub-tensors each of which can be represented as a low-rank tensor increasing the efficiency of the representation. We derive a theoretical error bound for the proposed approach as well as provide analysis of memory cost and computational complexity. Performance of the proposed approach is evaluated on both data reduction and classification of various higher-order datasets."--Pages ii-iii.
Show less
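A truncated higher-order SVD (HoSVD), the building block that MS-HoSVD applies to each sub-tensor, can be sketched as mode-wise SVDs of the unfoldings followed by projection onto the leading factors. This generic sketch uses numpy only and omits the permutation and partitioning stage that makes the thesis's method multi-scale.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move the chosen mode to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Return a core and factor matrices so that T ~ core x_1 U1 x_2 U2 x_3 U3 ..."""
    factors = []
    for mode, r in enumerate(ranks):
        U = np.linalg.svd(unfold(T, mode), full_matrices=False)[0][:, :r]
        factors.append(U)
    core = T.copy()
    for mode, U in enumerate(factors):
        # mode-n product with U.T, keeping the mode order intact
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

T = np.random.default_rng(1).random((10, 12, 8))
core, factors = truncated_hosvd(T, ranks=(4, 4, 4))
print(core.shape, [U.shape for U in factors])   # (4, 4, 4) core, three factor matrices
```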
- Title
- Adaptive independent component analysis : theoretical formulations and application to CDMA communication system with electronics implementation
- Creator
- Albataineh, Zaid
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
Blind Source Separation (BSS) is a vital unsupervised stochastic area that seeks to estimate the underlying source signals from their mixtures with minimal assumptions about the source signals and/or the mixing environment. BSS has been an active area of research and in recent years has been applied to numerous domains including biomedical engineering, image processing, wireless communications, speech enhancement, remote sensing, etc. Most recently, Independent Component Analysis (ICA) has become a vital analytical approach in BSS. In spite of active research in BSS, however, many foundational issues still remain with regard to convergence speed, performance quality, and robustness in realistic or adverse environments. Furthermore, some of the developed BSS methods are computationally expensive, sensitive to additive and background noise, and not suitable for a real-time or real-world implementation.
In this thesis, we first formulate new effective ICA-based measures and their corresponding robust adaptive algorithms for BSS in dynamic "convolutive mixture" environments. We demonstrate their superior performance to present competing algorithms. Then we tailor their application within wireless (CDMA) communication systems and acoustic separation systems. We finally explore a system realization of one of the developed algorithms among ASIC or FPGA platforms in terms of real-time speed, effectiveness, cost, and economics of scale.
We firstly investigate several measures which are more suitable for extracting different source types from different mixing environments in the learning system. ICA for instantaneous mixtures has been studied here as an introduction to the more realistic convolutive mixture environments. Convolutive mixtures have been investigated in the time/frequency domains, and we demonstrate that our approaches succeed in resolving the standing problem of scaling and permutation ambiguities in present research. We propose a new class of divergence measures for Independent Component Analysis (ICA) for estimating sources from mixtures. The Convex Cauchy-Schwarz Divergence (CCS-DIV) is formed by integrating convex functions into the Cauchy-Schwarz inequality. The new measure is symmetric and convex with respect to the joint probability, where the degree of convexity can be tuned by a (convexity) parameter. A non-parametric (ICA) algorithm generated from the proposed divergence is developed exploiting convexity parameters and employing the Parzen window-based distribution estimates. The new contrast function results in effective parametric and non-parametric ICA-based computational algorithms. Moreover, two pairwise iterative schemes are proposed to tackle the high dimensionality of sources. These two pairwise non-parametric ICA algorithms are based on the new high-performance Convex Cauchy-Schwarz Divergence (CCS-DIV). These two schemes enable fast and efficient de-mixing of sources in real-world applications where the dimensionality of the sources is higher than two.
Secondly, the more challenging problem in communication signal processing is to estimate the source signals and their channels in the presence of other co-channel signals and noise without the use of a training set. Blind techniques are promising to integrate and optimize wireless communication designs, i.e., equalizers/filters/combiners, through their potential in suppressing inter-symbol interference (ISI), adjacent channel interference, co-channel interference, and multi-access interference (MAI). Therefore, a new blind detection algorithm, based on fourth order cumulant matrices, is presented and applied to the multi-user symbol estimation problem in Direct Sequence Code Division Multiple Access (DS-CDMA) systems. Blind detection is used to estimate multiple symbol sequences in the downlink of a DS-CDMA communication system using only the received wireless data and without any knowledge of the user spreading codes. The proposed algorithm takes advantage of higher cumulant matrix properties to reduce the computational load and enhance performance. In addition, we address the problem of blind multiuser equalization in the wideband CDMA system, in the noisy multipath propagation environment. Herein, we propose three new blind receiver schemes, which are based on state space structures. These so-called blind state-space receivers (BSSR) do not require knowledge of the propagation parameters or spreading code sequences of the users but rely on the statistical independence assumption among the source signals. We then develop and derive three update laws in order to enhance the performance of the blind detector. Also, we upgrade three semi-blind adaptive detectors based on the incorporation of the RAKE receiver and the stochastic gradient algorithms which are used in several blind adaptive signal processing algorithms, namely FastICA, RobustICA, and principal component analysis (PCA). Through simulation evidence, we verify the significant bit error rate (BER) and computational speed improvements achieved by these algorithms in comparison to other leading algorithms.
Lastly, system realization of one of the developed algorithms has been explored among ASIC or FPGA platforms in terms of cost, effectiveness, and economics of scale. Based on our findings of current state-of-the-art electronics, programmable FPGA designs are deemed to be the most effective technology for ICA hardware implementation at this time.
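For intuition, the plain Cauchy-Schwarz divergence that CCS-DIV builds on can be estimated from samples with Parzen (Gaussian KDE) density estimates: D_CS(p, q) = -log[ (∫pq)^2 / (∫p^2 ∫q^2) ], which is zero only when p = q. The sketch below evaluates it on a grid; it does not include the convexity parameter or the pairwise ICA schemes developed in the thesis.

```python
import numpy as np
from scipy.stats import gaussian_kde

def cs_divergence(x, y, grid_points=2000):
    """Cauchy-Schwarz divergence between two 1-D samples via Parzen windows."""
    p, q = gaussian_kde(x), gaussian_kde(y)
    lo = min(x.min(), y.min()) - 1.0
    hi = max(x.max(), y.max()) + 1.0
    t = np.linspace(lo, hi, grid_points)
    pt, qt = p(t), q(t)
    cross = np.trapz(pt * qt, t)
    return -np.log(cross**2 / (np.trapz(pt**2, t) * np.trapz(qt**2, t)))

rng = np.random.default_rng(0)
print(cs_divergence(rng.normal(0, 1, 500), rng.normal(2, 1, 500)))  # clearly > 0
print(cs_divergence(rng.normal(0, 1, 500), rng.normal(0, 1, 500)))  # close to 0
```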
- Title
- Unconstrained 3D face reconstruction from photo collections
- Creator
- Roth, Joseph (Software engineer)
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
This thesis presents a novel approach for 3D face reconstruction from unconstrained photo collections. An unconstrained photo collection is a set of face images captured under an unknown and diverse variation of poses, expressions, and illuminations. The output of the proposed algorithm is a true 3D face surface model represented as a watertight triangulated surface with albedo data, colloquially referred to as texture information. Reconstructing a 3D understanding of a face based on 2D input is a long-standing computer vision problem. Traditional photometric stereo-based reconstruction techniques work on aligned 2D images and produce a 2.5D depth map reconstruction. We extend face reconstruction to work with a true 3D model, allowing us to enjoy the benefits of using images from all poses, up to and including profiles. To use a 3D model, we propose a novel normal field-based Laplace editing technique which allows us to deform a triangulated mesh to match the observed surface normals. Unlike prior work that requires large photo collections, we formulate an approach that adapts to photo collections with few images of potentially poor quality. We achieve this by incorporating prior knowledge about face shape, fitting a 3D Morphable Model to form a personalized template before using a novel analysis-by-synthesis photometric stereo formulation to complete the fine face details. A structural similarity-based quality measure allows evaluation in the absence of ground truth 3D scans. Superior large-scale experimental results are reported on Internet, synthetic, and personal photo collections.
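The photometric-stereo baseline that the thesis extends to full 3D can be sketched as the classical Lambertian least-squares recovery of per-pixel normals from images under known lighting. The analysis-by-synthesis formulation and the normal-field Laplace editing of the thesis are not reproduced here; this is the textbook building block only.

```python
import numpy as np

def lambertian_photometric_stereo(I, L):
    """Recover per-pixel surface normals and albedo under the Lambertian model.

    I : stack of aligned images, shape (n_images, height, width)
    L : lighting directions, shape (n_images, 3), one unit vector per image
    """
    n_img, h, w = I.shape
    obs = I.reshape(n_img, -1)                    # (n_images, pixels)
    # Lambertian model: obs = L @ G, where each column of G is albedo * normal.
    G, *_ = np.linalg.lstsq(L, obs, rcond=None)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / (albedo + 1e-12)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)
```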
- Title
- Stochastic modeling of routing protocols for cognitive radio networks
- Creator
- Soltani, Soroor
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
-
Cognitive radios are expected to revolutionize wireless networking because of their ability to sense, manage, and share the available mobile spectrum. Efficient utilization of the available spectrum could be significantly improved by incorporating different cognitive radio based networks. Challenges are involved in utilizing cognitive radios in a network, most of which arise from the dynamic nature of the available spectrum that is not present in traditional wireless networks. The set of available spectrum blocks (channels) changes randomly with the arrival and departure of the users licensed to a specific spectrum band. These users are known as primary users. If a band is used by a primary user, the cognitive radio alters its transmission power level or modulation scheme to change its transmission range and switches to another channel. In traditional wireless networks, a link is stable if it is less prone to interference. In cognitive radio networks, however, a link that is interference free might break due to the arrival of its primary user. Therefore, link stability forms a stochastic process with OFF and ON states: ON if the primary user is absent. Evidently, traditional network protocols fail in this environment. New sets of protocols are needed in each layer to cope with the stochastic dynamics of cognitive radio networks. In this dissertation we present a comprehensive stochastic framework and a decision theory based model for the problem of routing packets from a source to a destination in a cognitive radio network. We begin by introducing two probability distributions called ArgMax and ArgMin for probabilistic channel selection mechanisms, routing, and MAC protocols. The ArgMax probability distribution locates the most stable link from a set of available links. Conversely, ArgMin identifies the least stable link. ArgMax and ArgMin together provide valuable information on the diversity of the stability of available links in a spectrum band. Next, considering the stochastic arrival of primary users, we model the transition of packets from one hop to the next by a semi-Markov process and develop a Primary Spread Aware Routing Protocol (PSARP) that learns the dynamics of the environment and adapts its routing decisions accordingly. Further, we use a decision theory framework. A utility function is designed to capture the effect of spectrum measurement, fluctuation of bandwidth availability, and path quality. A node cognitively decides its best candidate among its neighbors by utilizing a decision tree. Each branch of the tree is quantified by the utility function and a posterior probability distribution, constructed using the ArgMax probability distribution, which predicts the suitability of available neighbors. In DTCR (Decision Tree Cognitive Routing), nodes learn their operational environment and adapt their decision making accordingly. We extend the decision tree modeling to translate video routing in a dynamic cognitive radio network into a decision theory problem. Terminal analysis backward induction is then used to produce our routing scheme, which improves the peak signal-to-noise ratio of the received video. We show through this dissertation that by acknowledging the stochastic property of the cognitive radio networks' environment and constructing strategies using statistical and mathematical tools that deal with such uncertainties, the utilization of these networks will greatly improve.
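The ArgMax distribution can be pictured as the probability that a given channel achieves the largest stability metric among the currently available channels (ArgMin is the analogous probability for the smallest). Below is a Monte Carlo sketch under a hypothetical model of per-channel idle-period lengths; the closed-form distributions derived in the dissertation are not reproduced.

```python
import numpy as np

def argmax_distribution(sample_fns, n_trials=100_000, rng=None):
    """Estimate P(channel k has the largest stability metric) for each channel.

    sample_fns : list of callables, each drawing one stability sample for a channel.
    """
    rng = rng or np.random.default_rng()
    counts = np.zeros(len(sample_fns))
    for _ in range(n_trials):
        draws = [f(rng) for f in sample_fns]
        counts[int(np.argmax(draws))] += 1
    return counts / n_trials

# Hypothetical channels: expected idle time before a primary user returns.
channels = [lambda r: r.exponential(5.0),   # channel 0: mean idle time 5
            lambda r: r.exponential(8.0),   # channel 1: mean idle time 8
            lambda r: r.exponential(2.0)]   # channel 2: mean idle time 2
print(argmax_distribution(channels))        # channel 1 is most often the most stable
```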
- Title
- Safe Control Design for Uncertain Systems
- Creator
- Marvi, Zahra
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
This dissertation investigates the problem of safe control design for systems under model and environmental uncertainty. Reinforcement learning (RL) provides an interactive learning framework in which the optimal controller is sequentially derived based on instantaneous reward. Although powerful, safety considerations are a barrier to the wide deployment of RL algorithms in practice. To overcome this problem, we propose an iterative safe off-policy RL algorithm. The cost function that encodes the designer's objectives is augmented with a control barrier function (CBF) to ensure safety and optimality. The proposed formulation provides look-ahead and proactive safety planning, in which safety is planned and optimized along with performance to minimize intervention with the optimal controller. Extensive safety and stability analysis is provided, and the proposed method is implemented using the off-policy algorithm without requiring complete knowledge of the system dynamics. This line of research is then extended to provide safety and stability guarantees even during the data collection and exploration phases, in which random noisy inputs are applied to the system. However, satisfying the safety of actions when little is known about the system dynamics is a daunting challenge. We present a novel RL scheme that ensures the safety and stability of linear systems during the exploration and exploitation phases. This is obtained by concurrent model learning and control, in which an efficient learning scheme is employed to prescribe the learning behavior. This characteristic is then employed to apply only safe and stabilizing controllers to the system. First, the prescribed errors are employed in a novel adaptive robustified control barrier function (AR-CBF), which guarantees that the states of the system remain in the safe set even when the learning is incomplete. Therefore, the noisy input in the exploratory data collection phase and the optimal controller in the exploitation phase are minimally altered such that the AR-CBF criterion is satisfied and, therefore, safety is guaranteed in both phases. It is shown that under the proposed prescribed RL framework, the model learning error is a vanishing perturbation to the original system. Therefore, a stability guarantee is also provided even during exploration, when noisy random inputs are applied to the system. A learning-enabled, barrier-certified safe controller for systems that operate in a shared and uncertain environment is then presented. A safety-aware loss function is defined and minimized to learn the uncertain and unknown behavior of external agents that affect the safety of the system. The loss function is defined based on the safe set error, instead of the system model error, and is minimized for both current samples and past samples stored in memory to assure a fast and generalizable learning algorithm for approximating the safe set. The proposed model learning and CBF are then integrated to form a learning-enabled zeroing CBF (L-ZCBF), which employs the approximated trajectory information of the external agents provided by the learned model but shrinks the safety boundary in case of an imminent safety violation using instantaneous sensory observations. It is shown that the proposed L-ZCBF assures safety guarantees during learning and even in the face of inaccurate or simplified approximation of external agents, which is crucial in highly interactive environments.
Finally, the cooperative capability of agents in a multi-agent environment is investigated for the sake of safety guarantees. CBFs and information-gap theory are integrated to obtain robust safe controllers for multi-agent systems with different levels of measurement accuracy. A cooperative framework for the construction of CBFs for every two agents is employed to maximize the horizon of uncertainty under which the safety of the overall system is satisfied. The information-gap theory is leveraged to determine the contribution and share of each agent in the construction of CBFs. This results in the highest possible robustness against measurement uncertainty. By employing the proposed approach in constructing CBFs, a higher horizon of uncertainty can be safely tolerated, and even the failure of one agent in gathering accurate local data can be compensated for by cooperation between agents. The effectiveness of the proposed methods is extensively examined in simulation results.
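The role of a control barrier function can be illustrated with the standard CBF safety filter for a control-affine system: the nominal control is minimally modified so that the barrier h(x) stays nonnegative. The scalar-input sketch below solves the one-dimensional projection in closed form; it is a generic CBF filter, not the adaptive robustified (AR-CBF) or learning-enabled (L-ZCBF) variants developed in the dissertation.

```python
def cbf_safety_filter(u_nom, lf_h, lg_h, h, alpha=1.0):
    """Minimally alter a scalar nominal control so the CBF condition holds:
        Lf h + Lg h * u + alpha * h >= 0
    i.e. solve  min (u - u_nom)^2  subject to the constraint above."""
    slack = lf_h + lg_h * u_nom + alpha * h
    if abs(lg_h) < 1e-12 or slack >= 0:
        return u_nom                   # already safe (or u has no effect on the barrier)
    return u_nom - slack / lg_h        # project onto the constraint boundary

# Toy example: 1-D system x' = u, safe set h(x) = x >= 0, so Lf h = 0 and Lg h = 1.
x, u_nominal = 0.2, -3.0               # nominal controller pushes toward the boundary
print(cbf_safety_filter(u_nominal, lf_h=0.0, lg_h=1.0, h=x))   # -> -0.2, slowed down
```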
- Title
- TENSOR LEARNING WITH STRUCTURE, GEOMETRY AND MULTI-MODALITY
- Creator
- Sofuoglu, Seyyid Emre
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
With the advances in sensing and data acquisition technology, it is now possible to collect data from different modalities and sources simultaneously. Most of these data are multi-dimensional in nature and can be represented by multiway arrays known as tensors. For instance, a color image is a third-order tensor defined by two indices for spatial variables and one index for color mode. Some other examples include color video, medical imaging such as EEG and fMRI, spatiotemporal data encountered in urban traffic monitoring, etc.
In the past two decades, tensors have become ubiquitous in signal processing, statistics and computer science. Traditional unsupervised and supervised learning methods developed for one-dimensional signals do not translate well to higher order data structures as they get computationally prohibitive with increasing dimensionalities. Vectorizing high dimensional inputs creates problems in nearly all machine learning tasks due to exponentially increasing dimensionality, distortion of data structure and the difficulty of obtaining a sufficiently large training sample size.
In this thesis, we develop tensor-based approaches to various machine learning tasks. Existing tensor based unsupervised and supervised learning algorithms extend many well-known algorithms, e.g. 2-D component analysis, support vector machines and linear discriminant analysis, with better performance and lower computational and memory costs. Most of these methods rely on Tucker decomposition, which has exponential storage complexity requirements; CANDECOMP-PARAFAC (CP) based methods, which might not have a solution; or Tensor Train (TT) based solutions, which suffer from exponentially increasing ranks. Many tensor based methods have quadratic (w.r.t. the size of data) or higher computational complexity, and similarly, high memory complexity. Moreover, existing tensor based methods are not always designed with the particular structure of the data in mind. Many of the existing methods use purely algebraic measures as their objective which might not capture the local relations within data. Thus, there is a necessity to develop new models with better computational and memory efficiency, with the particular structure of the data and problem in mind. Finally, as tensors represent the data with more faithfulness to the original structure compared to vectorization, they also allow coupling of heterogeneous data sources where the underlying physical relationship is known. Still, most of the current work on coupled tensor decompositions does not explore supervised problems.
In order to address the issues around computational and storage complexity of tensor based machine learning, in Chapter 2 we propose a new tensor train decomposition structure, which is a hybrid between Tucker and Tensor Train decompositions. The proposed structure is used to implement Tensor Train based supervised and unsupervised learning frameworks: linear discriminant analysis (LDA) and graph regularized subspace learning. The algorithm is designed to solve extremal eigenvalue-eigenvector pair computation problems, which can be generalized to many other methods. The supervised framework, Tensor Train Discriminant Analysis (TTDA), is evaluated in a classification task with varying storage complexities with respect to classification accuracy and training time on four different datasets. The unsupervised approach, Graph Regularized TT, is evaluated on a clustering task with respect to clustering quality and training time at various storage complexities. Both frameworks are compared to discriminant analysis algorithms with similar objectives based on Tucker and TT decompositions.
In Chapter 3, we present an unsupervised anomaly detection algorithm for spatiotemporal tensor data. The algorithm models the anomaly detection problem as a low-rank plus sparse tensor decomposition problem, where the normal activity is assumed to be low-rank and the anomalies are assumed to be sparse and temporally continuous. We present an extension of this algorithm, where we utilize a graph regularization term in our objective function to preserve the underlying geometry of the original data. Finally, we propose a computationally efficient implementation of this framework by approximating the nuclear norm using graph total variation minimization. The proposed approach is evaluated on both simulated data with varying levels of anomaly strength, length and number of missing entries in the observed tensor, as well as urban traffic data.
In Chapter 4, we propose a geometric tensor learning framework using product graph structures for the tensor completion problem. Instead of purely algebraic measures such as rank, we use graph smoothness constraints that utilize geometric or topological relations within data. We prove the equivalence of a Cartesian graph structure to a TT-based graph structure under some conditions. We show empirically that introducing such relaxations due to the conditions does not deteriorate the recovery performance. We also outline a fully geometric learning method on product graphs for data completion.
In Chapter 5, we introduce a supervised learning method for heterogeneous data sources such as simultaneous EEG and fMRI. The proposed two-stage method first extracts features taking the coupling across modalities into account and then introduces kernelized support tensor machines for classification. We illustrate the advantages of the proposed method on simulated and real classification tasks with a small number of training data with high dimensionality.
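The Tensor Train representation at the heart of Chapter 2 can be sketched with the standard TT-SVD algorithm, which factors a tensor into a chain of 3-way cores by sequential truncated SVDs. The Tucker/TT hybrid structure and the discriminant or graph-regularized objectives from the thesis are not reproduced here.

```python
import numpy as np

def tt_svd(T, max_rank):
    """Decompose tensor T into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))   # k-th TT core
        M = S[:r, None] * Vt[:r]                              # carry the remainder forward
        if k < len(dims) - 2:
            M = M.reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))              # last core
    return cores

T = np.random.default_rng(2).random((6, 7, 8, 5))
cores = tt_svd(T, max_rank=4)
print([c.shape for c in cores])   # chain of 3-way cores with bounded TT ranks
```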
- Title
- ASSESSMENT OF CROSS-FREQUENCY PHASE-AMPLITUDE COUPLING IN NEURONAL OSCILLATIONS
- Creator
- Munia, Tamanna Tabassum Khan
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Oscillatory activity in the brain has been associated with a wide variety of cognitive processes including decision making, feedback processing, and working memory control. The high temporal resolution provided by electroencephalography (EEG) enables the study of variation of oscillatory power and coupling across time. Various forms of neural synchrony across frequency bands have been suggested as the mechanism underlying neural binding. Recently, a considerable amount of work has focused on phase-amplitude coupling (PAC), a form of cross-frequency coupling where the amplitude of a high-frequency signal is modulated by the phase of low-frequency oscillations.
The existing methods for assessing PAC have certain limitations which can influence the final PAC estimates and the subsequent neuroscientific findings. These limitations include low frequency resolution, narrowband assumption, and inherent requirement of bandpass filtering. These methods are also limited to quantifying univariate PAC and cannot capture inter-areal cross-frequency coupling between different brain regions. Given the availability of multi-channel recordings, a multivariate analysis of phase-amplitude coupling is needed to accurately quantify the coupling across multiple frequencies and brain regions. Moreover, the existing PAC measures are usually stationary in nature, focusing on phase-amplitude modulations within a particular time window or over arbitrary sliding short time windows. Therefore, there is a need for computationally efficient measures that can quantify PAC with a high frequency resolution, track the variation of PAC with time, both in bivariate and multivariate settings, and provide a better insight into the spatially distributed dynamic brain networks across different frequency bands.
In this thesis, we introduce a PAC computation technique that aims to overcome some of these drawbacks and extend it to multi-channel settings for quantifying dynamic cross-frequency coupling in the brain. The main contributions of the thesis are threefold. First, we present a novel time-frequency based PAC (t-f PAC) measure based on a high-resolution complex time-frequency distribution, known as the Reduced Interference Distribution (RID)-Rihaczek. This t-f PAC measure overcomes the drawbacks associated with filtering by extracting instantaneous phase and amplitude components directly from the t-f distribution and thus provides high resolution PAC estimates. Following the introduction of a complex time-frequency-based high resolution PAC measure, we extend this measure to multi-channel settings to quantify the inter-areal PAC across multiple frequency bands and brain regions. We propose a tensor-based representation of multi-channel PAC based on Higher Order Robust PCA (HoRPCA). The proposed method can identify the significantly coupled brain regions along with the frequency bands that are involved in the observed couplings while accurately discarding the non-significant or spurious couplings. Finally, we introduce a matching pursuit based dynamic PAC (MP-dPAC) measure that allows us to compute PAC from time- and frequency-localized atoms that best describe the signal and thus capture the temporal variation of PAC using a data-driven approach. We evaluate the performance of the proposed methods on both synthesized and real EEG data collected during a cognitive control-related error processing study. Based on our results, we posit that the proposed multivariate and dynamic PAC measures provide a better insight into understanding the spatial, spectral, and temporal dynamics of cross-frequency phase-amplitude coupling in the brain.
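For orientation, the classical filter-Hilbert estimate of phase-amplitude coupling (the mean-vector-length measure) is sketched below: the amplitude envelope of the fast band is paired with the phase of the slow band, and the magnitude of their mean complex product measures the coupling. This is exactly the kind of bandpass-filter-based estimator whose drawbacks the thesis's RID-Rihaczek t-f PAC measure is designed to avoid; the frequency bands and synthetic signal are illustrative only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(30, 80)):
    """Mean-vector-length phase-amplitude coupling of a single channel."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic signal: gamma bursts locked to the theta peak, so PAC is nonzero.
fs = 200
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = 0.3 * (1 + theta) * np.sin(2 * np.pi * 50 * t)
print(pac_mvl(theta + gamma + 0.1 * np.random.randn(t.size), fs))
```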
- Title
- Reducing the number of ultrasound array elements with the matrix pencil method
- Creator
- Sales, Kirk L.
- Date
- 2012
- Collection
- Electronic Theses & Dissertations
- Description
-
Phased arrays are diversely applied with some specific areas including biomedical imaging and therapy, non-destructive testing, radar and sonar. In this thesis, the matrix pencil method is employed to reduce the number of elements in a linear ultrasound phased array. The non-iterative, linear method begins with a specified pressure beam pattern, reduces the dimensionality of the problem, then calculates the element locations and apodization of a reduced array. Computer simulations demonstrate a close comparison between the initial array beam pattern and the reduced array beam pattern for four different linear arrays. The number of elements in a broadside-steered linear array is shown to decrease by approximately 50% with the reduced array beam pattern closely approximating the initial array beam pattern in the far-field. While the method returns a slightly tapered spacing between elements, for the arrays considered, replacing the tapered spacing with a suitably-selected uniform spacing provides very little change in the main beam and low-angle side lobes.
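A minimal version of the matrix pencil step can be sketched for a 1-D sequence modeled as a sum of complex exponentials (a linear array's far-field beam pattern, sampled appropriately, has this form): a Hankel data matrix is split into two shifted blocks, the pencil's dominant eigenvalues give the exponential poles (which map to element locations), and a least-squares fit gives the complex amplitudes (the apodization). The array-specific details of the thesis are not reproduced; the test signal is synthetic.

```python
import numpy as np

def matrix_pencil(y, n_modes, pencil=None):
    """Fit y[n] ~ sum_k a_k * z_k**n and return the poles z_k and amplitudes a_k."""
    N = len(y)
    L = pencil or N // 2                              # pencil parameter
    # Hankel data matrix, split into the two shifted blocks Y0 and Y1.
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    # Rank-reduced pseudoinverse keeps only the n_modes dominant singular values.
    U, S, Vt = np.linalg.svd(Y0, full_matrices=False)
    Y0_pinv = Vt[:n_modes].conj().T @ np.diag(1 / S[:n_modes]) @ U[:, :n_modes].conj().T
    lam = np.linalg.eigvals(Y0_pinv @ Y1)
    z = lam[np.argsort(-np.abs(lam))[:n_modes]]       # keep the dominant poles
    V = np.vander(z, N, increasing=True).T            # columns z_k**n, shape (N, n_modes)
    a = np.linalg.lstsq(V, y, rcond=None)[0]          # amplitudes
    return z, a

# Two exponentials recovered from 40 samples.
n = np.arange(40)
y = 1.0 * np.exp(1j * 0.5 * n) + 0.4 * np.exp(1j * 1.3 * n)
z, a = matrix_pencil(y, n_modes=2)
print(np.angle(z), np.abs(a))    # ~[0.5, 1.3] and ~[1.0, 0.4], in some order
```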
- Title
- Network reachability : quantification, verification, troubleshooting, and optimization
- Creator
- Khakpour, Amir Reza
- Date
- 2012
- Collection
- Electronic Theses & Dissertations
- Description
-
Quantifying, verifying, troubleshooting, and optimizing the network reachability is essential for network management and network security monitoring as well as various aspects of network auditing, maintenance, and design. Although attempts to model network reachability have been made, feasible solutions for computing, maintaining and optimally designing network reachability have remained unknown. Network reachability control is very critical because, on one hand, reachability errors can cause...
Show moreQuantifying, verifying, troubleshooting, and optimizing the network reachability is essential for network management and network security monitoring as well as various aspects of network auditing, maintenance, and design. Although attempts to model network reachability have been made, feasible solutions for computing, maintaining and optimally designing network reachability have remained unknown. Network reachability control is very critical because, on one hand, reachability errors can cause network security breaches or service outages, leading to millions of dollars of revenue loss for an enterprise network. On the other hand, network operators suffer from lack of tools that thoroughly examine network access control configurations and audit them to avoid such errors. Besides, finding reachability errors is by no means easy. The access control rules, by which network reachability is restricted, are often very complex and manually troubleshooting them is extremely difficult. Hence, having a tool that finds the reachability errors and fix them automatically can be very useful. Furthermore, flawed network reachability design and deployment can degrade the network performance significantly. Thus, it is crucial to have a tool that designs the network configurations such that they have the least performance impact on the enterprise network.In this dissertation, we first present a network reachability model that considers connectionless and connection-oriented transport protocols, stateless and stateful routers/firewalls, static and dynamic NAT, PAT, IP tunneling, etc. We then propose a suite of algorithms for quantifying reachability based on network configurations (mainly access control lists (ACLs)) as well as solutions for querying network reachability. We further extend our algorithms and data structures for detecting reachability errors, pinpointing faulty access control lists, and fixing them automatically and efficiently. Finally, we propose algorithms to place rules on network devices optimally so that they satisfy the networks central access policies. To this end, we define correctness and performance criteria for rule placement and in turn propose cost-based algorithms with adjustable parameters (for the network operators) to place rules such that the correctness and performance criteria are satisfied.We implemented the algorithms in our network reachability tool called Quarnet and conducted experiments on a university network. Experimental results show that the offline computation of reachability matrices takes a few hours and the online processing of a reachability query takes 75 milliseconds on average. We also examine our reachability error detection and correction algorithms on a few real-life networks to examine their performance and ensure that Quarnet is efficient enough to be practically useful. The results indicate that we can find reachability errors in order of minutes and fix them in order of seconds depending on the size of network and number of ACLs. Finally, we added the rule placement suite of algorithms to Quarnet, which can design a network ACL in based on the network central policies in order of tens of minutes for an enterprise network. We compare it with Purdue ACL placement, the state-of-the-art access policy design technique, and explain its pros and cons.
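A minimal illustration of the querying idea, under simplifying assumptions (first-match ACL semantics, a fixed path of devices, and a flow described by source, destination, and port): end-to-end reachability along a path can be thought of as the intersection of the packet sets permitted by each ACL on the path, and a reachability query reduces to a membership test. The rules, addresses, and helper names below are hypothetical; Quarnet's actual algorithms and data structures are not reproduced here.

```python
# Hypothetical sketch: path reachability as the intersection of per-hop ACL permit sets.
# This only illustrates composing ACLs along a path and querying the result.
from ipaddress import ip_network, ip_address

# Each rule: (source prefix, destination prefix, destination port range, action).
acl_router_a = [
    (ip_network("10.0.0.0/8"), ip_network("192.168.1.0/24"), range(80, 81), "permit"),
    (ip_network("0.0.0.0/0"), ip_network("0.0.0.0/0"), range(0, 65536), "deny"),
]
acl_router_b = [
    (ip_network("10.1.0.0/16"), ip_network("192.168.1.0/24"), range(0, 1024), "permit"),
    (ip_network("0.0.0.0/0"), ip_network("0.0.0.0/0"), range(0, 65536), "deny"),
]

def permitted(acl, src, dst, port):
    """First-match semantics: True iff the first matching rule is a permit."""
    for src_net, dst_net, ports, action in acl:
        if src in src_net and dst in dst_net and port in ports:
            return action == "permit"
    return False  # implicit deny

def reachable(path_acls, src, dst, port):
    """A flow is reachable iff every ACL on the path permits it (set intersection)."""
    return all(permitted(acl, src, dst, port) for acl in path_acls)

print(reachable([acl_router_a, acl_router_b],
                ip_address("10.1.2.3"), ip_address("192.168.1.10"), 80))  # True
print(reachable([acl_router_a, acl_router_b],
                ip_address("10.2.2.3"), ip_address("192.168.1.10"), 80))  # False (denied at B)
```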
- Title
- Hardware algorithms for high-speed packet processing
- Creator
- Norige, Eric
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
The networking industry is facing enormous challenges in scaling devices to support the exponential growth of internet traffic as well as the increasing number of features implemented inside the network. Algorithmic hardware improvements to networking components have largely been neglected due to the ease of leveraging increased clock frequency and compute power and the risks of implementing complex hardware designs. As clock frequency slows its growth, algorithmic solutions become important to fill the gap between current-generation capability and next-generation requirements. This dissertation presents algorithmic solutions to networking problems in three domains: Deep Packet Inspection (DPI), firewall (and other) ruleset compression, and non-cryptographic hashing. The improvements in DPI are two-pronged. The first is in the area of application-level protocol field extraction, which allows security devices to precisely identify packet fields for targeted validity checks. By using counting automata, we achieve precise parsing of non-regular protocols with small, constant per-flow memory requirements, extracting at rates of up to 30 Gbps on real traffic in software while using only 112 bytes of state per flow. The second DPI improvement is on the long-standing regular expression matching problem, where we complete the HFA solution to the DFA state explosion problem with efficient construction algorithms and an optimized memory layout for hardware or software implementation. These methods construct, in seconds, automata too complex for previous methods to build, while being capable of 29 Gbps throughput with an ASIC implementation. Firewall ruleset compression enables more firewall entries to be stored in a fixed-capacity pattern matching engine, and can also be used to reorganize a firewall specification for higher-performance software matching. A novel recursive structure called TUF is given to unify the best-known solutions to this problem and suggest future avenues of attack. These algorithms, with little tuning, achieve a 13.7% improvement in compression on large, real-life classifiers, and can achieve the same results as existing algorithms while running 20 times faster. Finally, non-cryptographic hash functions can be used for anything from hash tables that track network flows to packet sampling for traffic characterization. We give a novel approach to generating hardware hash functions between the extremes of expensive cryptographic hash functions and low-quality linear hash functions. To evaluate these mid-range hash functions properly, we develop new evaluation methods that better distinguish non-cryptographic hash function quality. The hash functions described here achieve low-latency, wide hashing with good avalanche and universality properties at a much lower cost than existing solutions.
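The avalanche property mentioned above can be checked empirically: flipping a single input bit should flip each output bit with probability close to 1/2. Below is a minimal sketch of such a test using a stand-in 32-bit integer mixer; the mixer and all constants are hypothetical and are not the dissertation's hardware hash constructions or its evaluation methodology.

```python
# Hypothetical avalanche test for a non-cryptographic hash function.
# mix32 is a stand-in; the thesis's hardware hash designs differ.
import random

def mix32(x: int) -> int:
    """Simple stand-in 32-bit integer mixer (not the thesis's construction)."""
    x &= 0xFFFFFFFF
    x ^= x >> 16
    x = (x * 0x7FEB352D) & 0xFFFFFFFF
    x ^= x >> 15
    x = (x * 0x846CA68B) & 0xFFFFFFFF
    x ^= x >> 16
    return x

def avalanche_matrix(h, in_bits=32, out_bits=32, trials=2000):
    """counts[i][j]: fraction of trials in which flipping input bit i flips output bit j."""
    counts = [[0] * out_bits for _ in range(in_bits)]
    for _ in range(trials):
        x = random.getrandbits(in_bits)
        hx = h(x)
        for i in range(in_bits):
            diff = hx ^ h(x ^ (1 << i))
            for j in range(out_bits):
                counts[i][j] += (diff >> j) & 1
    return [[c / trials for c in row] for row in counts]

probs = avalanche_matrix(mix32)
worst = max(abs(p - 0.5) for row in probs for p in row)
print(f"worst-case deviation from 0.5: {worst:.3f}")  # near 0 indicates good avalanche
```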
- Title
- Signal Processing Based Distortion Mitigation in Interferometric Radar Angular Velocity Estimation
- Creator
- Klinefelter, Eric
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
-
Interferometric angular velocity estimation is a relatively recent radar technique that uses a pair of widely spaced antenna elements and a correlation receiver to directly measure the angular velocity of a target. Traditional radar systems measure range, radial velocity (Doppler), and angle, while angular velocity is typically derived as the time-rate of change of the angle measurements. The noise associated with the derived angular velocity estimate is statistically correlated with the angle measurements, and thus provides no additional information to traditional state-space trackers. Interferometric angular velocity estimation, on the other hand, provides an independent measurement, thus forming a basis in R^2 for both position and velocity. While promising results have been presented for single-target interferometric angular velocity estimation, a known issue arises when multiple targets are present. The ideal interferometric response with multiple targets would contain only the mixing products between like targets across the antenna responses; instead, mixing products between all targets are generated, resulting in unwanted 'cross-terms' or intermodulation distortion. To date, various hardware-based methods have been presented that are effective, though they tend to require an increased number of antenna elements, a larger physical system baseline, or signals with wide bandwidths. Presented here are novel signal-processing methods for mitigating distortion in interferometric angular velocity estimation, which can be performed with only a single antenna pair and traditional continuous-wave or frequency-modulated continuous-wave signals. In this work, two classes of distortion mitigation methods are described: model-based and response decomposition. Model-based methods use a learned or analytic model with traditional nonlinear optimization techniques to arrive at angular velocity estimates based on the complete interferometric signal response. Response decomposition methods, on the other hand, aim to decompose the individual antenna responses into separate responses pertaining to each target, and then associate like targets between antenna responses. By performing the correlation in this manner, the cross-terms that typically corrupt the interferometric response are mitigated. It was found that, due to the quadratic scaling of the number of distortion terms, model-based methods become exceedingly difficult as the number of targets grows large. The response decomposition method is therefore selected, and results on measured radar signals are presented. For this, a custom single-board millimeter-wave interferometric radar was developed, and angular velocity measurements were performed in an enclosed environment containing two robotic targets. A set of experiments was designed to highlight easy, medium, and difficult cases for the response decomposition algorithm, and results are presented herein.
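To see where the cross-terms come from, consider a toy two-target case: each antenna response is a sum of complex exponentials, and the correlation receiver forms the conjugate product of the two responses, producing the desired like-target beat terms plus unwanted beats between unlike targets. The frequencies and amplitudes below are arbitrary stand-ins, not the thesis's radar signal model.

```python
# Toy illustration of cross-terms in a two-target interferometric correlation.
import numpy as np

fs, T = 1000.0, 1.0
t = np.arange(0, T, 1 / fs)
w1, w2 = 2 * np.pi * 11.0, 2 * np.pi * 29.0   # per-target phase rates at antenna 1 (made up)
d1, d2 = 2 * np.pi * 3.0, 2 * np.pi * 7.0     # interferometric phase-rate offsets at antenna 2

# Antenna responses: sum over targets of complex exponentials.
x1 = np.exp(1j * w1 * t) + np.exp(1j * w2 * t)
x2 = np.exp(1j * (w1 + d1) * t) + np.exp(1j * (w2 + d2) * t)

# Correlation receiver: conjugate product of the two antenna responses.
y = x2 * np.conj(x1)

# Like-target terms beat at d1 and d2; cross-terms beat at (w1 + d1 - w2) and (w2 + d2 - w1).
spec = np.abs(np.fft.fft(y)) / len(y)
freqs = np.fft.fftfreq(len(y), 1 / fs)
peaks = sorted(freqs[spec > 0.4 * spec.max()])
print("beat frequencies (Hz):", peaks)  # [-15, 3, 7, 25]: 3 and 7 desired, the rest cross-terms
```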
- Title
- Novel Depth Representations for Depth Completion with Application in 3D Object Detection
- Creator
- Imran, Saif Muhammad
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
-
Depth completion refers to interpolating a dense, regular depth grid from sparse and irregularly sampled depth values, often guided by high-resolution color imagery. The primary goal of depth completion is to estimate depth: in practice, methods are trained by minimizing an error between predicted dense depth and ground-truth depth, and are evaluated by how well they minimize this error. Here we identify a second goal, which is to avoid smearing depth across depth discontinuities. This second goal is important because it can improve downstream applications of depth completion such as object detection and pose estimation. However, we also show that the goal of minimizing error can conflict with the goal of eliminating depth smearing. In this thesis, we propose two novel depth representations that can encode depth discontinuities across object surfaces by allowing multiple depth estimates in the spatial domain. To learn these new representations, we propose carefully designed loss functions and show their effectiveness in deep neural network learning. We show how our representations can avoid inter-object depth mixing and also surpass state-of-the-art depth completion metrics. The quality of ground-truth depth in real-world depth completion problems is another key challenge for learning and for accurate evaluation of methods. Ground-truth depth created from semi-automatic methods suffers from sparse sampling and errors at object boundaries. We show that the combination of these errors and the commonly used evaluation measure has promoted solutions that mix depths across boundaries in current methods. The thesis proposes alternative depth completion performance measures that reduce the preference for mixed depths and promote sharp boundaries. The thesis also investigates whether additional points from depth completion methods can help in a challenging, high-level perception problem: 3D object detection. It shows the effect of different depth noise originating from depth estimates on detection performance and proposes effective ways to reduce noise in the estimates and overcome architecture limitations. The method is demonstrated on both real-world and synthetic datasets.
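The stated conflict between minimizing error and avoiding smearing can be seen with a one-pixel thought experiment: at a boundary pixel that is equally likely to belong to a foreground surface at 2 m or a background surface at 10 m, the RMSE-optimal single prediction is the mean (6 m), a depth occupied by neither surface. The numbers below are illustrative, not taken from the thesis.

```python
# Illustration: at an ambiguous boundary pixel, minimizing squared error prefers a
# "smeared" depth between the two surfaces over committing to either one.
import numpy as np

fg, bg = 2.0, 10.0                 # hypothetical foreground / background depths (meters)
truths = np.array([fg, bg])        # the pixel belongs to fg or bg with equal probability

def rmse(pred):
    return np.sqrt(np.mean((truths - pred) ** 2))

print(f"predict foreground ({fg} m): RMSE = {rmse(fg):.2f} m")                 # 5.66 m
print(f"predict background ({bg} m): RMSE = {rmse(bg):.2f} m")                 # 5.66 m
print(f"predict the mean ({np.mean(truths)} m): RMSE = {rmse(np.mean(truths)):.2f} m")  # 4.00 m
# The mean wins on RMSE but places depth where no surface exists, which is exactly
# the smearing that hurts downstream 3D object detection.
```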
- Title
- LIDAR AND CAMERA CALIBRATION USING A MOUNTED SPHERE
- Creator
- Li, Jiajia
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
-
Extrinsic calibration between lidar and camera sensors is needed for multi-modal sensor data fusion. However, obtaining precise extrinsic calibration can be tedious, computationally expensive, or involve elaborate apparatus. This thesis proposes a simple, fast, and robust method for performing extrinsic calibration between a camera and a lidar. The only required calibration target is a hand-held colored sphere mounted on a whiteboard. Convolutional neural networks are developed to automatically localize the sphere relative to the camera and the lidar. Then, using the localization covariance models, the relative pose between the camera and the lidar is derived. To evaluate the accuracy of our method, we record image and lidar data of a sphere at a set of known grid positions using two rails mounted on a wall. Accurate calibration results are demonstrated by projecting the grid centers into the camera image plane and measuring the error between these points and the hand-labeled sphere centers.
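The evaluation step described above amounts to applying the estimated extrinsic transform and the camera intrinsics to 3D points and comparing against hand-labeled pixels. The following sketch uses made-up intrinsics, extrinsics, points, and annotations; the thesis's calibration pipeline and covariance models are not shown.

```python
# Hypothetical reprojection check: transform lidar-frame points into the camera frame
# with estimated extrinsics (R, t), project with intrinsics K, and measure pixel error.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # made-up camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                           # made-up extrinsic rotation (lidar -> camera)
t = np.array([0.05, -0.02, 0.10])       # made-up extrinsic translation (meters)

def project(points_lidar):
    """Project Nx3 lidar-frame points into pixel coordinates."""
    pts_cam = points_lidar @ R.T + t    # rigid transform into the camera frame
    uvw = pts_cam @ K.T                 # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]     # perspective division -> (u, v)

sphere_centers_lidar = np.array([[0.5, 0.1, 4.0],
                                 [-0.3, 0.0, 5.0]])   # hypothetical grid positions
hand_labeled_px = np.array([[430.2, 256.1],
                            [280.4, 236.9]])          # hypothetical annotations

err = np.linalg.norm(project(sphere_centers_lidar) - hand_labeled_px, axis=1)
print("reprojection error (pixels):", np.round(err, 2))
```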
- Title
- Efficient and secure system design in wireless communications
- Creator
- Song, Tianlong
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
Efficient and secure information transmission lies at the core of wireless system design and networking. Compared with its wired counterpart, in wireless communications the total available spectrum has to be shared by different services. Moreover, wireless transmission is more vulnerable to unauthorized detection, eavesdropping, and hostile jamming due to the lack of a protective physical boundary. Today, the two most representative highly efficient communication systems are CDMA (used in 3G) and OFDM (used in 4G), with OFDM regarded as the more efficient of the two. This dissertation focuses on two topics: (1) exploring more spectrally efficient system designs based on the 4G OFDM scheme, and (2) investigating robust wireless system design and conducting capacity analysis under different jamming scenarios. The main results are outlined as follows. First, we develop two spectrally efficient OFDM-based multi-carrier transmission schemes: one with message-driven idle subcarriers (MC-MDIS), and the other with message-driven strengthened subcarriers (MC-MDSS). The basic idea in MC-MDIS is to carry part of the information, named carrier bits, through idle-subcarrier selection while transmitting the ordinary bits regularly on all the other subcarriers. When the number of subcarriers is much larger than the adopted constellation size, higher spectral and power efficiency can be achieved compared with OFDM. In MC-MDSS, the idle subcarriers are replaced by strengthened ones, which, unlike idle ones, can carry both carrier bits and ordinary bits. Therefore, MC-MDSS achieves even higher spectral efficiency than MC-MDIS. Second, we consider jamming-resistant OFDM system design under full-band disguised jamming, where the jamming symbols are taken from the same constellation as the information symbols on each subcarrier. It is shown that, due to the symmetry between the authorized signal and the jamming, the BER of the traditional OFDM system is lower bounded by a modulation-specific constant. We develop an optimal precoding scheme that minimizes the BER of OFDM systems under full-band disguised jamming. It is shown that the most efficient way to combat full-band disguised jamming is to concentrate the total available power and distribute it uniformly over a particular number of subcarriers rather than the entire spectrum. The precoding scheme is further randomized to reinforce the system's jamming resistance. Third, we consider jamming mitigation for CDMA systems under disguised jamming, where the jammer generates a fake signal using the same spreading code, constellation, and pulse-shaping filter as the authorized signal. Again, due to the symmetry between the authorized signal and the jamming, the receiver cannot distinguish the authorized signal from the jamming, leading to complete communication failure. In this research, instead of using conventional scrambling codes, we apply the Advanced Encryption Standard (AES) to generate security-enhanced scrambling codes. Theoretical analysis shows that the capacity of conventional CDMA systems without secure scrambling under disguised jamming is actually zero, while the capacity can be increased significantly by secure scrambling. Finally, we consider a game between a power-limited authorized user and a power-limited jammer, who operate independently over the same spectrum consisting of multiple bands. The strategic decision-making is modeled as a two-party zero-sum game, where the payoff function is the capacity that can be achieved by the authorized user in the presence of the jammer. We first investigate the game under AWGN channels. It is found that, whether the authorized user aims to maximize its capacity or the jammer aims to minimize it, the best strategy is to distribute the power uniformly over all the available spectrum. We then consider fading channels, characterize the dynamic relationship between the optimal signal power allocation and the optimal jamming power allocation, and propose an efficient two-step water-pouring algorithm to compute them.
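For the fading-channel case, the classic single-user water-pouring (water-filling) allocation gives a feel for what such algorithms build on: power is poured over the inverse channel gains up to a common water level chosen to meet the power budget. The sketch below is the standard textbook procedure with made-up gains, not the thesis's two-step game-theoretic algorithm, which also accounts for the jammer's allocation.

```python
# Standard water-filling power allocation over parallel channels (illustrative only).
import numpy as np

def water_filling(gains, total_power, noise=1.0):
    """Allocate total_power over channels with power gains `gains` under AWGN `noise`."""
    inv = noise / np.asarray(gains, dtype=float)   # per-channel "floor" heights 1/g_k
    # Bisect on the water level mu so that sum(max(mu - inv, 0)) == total_power.
    lo, hi = inv.min(), inv.max() + total_power
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - inv, 0.0)

gains = [2.0, 1.0, 0.25, 0.1]              # hypothetical per-band channel gains
p = water_filling(gains, total_power=4.0)
rates = np.log2(1.0 + np.array(gains) * p)
print("power per band:", np.round(p, 3), " sum =", round(p.sum(), 3))
print("capacity (bits/s/Hz):", round(rates.sum(), 3))
```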
- Title
- High-dimensional learning from random projections of data through regularization and diversification
- Creator
- Aghagolzadeh, Mohammad
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
Random signal measurement, in the form of random projections of signal vectors, extends the traditional point-wise and periodic schemes for signal sampling. In particular, the well-known problem of sensing sparse signals from linear measurements, also known as Compressed Sensing (CS), has promoted the utility of random projections. Meanwhile, many signal processing and learning problems that involve parametric estimation do not include sparsity constraints in their original forms. With the increasing popularity of random measurements, it is crucial to study the generic estimation performance under the random measurement model. In this thesis, we consider two specific learning problems (named below) and present two generic approaches for improving estimation accuracy: (1) adding relevant constraints to the parameter vectors, and (2) diversifying the random measurements to achieve fast-decaying tail bounds for the empirical risk function. The first problem we consider is Dictionary Learning (DL). Dictionaries are extensions of vector bases that are specifically tailored for sparse signal representation. DL has become increasingly popular for sparse modeling of natural images as well as sound and biological signals, to name a few. Empirical studies have shown that typical DL algorithms for imaging applications are relatively robust with respect to missing pixels in the training data. However, DL from random projections of data corresponds to an ill-posed problem and is not well studied. Existing efforts are limited to learning structured dictionaries, or dictionaries for structured sparse representations, to make the problem tractable. The main motivation for considering this problem is to create an adaptive framework for CS of signals that are not sparse in the signal domain. In fact, this problem has been referred to as 'blind CS', since the optimal basis is subject to estimation during CS recovery. Our initial approach, similar to some of the existing efforts, involves adding structural constraints on the dictionary to incorporate sparse and autoregressive models. More importantly, our results and analysis reveal that DL from random projections of data, in its unconstrained form, can still be accurate provided the measurements satisfy the diversity constraints defined later. The second problem that we consider is high-dimensional signal classification. Prior efforts have shown that projecting high-dimensional and redundant signal vectors onto random low-dimensional subspaces is an efficient alternative to traditional feature extraction tools such as principal component analysis. Hence, aside from the CS application, random measurements present an efficient sampling method for learning classifiers, eliminating the need to record and process high-dimensional signals only to discard most of the recorded data during feature extraction. We work with Support Vector Machine (SVM) classifiers that are learned in the high-dimensional ambient signal space using random projections of the training data. Our results indicate that classifier accuracy can be improved significantly by diversifying the random measurements.
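As a rough illustration of the classification setting only, the sketch below trains a linear SVM directly on random projections of synthetic high-dimensional data and shows how accuracy varies with the number of measurements. This is a simplification: the thesis instead learns the classifier in the ambient signal space from projected training data and studies measurement diversification, neither of which is shown here. The dataset parameters are arbitrary.

```python
# Rough illustration: a linear SVM trained on random projections of high-dimensional data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import LinearSVC

# Synthetic high-dimensional, redundant signals with few informative directions.
X, y = make_classification(n_samples=2000, n_features=2000, n_informative=20,
                           n_redundant=200, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for m in (20, 100, 400):                      # number of random measurements per signal
    proj = GaussianRandomProjection(n_components=m, random_state=0)
    Z_tr, Z_te = proj.fit_transform(X_tr), proj.transform(X_te)
    clf = LinearSVC(max_iter=5000).fit(Z_tr, y_tr)
    print(f"m = {m:4d} random projections -> test accuracy {clf.score(Z_te, y_te):.3f}")
```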