Search results (1 - 20 of 30)
- Title
- Unconstrained 3D face reconstruction from photo collections
- Creator
- Roth, Joseph (Software engineer)
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
- This thesis presents a novel approach for 3D face reconstruction from unconstrained photo collections. An unconstrained photo collection is a set of face images captured under an unknown and diverse variation of poses, expressions, and illuminations. The output of the proposed algorithm is a true 3D face surface model represented as a watertight triangulated surface with albedo data, colloquially referred to as texture information. Reconstructing a 3D understanding of a face based on 2D input is a long-standing computer vision problem. Traditional photometric stereo-based reconstruction techniques work on aligned 2D images and produce a 2.5D depth map reconstruction. We extend face reconstruction to work with a true 3D model, allowing us to enjoy the benefits of using images from all poses, up to and including profiles. To use a 3D model, we propose a novel normal field-based Laplace editing technique which allows us to deform a triangulated mesh to match the observed surface normals. Unlike prior work that requires large photo collections, we formulate an approach that adapts to photo collections with few images of potentially poor quality. We achieve this by incorporating prior knowledge about face shape: a 3D Morphable Model is fitted to form a personalized template before a novel analysis-by-synthesis photometric stereo formulation completes the fine face details. A structural similarity-based quality measure allows evaluation in the absence of ground truth 3D scans. Superior large-scale experimental results are reported on Internet, synthetic, and personal photo collections.
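The photometric stereo step referenced above can be made concrete with a minimal sketch of classic Lambertian photometric stereo: given aligned images under known lighting directions, per-pixel albedo and normals are recovered by least squares. This is an illustrative simplification, not the thesis's analysis-by-synthesis or Laplace-editing formulation, and the function name and synthetic example are hypothetical.

```python
# Illustrative sketch only: classic Lambertian photometric stereo with known,
# calibrated lighting directions and aligned images.
import numpy as np

def photometric_stereo(images, light_dirs):
    """Estimate per-pixel albedo and surface normals.

    images:     (k, h, w) stack of grayscale intensities under k lights
    light_dirs: (k, 3) unit lighting directions
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # (k, h*w)
    # Solve L @ G = I in the least-squares sense; G = albedo * normal
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 1e-8, G / np.maximum(albedo, 1e-8), 0.0)
    return albedo.reshape(h, w), normals.T.reshape(h, w, 3)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L = rng.uniform(0.2, 1.0, size=(6, 3))               # lights above the surface
    L /= np.linalg.norm(L, axis=1, keepdims=True)
    n_true = np.array([0.0, 0.0, 1.0])
    imgs = (L @ n_true).reshape(6, 1, 1) * np.ones((6, 4, 4))
    rho, n = photometric_stereo(imgs, L)
    print(rho[0, 0], n[0, 0])                            # ~1.0 and ~[0, 0, 1]
```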
- Title
- Tracking single-units in chronic neural recordings for brain machine interface applications
- Creator
- Eleryan, Ahmed Ibrahim
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
- Ensemble recording of multiple single-unit activity has been used to study the mechanisms of neural population coding over prolonged periods of time, and to perform reliable neural decoding in neuroprosthetic motor control applications. However, there are still many challenges to achieving reliable, stable single-unit recordings. One primary challenge is the variability in spike waveform features and firing characteristics of single units recorded using chronically implanted microelectrodes, making it challenging to ascertain the identity of the recorded neurons across days. In this study, I present a fast and efficient algorithm that tracks multiple single units recorded in non-human primates performing brain control of a robotic limb, based on features extracted from units' average waveforms and interspike interval histograms. The algorithm requires a relatively short recording duration to perform the analysis and can be applied at the start of each recording session without requiring the subject to be engaged in a behavioral task. The algorithm achieves a classification accuracy of up to 90% compared to manual tracking. I also explore using the algorithm to develop an automated technique for unit selection to perform reliable decoding of movement parameters from neural activity.
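A minimal sketch of the kind of feature-based matching the abstract describes, assuming each unit is summarized by an average waveform and an interspike-interval histogram; the equal weighting and the acceptance threshold are hypothetical choices, not the thesis's tuned algorithm.

```python
# Minimal sketch, not the thesis's exact algorithm: match units across two
# sessions by combining similarity of average waveforms and ISI histograms.
import numpy as np

def _corr(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def match_units(waves_day1, isi_day1, waves_day2, isi_day2, thresh=0.8):
    """Return (i, j) pairs declared to be the same unit across two sessions."""
    pairs = []
    for i, (w1, h1) in enumerate(zip(waves_day1, isi_day1)):
        scores = [0.5 * _corr(w1, w2) + 0.5 * _corr(h1, h2)
                  for w2, h2 in zip(waves_day2, isi_day2)]
        j = int(np.argmax(scores))
        if scores[j] >= thresh:          # 0.8 is an arbitrary illustrative threshold
            pairs.append((i, j))
    return pairs
```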
- Title
- Theory, synthesis and implementation of current-mode CMOS piecewise-linear circuits using margin propagation
- Creator
- Gu, Ming
- Date
- 2012
- Collection
- Electronic Theses & Dissertations
- Description
- Achieving high energy efficiency is a key requirement for many emerging smart sensors and portable computing systems. While digital signal processing (DSP) has been the de facto technique for implementing ultra-low-power systems, analog signal processing (ASP) provides an attractive alternative approach that can achieve not only high energy efficiency but also high computational density. Conventional ASP techniques are based on a top-down design approach, where proven mathematical principles and related algorithms are mapped and emulated using computational primitives inherent in the device physics. An example is the translinear principle, the state-of-the-art ASP technique, which uses exponential current-to-voltage characteristics for designing ultra-low-power analog processors. However, elegant formulations could result from a bottom-up approach where device- and bias-independent computational primitives (e.g. current and charge conservation principles) are used for designing "approximate" analog signal processors. The hypothesis of this work is that many signal processing algorithms exhibit an inherent calibration ability due to which their performance remains unaffected by the use of "approximate" analog computing techniques. In this research, we investigate the theory, synthesis and implementation of high-performance analog processors using a novel piecewise-linear (PWL) approximation algorithm called margin propagation (MP). The MP principle utilizes only basic conservation laws of physical quantities (current, charge, mass, energy) for computing and therefore is scalable across devices (silicon, MEMS, microfluidics). However, there are additional advantages of MP-based processors when implemented using CMOS current-mode circuits, which include: 1) the operation of the MP processor requires only addition, subtraction and threshold operations and hence is independent of transistor biasing (weak, moderate and strong inversion) and robust to variations in environmental conditions (e.g. temperature); and 2) improved dynamic range and faster convergence as compared to translinear implementations. We verify our hypothesis using two analog signal processing applications: (a) design of high-performance analog low-density parity check (LDPC) decoders for applications in sensor networks; and (b) design of ultra-low-power analog support vector machines (SVMs) for smart sensors. Our results demonstrate that an algorithmic framework for designing margin propagation (MP) based LDPC decoders can be used to trade off BER performance against energy efficiency, making the design attractive for applications with adaptive energy-BER constraints. We have verified this trade-off using an analog current-mode implementation of an MP-based (32,8) LDPC decoder. Measured results from prototypes fabricated in a 0.5 μm CMOS process show that the BER performance of the MP-based decoder outperforms a benchmark state-of-the-art min-sum decoder at SNR levels greater than 3.5 dB and can achieve energy efficiencies greater than 100 pJ/bit at a throughput of 12.8 Mbps. In the second part of this study, the MP principle is used for designing an energy-scalable support vector machine (SVM) whose power and speed requirements can be configured dynamically without any degradation in performance. We have verified the energy-scaling property using a current-mode implementation of an SVM operating with 8-dimensional feature vectors and 18 support vectors.
The prototype fabricated in a 0.5 μm CMOS process integrates an array of floating-gate transistors that serve as storage for up to 2052 SVM parameters. The SVM prototype also integrates novel circuits designed for interfacing with an external digital processor. These include a novel current-input, current-output logarithmic amplifier circuit that can achieve a dynamic range of 120 dB while consuming nanowatts of power. Another is a varactor-based, temperature-compensated floating-gate memory that demonstrates a superior programming range compared to other temperature-compensated floating-gate memories.
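For readers unfamiliar with margin propagation, the sketch below shows the basic MP computation in software: finding the margin z that satisfies the piecewise-linear constraint sum_i max(0, x_i - z) = gamma, a PWL surrogate for log-sum-exp. The bisection solver and example values are illustrative assumptions; the thesis realizes this constraint with current-mode circuits, not software.

```python
# Minimal sketch of the margin propagation (MP) computation: find z such that
# sum_i max(0, x_i - z) = gamma, a piecewise-linear surrogate for log-sum-exp.
import numpy as np

def margin_propagation(x, gamma, iters=60):
    x = np.asarray(x, dtype=float)
    lo, hi = x.min() - gamma, x.max()          # the margin z is bracketed here
    for _ in range(iters):
        z = 0.5 * (lo + hi)
        if np.sum(np.maximum(0.0, x - z)) > gamma:
            lo = z                             # constraint too large: raise z
        else:
            hi = z
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    scores = [1.0, 2.0, 5.0]
    print(margin_propagation(scores, gamma=0.1))   # close to max(scores)
    print(np.log(np.sum(np.exp(scores))))          # log-sum-exp for comparison
```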
- Title
- TENSOR LEARNING WITH STRUCTURE, GEOMETRY AND MULTI-MODALITY
- Creator
- Sofuoglu, Seyyid Emre
- Date
- 2022
- Collection
- Electronic Theses & Dissertations
- Description
- With the advances in sensing and data acquisition technology, it is now possible to collect data from different modalities and sources simultaneously. Most of these data are multi-dimensional in nature and can be represented by multiway arrays known as tensors. For instance, a color image is a third-order tensor defined by two indices for spatial variables and one index for the color mode. Some other examples include color video, medical imaging such as EEG and fMRI, spatiotemporal data encountered in urban traffic monitoring, etc. In the past two decades, tensors have become ubiquitous in signal processing, statistics and computer science. Traditional unsupervised and supervised learning methods developed for one-dimensional signals do not translate well to higher-order data structures as they become computationally prohibitive with increasing dimensionality. Vectorizing high-dimensional inputs creates problems in nearly all machine learning tasks due to exponentially increasing dimensionality, distortion of data structure and the difficulty of obtaining a sufficiently large training sample size. In this thesis, we develop tensor-based approaches to various machine learning tasks. Existing tensor-based unsupervised and supervised learning algorithms extend many well-known algorithms, e.g. 2-D component analysis, support vector machines and linear discriminant analysis, with better performance and lower computational and memory costs. Most of these methods rely on Tucker decomposition, which has exponential storage complexity requirements; CANDECOMP-PARAFAC (CP) based methods, which might not have a solution; or Tensor Train (TT) based solutions, which suffer from exponentially increasing ranks. Many tensor-based methods have quadratic (w.r.t. the size of data) or higher computational complexity, and similarly high memory complexity. Moreover, existing tensor-based methods are not always designed with the particular structure of the data in mind. Many of the existing methods use purely algebraic measures as their objective, which might not capture the local relations within data. Thus, there is a necessity to develop new models with better computational and memory efficiency, designed with the particular structure of the data and problem in mind. Finally, as tensors represent the data with more faithfulness to the original structure compared to vectorization, they also allow coupling of heterogeneous data sources where the underlying physical relationship is known. Still, most of the current work on coupled tensor decompositions does not explore supervised problems. In order to address the issues around the computational and storage complexity of tensor-based machine learning, in Chapter 2 we propose a new tensor train decomposition structure, which is a hybrid between Tucker and Tensor Train decompositions. The proposed structure is used to implement Tensor Train based supervised and unsupervised learning frameworks: linear discriminant analysis (LDA) and graph regularized subspace learning. The algorithm is designed to solve extremal eigenvalue-eigenvector pair computation problems, which can be generalized to many other methods. The supervised framework, Tensor Train Discriminant Analysis (TTDA), is evaluated in a classification task with varying storage complexities with respect to classification accuracy and training time on four different datasets.
The unsupervised approach, Graph Regularized TT, is evaluated on a clustering task with respect to clustering quality and training time at various storage complexities. Both frameworks are compared to discriminant analysis algorithms with similar objectives based on Tucker and TT decompositions. In Chapter 3, we present an unsupervised anomaly detection algorithm for spatiotemporal tensor data. The algorithm models the anomaly detection problem as a low-rank plus sparse tensor decomposition problem, where the normal activity is assumed to be low-rank and the anomalies are assumed to be sparse and temporally continuous. We present an extension of this algorithm, where we utilize a graph regularization term in our objective function to preserve the underlying geometry of the original data. Finally, we propose a computationally efficient implementation of this framework by approximating the nuclear norm using graph total variation minimization. The proposed approach is evaluated on simulated data with varying levels of anomaly strength, length and number of missing entries in the observed tensor, as well as on urban traffic data. In Chapter 4, we propose a geometric tensor learning framework using product graph structures for the tensor completion problem. Instead of purely algebraic measures such as rank, we use graph smoothness constraints that utilize geometric or topological relations within data. We prove the equivalence of a Cartesian graph structure to a TT-based graph structure under some conditions. We show empirically that the relaxations introduced by these conditions do not deteriorate the recovery performance. We also outline a fully geometric learning method on product graphs for data completion. In Chapter 5, we introduce a supervised learning method for heterogeneous data sources such as simultaneous EEG and fMRI. The proposed two-stage method first extracts features taking the coupling across modalities into account and then introduces kernelized support tensor machines for classification. We illustrate the advantages of the proposed method on simulated and real classification tasks with a small number of high-dimensional training samples.
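As background for the Tensor Train structure discussed above, here is a plain TT-SVD factorization sketch; it is not the hybrid Tucker/TT structure proposed in Chapter 2, and the rank cap is an arbitrary illustrative choice.

```python
# Illustrative sketch of a basic TT-SVD factorization via sequential SVDs.
import numpy as np

def tt_svd(tensor, max_rank=8):
    """Decompose an n-way array into TT cores G_k of shape (r_{k-1}, n_k, r_k),
    with r_0 = r_d = 1, truncating ranks at max_rank."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, int(np.sum(s > 1e-12)))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

if __name__ == "__main__":
    X = np.random.default_rng(0).normal(size=(4, 5, 6, 3))
    cores = tt_svd(X, max_rank=30)
    # Reassemble the cores to check the factorization
    Y = cores[0].reshape(X.shape[0], -1)
    for G in cores[1:]:
        Y = (Y @ G.reshape(G.shape[0], -1)).reshape(-1, G.shape[2])
    print(np.allclose(Y.reshape(X.shape), X))   # True when ranks are not truncated
```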
- Title
- Stochastic modeling of routing protocols for cognitive radio networks
- Creator
- Soltani, Soroor
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
- Cognitive radios are expected to revolutionize wireless networking because of their ability to sense, manage and share the mobile available spectrum. Efficient utilization of the available spectrum could be significantly improved by incorporating different cognitive radio based networks. Challenges are involved in utilizing cognitive radios in a network, most of which arise from the dynamic nature of available spectrum that is not present in traditional wireless networks. The set of available spectrum blocks (channels) changes randomly with the arrival and departure of the users licensed to a specific spectrum band. These users are known as primary users. If a band is used by a primary user, the cognitive radio alters its transmission power level or modulation scheme to change its transmission range and switches to another channel. In traditional wireless networks, a link is stable if it is less prone to interference. In cognitive radio networks, however, a link that is interference free might break due to the arrival of its primary user. Therefore, link stability forms a stochastic process with OFF and ON states; ON, if the primary user is absent. Evidently, traditional network protocols fail in this environment. New sets of protocols are needed in each layer to cope with the stochastic dynamics of cognitive radio networks. In this dissertation we present a comprehensive stochastic framework and a decision theory based model for the problem of routing packets from a source to a destination in a cognitive radio network. We begin by introducing two probability distributions called ArgMax and ArgMin for probabilistic channel selection mechanisms, routing, and MAC protocols. The ArgMax probability distribution locates the most stable link from a set of available links. Conversely, ArgMin identifies the least stable link. ArgMax and ArgMin together provide valuable information on the diversity of the stability of available links in a spectrum band. Next, considering the stochastic arrival of primary users, we model the transition of packets from one hop to the next by a semi-Markov process and develop a Primary Spread Aware Routing Protocol (PSARP) that learns the dynamics of the environment and adapts its routing decisions accordingly. Further, we use a decision theory framework. A utility function is designed to capture the effect of spectrum measurement, fluctuation of bandwidth availability and path quality. A node cognitively decides its best candidate among its neighbors by utilizing a decision tree. Each branch of the tree is quantified by the utility function and a posterior probability distribution, constructed using the ArgMax probability distribution, which predicts the suitability of available neighbors. In DTCR (Decision Tree Cognitive Routing), nodes learn their operational environment and adapt their decision making accordingly. We extend the decision tree modeling to translate video routing in a dynamic cognitive radio network into a decision theory problem. Then terminal analysis backward induction is used to produce our routing scheme, which improves the peak signal-to-noise ratio of the received video. We show through this dissertation that by acknowledging the stochastic properties of the cognitive radio network environment and constructing strategies using statistical and mathematical tools that deal with such uncertainties, the utilization of these networks will greatly improve.
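The ArgMax distribution described above can be illustrated with a simple Monte Carlo estimate of the probability that each channel offers the longest primary-user idle time; the exponential idle-time model and the rates below are assumptions for illustration, not the dissertation's derivation.

```python
# Hedged sketch: estimate an "ArgMax"-style selection distribution by Monte
# Carlo, i.e. the probability that each candidate channel offers the longest
# primary-user idle time before the next arrival.
import numpy as np

def argmax_distribution(idle_rates, n_trials=100_000, seed=0):
    """idle_rates[i]: rate of primary-user return on channel i (1 / mean idle).
    Returns P(channel i has the longest idle time) for each i."""
    rng = np.random.default_rng(seed)
    # Draw an idle duration for every channel in every trial
    samples = rng.exponential(1.0 / np.asarray(idle_rates),
                              size=(n_trials, len(idle_rates)))
    winners = np.argmax(samples, axis=1)
    return np.bincount(winners, minlength=len(idle_rates)) / n_trials

if __name__ == "__main__":
    # Channels whose primary users return more slowly win more often
    print(argmax_distribution([0.2, 0.5, 1.0]))
```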
- Title
- Smartphone-based sensing systems for data-intensive applications
- Creator
- Moazzami, Mohammad-Mahdi
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
- "Supported by advanced sensing capabilities, increasing computational resources and the advances in Artificial Intelligence, smartphones have become our virtual companions in our daily life. An average modern smartphone is capable of handling a wide range of tasks including navigation, advanced image processing, speech processing, cross-app data processing, etc. The key facet common to all of these applications is data-intensive computation. In this dissertation we have taken steps towards realizing the vision of making the smartphone truly a platform for data-intensive computations by proposing frameworks, applications and algorithmic solutions. We followed a data-driven approach to system design. To this end, several challenges must be addressed before smartphones can be used as a system platform for data-intensive applications. The major challenges addressed in this dissertation include high power consumption, the high computational cost of advanced machine learning algorithms, lack of real-time functionality, lack of embedded programming support, heterogeneity in apps and communication interfaces, and lack of customized data processing libraries. The contribution of this dissertation can be summarized as follows. We present the design, implementation and evaluation of the ORBIT framework, which represents the first system that combines the design requirements of a machine learning system and a sensing system at the same time. We ported, for the first time, off-the-shelf machine learning algorithms for real-time sensor data processing to smartphone devices. We highlighted how machine learning on smartphones comes with severe costs that need to be mitigated in order to make smartphones capable of real-time data-intensive processing. From the application perspective we present SPOT. SPOT aims to address some of the challenges discovered in mobile-based smart-home systems. These challenges prevent us from achieving the promises of smart homes due to heterogeneity in different aspects of smart devices and the underlying systems. We face the following major heterogeneities in building smart homes: (i) diverse appliance control apps, (ii) communication interfaces, and (iii) programming abstractions. SPOT makes the heterogeneous characteristics of smart appliances transparent, and by that it minimizes the burden on home automation application developers and the effort of users who would otherwise have to deal with appliance-specific apps and control interfaces. From the algorithmic perspective we introduce two systems in the smartphone-based deep learning area: Deep-Crowd-Label and Deep-Partition. Deep neural models are both computationally and memory intensive, making them difficult to deploy in mobile applications with limited hardware resources. On the other hand, they are the most advanced machine learning algorithms suitable for real-time sensing applications used in the wild. Deep-Partition is an optimization-based partitioning meta-algorithm featuring a tiered architecture for the smartphone and the back-end cloud. Deep-Partition provides profile-based model partitioning, allowing it to intelligently execute deep learning algorithms among the tiers to minimize smartphone power consumption by minimizing the deep models' feed-forward latency. Deep-Crowd-Label is prototyped for semantically labeling the user's location. It is a crowd-assisted algorithm that uses crowd-sourcing at both training and inference time.
It builds deep convolutional neural models using crowd-sensed images to detect the context (label) of indoor locations. It features domain adaptation and model extension via transfer learning to efficiently build deep models for image labeling. The work presented in this dissertation covers three major facets of data-driven and compute-intensive smartphone-based systems: platforms, applications and algorithms; and it helps to spur new areas of research and open up new directions in mobile computing research."--Pages ii-iii.
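The profile-based partitioning idea behind Deep-Partition can be sketched as a simple search over cut points that minimizes estimated on-phone energy; the cost model and the profiling numbers below are made-up placeholders, not the system's actual profiler output or algorithm.

```python
# Minimal sketch of a profile-based partition search (assumed cost model).
# Layers [0:cut) run on the phone, then the activation of the last on-phone
# layer (or the raw input when cut == 0) is uploaded; cut == n keeps the whole
# model on the phone with nothing to upload.
def best_partition(input_kb, layer_energy_mj, layer_output_kb, radio_mj_per_kb):
    n = len(layer_energy_mj)
    best_cut, best_energy = 0, float("inf")
    for cut in range(n + 1):
        compute = sum(layer_energy_mj[:cut])
        payload = input_kb if cut == 0 else layer_output_kb[cut - 1]
        upload = 0.0 if cut == n else payload * radio_mj_per_kb
        energy = compute + upload
        if energy < best_energy:
            best_cut, best_energy = cut, energy
    return best_cut, best_energy

if __name__ == "__main__":
    # Hypothetical per-layer phone energy (mJ) and activation sizes (kB)
    print(best_partition(600, [5, 8, 12, 20], [300, 120, 40, 2], radio_mj_per_kb=0.4))
```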
- Title
- Signal Processing Based Distortion Mitigation in Interferometric Radar Angular Velocity Estimation
- Creator
- Klinefelter, Eric
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- Interferometric angular velocity estimation is a relatively recent radar technique which uses a pair of widely spaced antenna elements and a correlation receiver to directly measure the angular velocity of a target. Traditional radar systems measure range, radial velocity (Doppler), and angle, while angular velocity is typically derived as the time-rate change of the angle measurements. The noise associated with the derived angular velocity estimate is statistically correlated with the angle measurements, and thus provides no additional information to traditional state space trackers. Interferometric angular velocity estimation, on the other hand, provides an independent measurement, thus forming a basis in R^2 for both position and velocity. While promising results have been presented for single-target interferometric angular velocity estimation, there is a known issue which arises when multiple targets are present. The ideal interferometric response with multiple targets would contain only the mixing product between like targets across the antenna responses; instead, the mixing product between all targets is generated, resulting in unwanted 'cross-terms' or intermodulation distortion. To date, various hardware-based methods have been presented, which are effective, though they tend to require an increased number of antenna elements, a larger physical system baseline, or signals with wide bandwidths. Presented here are novel methods for signal processing based mitigation of distortion in interferometric angular velocity estimation, which can be performed with only a single antenna pair and traditional continuous-wave or frequency-modulated continuous-wave signals. In this work, two classes of distortion mitigation methods are described: model-based and response decomposition. Model-based methods use a learned or analytic model with traditional non-linear optimization techniques to arrive at angular velocity estimates based on the complete interferometric signal response. Response decomposition methods, on the other hand, aim to decompose the individual antenna responses into separate responses pertaining to each target, then associate like targets between antenna responses. By performing the correlation in this manner, the cross-terms, which typically corrupt the interferometric response, are mitigated. It was found that due to the quadratic scaling of distortion terms, model-based methods become exceedingly difficult as the number of targets grows large. Thus, the method of response decomposition is selected and results on measured radar signals are presented. For this, a custom single-board millimeter-wave interferometric radar was developed, and angular velocity measurements were performed in an enclosed environment consisting of two robotic targets. A set of experiments was designed to highlight easy, medium, and difficult cases for the response decomposition algorithm, and results are presented herein.
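A small numerical illustration of the cross-term problem described above: correlating two antennas' multi-target responses produces mixing products between unlike targets in addition to the desired like-target terms. The frequencies, phases, and amplitudes are arbitrary and not taken from the dissertation.

```python
# Toy illustration of interferometric cross-terms with two targets.
import numpy as np

fs, T = 10_000, 0.5
t = np.arange(0, T, 1 / fs)

def antenna(freqs_hz, offsets_rad):
    # Sum of complex exponentials, one per target
    return sum(np.exp(1j * (2 * np.pi * f * t + p))
               for f, p in zip(freqs_hz, offsets_rad))

# Two targets; the antenna-to-antenna phase offsets encode angular motion
s1 = antenna([200.0, 340.0], [0.0, 0.0])
s2 = antenna([200.0, 340.0], [0.4, -0.9])

interf = s1 * np.conj(s2)                      # correlation receiver output
spec = np.abs(np.fft.fft(interf)) / len(t)
freqs = np.fft.fftfreq(len(t), 1 / fs)
peaks = freqs[np.argsort(spec)[-3:]]
# 0 Hz carries the like-target terms; +/-140 Hz are the unwanted cross-terms
print(sorted(np.round(peaks, 1)))
```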
- Title
- Scalable pulsed mode computation architecture using integrate and fire structure based on margin propagation
- Creator
- Hindo, Thamira
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
- Neuromorphic computing architectures mimic the brain to implement efficient computations for sensory applications in a different way from the traditional Von Neumann architecture. The goal of neuromorphic computing systems is to implement sensory devices and systems that operate as efficiently as their biological equivalents. Neuromorphic computing consists of several potential components, including parallel processing instead of synchronous processing, hybrid (pulse) computation instead of digital computation, neuron models as the basic core of the processing instead of arithmetic logic units, and analog VLSI design instead of digital VLSI design. In this work a new neuromorphic computing architecture is proposed and investigated for the implementation of algorithms based on the pulsed mode with a neuron-based circuit. The goal of the proposed architecture is to implement approximate non-linear functions that are important components of signal processing algorithms. Some of the most important signal processing algorithms are those that mimic biological systems such as hearing, sight and touch. The designed architecture is pulse mode and it maps the functions into an algorithm called margin propagation. The designed structure is a special network of integrate-and-fire neuron-based circuits that implement the margin propagation algorithm using integration and threshold operations embedded in the transfer function of the neuron model. The integrate-and-fire neuron units in the network are connected together through excitatory and inhibitory paths to impose constraints on the network firing rate. The advantages of the pulse-based, integrate-and-fire margin propagation (IFMP) algorithmic unit are to implement complex non-linear and dynamic programming functions in a scalable way; to implement functions using cascaded design in parallel or serial architecture; to implement the modules in low-power and small-size analog VLSI circuits; and to achieve a wide dynamic range, since the input parameters of the IFMP module are mapped in the logarithmic domain. The newly proposed IFMP algorithmic unit is investigated on both a theoretical basis and an experimental performance basis. The IFMP algorithmic unit is implemented with a low-power analog circuit. The circuit is simulated using computer-aided design tools and is fabricated in a 0.5 micron CMOS process. The hardware performance of the fabricated IFMP algorithmic architecture is also measured. The application of the IFMP algorithmic architecture is investigated for three signal processing algorithms: sequence recognition, trace recognition using a hidden Markov model, and binary classification using a support vector machine. Additionally, the IFMP architecture is investigated for the application of the winner-take-all algorithm, which is important for hearing, sight and touch sensor systems.
- Title
- Safe Control Design for Uncertain Systems
- Creator
- Marvi, Zahra
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- This dissertation investigates the problem of safe control design for systems under model and environmental uncertainty. Reinforcement learning (RL) provides an interactive learning framework in which the optimal controller is sequentially derived based on instantaneous reward. Although powerful, safety considerations are a barrier to the wide deployment of RL algorithms in practice. To overcome this problem, we propose an iterative safe off-policy RL algorithm. The cost function that encodes the designer's objectives is augmented with a control barrier function (CBF) to ensure safety and optimality. The proposed formulation provides look-ahead and proactive safety planning, in which safety is planned and optimized along with the performance to minimize intervention with the optimal controller. Extensive safety and stability analysis is provided, and the proposed method is implemented using the off-policy algorithm without requiring complete knowledge of the system dynamics. This line of research is then further extended to provide a safety and stability guarantee even during the data collection and exploration phases, in which random noisy inputs are applied to the system. However, satisfying the safety of actions when little is known about the system dynamics is a daunting challenge. We present a novel RL scheme that ensures the safety and stability of linear systems during the exploration and exploitation phases. This is obtained through concurrent model learning and control, in which an efficient learning scheme is employed to prescribe the learning behavior. This characteristic is then employed to apply only safe and stabilizing controllers to the system. First, the prescribed errors are employed in a novel adaptive robustified control barrier function (AR-CBF) which guarantees that the states of the system remain in the safe set even when the learning is incomplete. Therefore, the noisy input in the exploratory data collection phase and the optimal controller in the exploitation phase are minimally altered such that the AR-CBF criterion is satisfied and, therefore, safety is guaranteed in both phases. It is shown that under the proposed prescribed RL framework, the model learning error is a vanishing perturbation to the original system. Therefore, a stability guarantee is also provided even during exploration, when noisy random inputs are applied to the system. A learning-enabled barrier-certified safe controller for systems that operate in a shared and uncertain environment is then presented. A safety-aware loss function is defined and minimized to learn the uncertain and unknown behavior of external agents that affect the safety of the system. The loss function is defined based on the safe set error, instead of the system model error, and is minimized for both current samples and past samples stored in the memory to assure a fast and generalizable learning algorithm for approximating the safe set. The proposed model learning and CBF are then integrated to form a learning-enabled zeroing CBF (L-ZCBF), which employs the approximated trajectory information of the external agents provided by the learned model but shrinks the safety boundary in case of an imminent safety violation using instantaneous sensory observations. It is shown that the proposed L-ZCBF assures the safety guarantees during learning and even in the face of inaccurate or simplified approximation of external agents, which is crucial in highly interactive environments.
Finally, the cooperative capability of agents in a multi-agent environment is investigated for the sake of safety guarantees. CBFs and information-gap theory are integrated to obtain robust safe controllers for multi-agent systems with different levels of measurement accuracy. A cooperative framework for the construction of CBFs for every two agents is employed to maximize the horizon of uncertainty under which the safety of the overall system is satisfied. The information-gap theory is leveraged to determine the contribution and share of each agent in the construction of CBFs. This results in the highest possible robustness against measurement uncertainty. By employing the proposed approach in constructing CBFs, a higher horizon of uncertainty can be safely tolerated, and even the failure of one agent in gathering accurate local data can be compensated for by cooperation between agents. The effectiveness of the proposed methods is extensively examined in simulation results.
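The minimal-intervention idea behind a CBF-based safety filter can be sketched as a projection of the nominal control onto the safe half-space defined by a single affine CBF constraint; this generic sketch is not the dissertation's AR-CBF or L-ZCBF formulation, and the example system is hypothetical.

```python
# Generic sketch of a control barrier function (CBF) "safety filter".
# For one affine constraint  Lf_h + Lg_h @ u + alpha * h >= 0, the QP
#   min ||u - u_nom||^2  subject to the constraint
# has the closed-form half-space projection used below.
import numpy as np

def cbf_filter(u_nom, h, Lf_h, Lg_h, alpha=1.0):
    u_nom, Lg_h = np.asarray(u_nom, float), np.asarray(Lg_h, float)
    slack = Lf_h + Lg_h @ u_nom + alpha * h      # constraint value at u_nom
    if slack >= 0.0 or not np.any(Lg_h):
        return u_nom                             # nominal control already safe
    return u_nom - slack * Lg_h / (Lg_h @ Lg_h)  # minimal correction

if __name__ == "__main__":
    # Single integrator x' = u with safe set h(x) = x >= 0, so Lf_h = 0, Lg_h = [1]
    x, u_nominal = 0.2, np.array([-2.0])         # nominal control pushes toward the boundary
    print(cbf_filter(u_nominal, h=x, Lf_h=0.0, Lg_h=np.array([1.0])))   # -> [-0.2]
```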
- Title
- Robust signal processing methods for miniature acoustic sensing, separation, and recognition
- Creator
- Fazel, Amin
- Date
- 2012
- Collection
- Electronic Theses & Dissertations
- Description
- One of several emerging areas where micro-scale integration promises significant breakthroughs is the field of acoustic sensing. However, separation, localization, and recognition of acoustic sources using micro-scale microphone arrays pose a significant challenge due to fundamental limitations imposed by the physics of sound propagation. The smaller the distance between the recording elements, the more difficult it is to measure localization and separation cues, and hence the more difficult it is to recognize the acoustic sources of interest. The objective of this research is to investigate signal processing and machine learning techniques that can be used for noise-robust acoustic target recognition using miniature microphone arrays. The first part of this research focuses on designing "smart" analog-to-digital conversion (ADC) algorithms that can enhance acoustic cues in sub-wavelength microphone arrays. Many source separation algorithms fail to deliver robust performance when applied to signals recorded using high-density sensor arrays where the distance between sensor elements is much less than the wavelength of the signals. This can be attributed to the limited dynamic range (determined by analog-to-digital conversion) of the sensor, which is insufficient to overcome the artifacts due to large cross-channel redundancy, non-homogeneous mixing and the high dimensionality of the signal space. We propose a novel framework that overcomes these limitations by integrating statistical learning directly with the signal measurement (analog-to-digital) process, which enables high-fidelity separation of linear instantaneous mixtures. At the core of the proposed ADC approach is a min-max optimization of a regularized objective function that yields a sequence of quantized parameters which asymptotically tracks the statistics of the input signal. Experiments with synthetic and real recordings demonstrate consistent performance improvements when the proposed approach is used as the analog-to-digital front-end to conventional source separation algorithms. The second part of this research focuses on investigating a novel speech feature extraction algorithm that can recognize auditory targets (keywords and speakers) using noisy recordings. The features, known as Sparse Auditory Reproducing Kernel (SPARK) coefficients, are extracted under the hypothesis that the noise-robust information in the speech signal is embedded in a subspace spanned by sparse, regularized, over-complete, non-linear, and phase-shifted gammatone basis functions. The feature extraction algorithm involves computing kernel functions between the speech data and a pre-computed set of phase-shifted gammatone functions, followed by a simple pooling technique (a "MAX" operation). In this work, we present experimental results for a hidden Markov model (HMM) based speech recognition system whose performance has been evaluated on the standard AURORA 2 dataset. The results demonstrate that the SPARK features deliver significant and consistent improvements in recognition accuracy over the standard ETSI STQ WI007 DSR benchmark features. We have also verified the noise-robustness of the SPARK features for the task of speaker verification. Experimental results based on the NIST SRE 2003 dataset show significant improvements when compared to a standard Mel-frequency cepstral coefficients (MFCC) based benchmark.
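A toy sketch of the SPARK-style feature computation described above: correlate a speech frame with a bank of phase-shifted gammatone functions and keep the maximum response per centre frequency. The fixed bandwidth, plain dot-product kernel, and parameter values are simplifying assumptions, not the thesis's regularized kernel formulation.

```python
# Illustrative-only sketch: gammatone correlations followed by MAX pooling.
import numpy as np

def gammatone(fs, f0, phase, dur=0.025, order=4, bw=1.019 * 24.7):
    t = np.arange(0, dur, 1 / fs)
    g = t ** (order - 1) * np.exp(-2 * np.pi * bw * t) * np.cos(2 * np.pi * f0 * t + phase)
    return g / (np.linalg.norm(g) + 1e-12)

def spark_like_features(frame, fs, centre_freqs, n_phases=8):
    phases = np.linspace(0, 2 * np.pi, n_phases, endpoint=False)
    feats = []
    for f0 in centre_freqs:
        responses = [np.dot(frame, gammatone(fs, f0, p)[: len(frame)]) for p in phases]
        feats.append(max(responses))            # "MAX" pooling over phase shifts
    return np.array(feats)

if __name__ == "__main__":
    fs = 8000
    frame = np.sin(2 * np.pi * 500 * np.arange(200) / fs)
    print(spark_like_features(frame, fs, centre_freqs=[250, 500, 1000]))
```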
- Title
- Reducing the number of ultrasound array elements with the matrix pencil method
- Creator
- Sales, Kirk L.
- Date
- 2012
- Collection
- Electronic Theses & Dissertations
- Description
- Phased arrays are applied in diverse areas, including biomedical imaging and therapy, non-destructive testing, radar and sonar. In this thesis, the matrix pencil method is employed to reduce the number of elements in a linear ultrasound phased array. The non-iterative, linear method begins with a specified pressure beam pattern, reduces the dimensionality of the problem, then calculates the element locations and apodization of a reduced array. Computer simulations demonstrate a close comparison between the initial array beam pattern and the reduced array beam pattern for four different linear arrays. The number of elements in a broadside-steered linear array is shown to decrease by approximately 50%, with the reduced array beam pattern closely approximating the initial array beam pattern in the far field. While the method returns a slightly tapered spacing between elements for the arrays considered, replacing the tapered spacing with a suitably selected uniform spacing produces very little change in the main beam and low-angle side lobes.
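The core matrix pencil estimation step can be sketched as follows for noiseless samples of a sum of complex exponentials, where the poles encode element locations and the amplitudes the apodization; the thesis's full array-reduction procedure involves more than this basic Hua-and-Sarkar-style step.

```python
# Sketch of the basic matrix pencil step on samples y[k] = sum_m a_m * z_m**k.
import numpy as np

def matrix_pencil(y, M, L=None):
    """Return (poles, amplitudes) from noiseless samples of M exponentials."""
    y = np.asarray(y, dtype=complex)
    N = len(y)
    L = L or N // 2
    # Hankel data matrix, then the shifted sub-matrices Y1, Y2
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    eigvals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    poles = eigvals[np.argsort(-np.abs(eigvals))[:M]]     # M signal poles
    V = np.vander(poles, N, increasing=True).T            # (N, M) Vandermonde
    amps, *_ = np.linalg.lstsq(V, y, rcond=None)
    return poles, amps

if __name__ == "__main__":
    k = np.arange(40)
    true_z = np.exp(1j * np.array([0.3, 1.1, -0.7]))
    y = (np.array([2.0, 1.0, 0.5]) * true_z[None, :] ** k[:, None]).sum(axis=1)
    z_hat, a_hat = matrix_pencil(y, M=3)
    print(np.sort(np.angle(z_hat)), np.abs(a_hat))
```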
- Title
- Privacy and integrity preserving computation in distributed systems
- Creator
- Chen, Fei
- Date
- 2011
- Collection
- Electronic Theses & Dissertations
- Description
- Preserving the privacy and integrity of private data has become a core requirement for many distributed systems across different parties. In these systems, one party may try to compute or aggregate useful information from the private data of other parties. However, this party may not be fully trusted by the other parties. Therefore, it is important to design security protocols for preserving such private data. Furthermore, one party may want to query the useful information computed from such private data. However, query results may be modified by a malicious party. Thus, it is important to design query protocols such that query result integrity can be verified. In this dissertation, we study four important privacy and integrity preserving problems for different distributed systems. For two-tiered sensor networks, where storage nodes serve as an intermediate tier between sensors and a sink for storing data and processing queries, we propose SafeQ, a protocol that prevents compromised storage nodes from gaining information from both sensor-collected data and sink-issued queries, while still allowing storage nodes to process queries over encrypted data and the sink to detect compromised storage nodes when they misbehave. For cloud computing, where a cloud provider hosts the data of an organization and replies with query results to the customers of the organization, we propose novel privacy and integrity preserving schemes for multi-dimensional range queries such that the cloud provider can process encoded queries over encoded data without knowing the actual values, and customers can verify the integrity of query results with high probability. For distributed firewall policies, we propose the first privacy-preserving protocol for cross-domain firewall policy optimization. For any two adjacent firewalls belonging to two different administrative domains, our protocol can identify in each firewall the rules that can be removed because of the other firewall. For network reachability, one of the key factors for capturing end-to-end network behavior and detecting violations of security policies, we propose the first cross-domain privacy-preserving protocol for quantifying network reachability.
- Title
- Nanoengineered tissue scaffolds for regenerative medicine in neural cell systems
- Creator
- Tiryaki, Volkan Mujdat
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
- Central nervous system (CNS) injuries present one of the most challenging problems. Regeneration in the mammalian CNS is often limited because the injured axons cannot regenerate beyond the lesion. Implantation of a scaffolding material is one possible approach to this problem. Recent implantations by our collaborative research group using electrospun polyamide nanofibrillar scaffolds have shown promising results in vitro and in vivo. The physical properties of tissue scaffolds were neglected for many years, and it has only recently been recognized that significant aspects include nanophysical properties such as nanopatterning, surface roughness, local elasticity, surface polarity, surface charge, and growth factor presentation as well as the better-known biochemical cues. The properties of surface polarity, surface roughness, local elasticity and local work of adhesion were investigated in this thesis. The physical and nanophysical properties of the cell culture environments were evaluated using contact angle and atomic force microscopy (AFM) measurements. A new capability, scanning probe recognition microscopy (SPRM), was also used to characterize the surface roughness of nanofibrillar scaffolds. The corresponding morphological and protein expression responses of rat model cerebral cortical astrocytes to the polyamide nanofibrillar scaffolds versus comparative culture surfaces were investigated by AFM and immunocytochemistry. Astrocyte morphological responses were imaged using AFM and phalloidin staining for F-actin. Activation of the corresponding Rho GTPase regulators was investigated using immunolabeling with Cdc42, Rac1, and RhoA. The results supported the hypothesis that the extracellular environment can trigger preferential activation of members of the Rho GTPase family, with demonstrable morphological consequences for cerebral cortical astrocytes. Astrocytes have a special role in the formation of the glial scar in response to traumatic injury. The glial scar biomechanically and biochemically blocks axon regeneration, resulting in paralysis. Astrocytes involved in glial scar formation become reactive, with development of specific morphologies and inhibitory protein expression. Dibutyryl cyclic adenosine monophosphate (dBcAMP) was used to induce astrocyte reactivity. The directive importance of nanophysical properties for the morphological and protein expression responses of dBcAMP-stimulated cerebral cortical astrocytes was investigated by immunocytochemistry, Western blotting, and AFM. Nanofibrillar scaffold properties were shown to reduce immunoreactivity responses, while PLL Aclar properties were shown to induce responses reminiscent of glial scar formation. Comparison of the responses for dBcAMP-treated reactive-like and untreated astrocytes indicated that the most influential directive nanophysical cues may differ in wound-healing versus untreated situations. Finally, a new cell shape index (CSI) analysis system was developed using volumetric AFM height images of cells cultured on different substrates. The new CSI revealed quantitative cell spreading information not included in the conventional CSI. The system includes a floating feature selection algorithm for cell segmentation that uses a total of 28 different textural features derived from two models: the gray-level co-occurrence matrix and local statistics texture features.
The quantitative morphometry of untreated and dBcAMP-treated cerebral cortical astrocytes was investigated using the new and conventional CSI, and the results showed that quantitative astrocyte spreading and stellation behavior was induced by variations in nanophysical properties.
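For reference, the conventional cell shape index mentioned above is commonly computed as CSI = 4*pi*Area/Perimeter^2 from a segmented cell boundary (1 for a circle, smaller for stellate shapes); the sketch below shows only that conventional measure, not the thesis's volumetric AFM-based CSI.

```python
# Minimal sketch of the conventional cell shape index from a boundary contour.
import numpy as np

def cell_shape_index(contour_xy):
    """contour_xy: (n, 2) ordered boundary points of a segmented cell."""
    x, y = np.asarray(contour_xy, float).T
    # Shoelace formula for area; segment lengths for the closed perimeter
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perimeter = np.sum(np.hypot(np.diff(np.r_[x, x[0]]), np.diff(np.r_[y, y[0]])))
    return 4 * np.pi * area / perimeter ** 2

if __name__ == "__main__":
    theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    circle = np.c_[np.cos(theta), np.sin(theta)]
    print(round(cell_shape_index(circle), 3))    # ~1.0 for a circle
```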
- Title
- MEASURING AND MODELING THE EFFECTS OF SEA LEVEL RISE ON NEAR-COASTAL RIVERINE REGIONS : A GEOSPATIAL COMPARISON OF THE SHATT AL-ARAB RIVER IN SOUTHERN IRAQ WITH THE MISSISSIPPI RIVER DELTA IN SOUTHERN LOUISIANA, USA.
- Creator
- Kadhim, Ameen Awad
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
- There is a growing debate among scientists on how sea level rise (SLR) will impact coastal environments, particularly in countries where economic activities are sustained along these coasts. An important factor in this debate is how best to characterize coastal environmental impacts over time. This study investigates the measurement and modeling of SLR and its effects on near-coastal riverine regions. The study uses a variety of data sources, including satellite imagery from 1975 to 2017, digital elevation data and previous studies. This research focuses on two of these important regions: southern Iraq along the Shatt Al-Arab River (SAR) and the southern United States in Louisiana along the Mississippi River Delta (MRD). These sites are important both for their extensive low-lying land and for their significant coastal economic activities. The dissertation consists of six chapters. Chapter one introduces the topic. Chapter two compares and contrasts both regions and evaluates escalating SLR risk. Chapter three develops a coupled human and natural systems (CHANS) perspective for the Shatt Al-Arab River region (SARR) to reveal multiple sources of environmental degradation in this region. Half a century ago SARR was an important and productive region in Iraq that produced fruits like dates, crops, vegetables, and fish. By 1975 the environment of this region began to deteriorate, and since then, it is well documented that SARR has suffered under human and natural problems. In this chapter, I use the CHANS perspective to identify the problems, and which ones (human or natural systems) are especially responsible for environmental degradation in SARR. I use several measures of ecological, economic, and social systems to outline the problems identified through the CHANS framework. SARR has experienced extreme weather changes from 1975 to 2017, resulting in lower precipitation (-17 mm) and humidity (-5.6%), higher temperatures (+1.6 C), and sea level rise, which are affecting the salinity of groundwater and of Shatt Al-Arab river water. At the same time, human systems in SARR experienced many problems including eight years of war between Iraq and Iran, the first Gulf War, UN Security Council sanctions imposed against Iraq, and the second Gulf War. I modeled and analyzed the region's land cover between 1975 and 2017 to understand how the environment has been affected, and found that, alongside these other factors, climate change is responsible for what happened in this region. Chapter four constructs and applies an error propagation model to elevation data in the Mississippi River Delta region (MRDR). This modeling both reduces and accounts for the effects of digital elevation model (DEM) error on a bathtub inundation model used to predict SLR risk in the region. Digital elevation data are essential to estimate coastal vulnerability to flooding due to sea level rise. Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global data are considered the best freely available global digital elevation data. However, inundation estimates from SRTM are subject to uncertainty due to inaccuracies in the elevation data. Small systematic errors in low, flat areas can generate large errors in inundation models, and SRTM is subject to positive bias in the presence of vegetation canopy, such as along channels and within marshes. In this study, I conduct an error assessment and develop statistical error modeling for SRTM to improve the quality of elevation data in these at-risk regions.
Chapter five applies the MRDR-based model from chapter four to enhance the SRTM 1 Arc-Second Global DEM data in SARR. As such, it is the first study to account for data uncertainty in the evaluation of SLR risk in this sensitive region. This study transfers an error propagation model from MRDR to the Shatt Al-Arab river region to understand the impact of DEM error on an inundation model in this sensitive region. The error propagation model involves three stages. First, a multiple regression model, parameterized from MRDR, is used to generate an expected DEM error surface for SARR. This surface is subtracted from the SRTM DEM for SARR to adjust it. Second, residuals from this model are simulated for SARR: these are mean-zero and spatially autocorrelated, with a Gaussian covariance model matching that observed in MRDR, generated by convolution filtering of random noise. More than 50 realizations of error were simulated to make sure a stable result was realized. These realizations were subtracted from the adjusted SRTM to produce DEM realizations capturing potential variation. Third, the DEM realizations are each used in bathtub modeling to estimate the flooded area in the region with 1 m of sea level rise. The distribution of flooding estimates shows the impact of DEM error on uncertainty in inundation likelihood, and on the magnitude of total flooding. Using the adjusted DEM realizations, 47 ± 2 percent of the region is predicted to flood, while using the raw SRTM DEM only 28% of the region is predicted to flood.
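The three-stage error propagation described in the last paragraph can be sketched in a few lines: subtract a modeled bias, add simulated spatially correlated residuals (Gaussian-filtered white noise), and apply a 1 m bathtub threshold over many realizations. All parameter values below are placeholders, not the dissertation's fitted regression or covariance model.

```python
# Hedged sketch of bias correction + correlated-error simulation + bathtub model.
import numpy as np
from scipy.ndimage import gaussian_filter

def flooded_fraction(dem, bias, noise_sigma_m, corr_len_px, slr_m=1.0, n_real=50, seed=0):
    rng = np.random.default_rng(seed)
    adjusted = dem - bias                                # stage 1: bias-corrected DEM
    fractions = []
    for _ in range(n_real):                              # stage 2: correlated error realizations
        white = rng.normal(0.0, 1.0, dem.shape)
        err = gaussian_filter(white, corr_len_px)
        err *= noise_sigma_m / err.std()
        realization = adjusted - err
        fractions.append(np.mean(realization <= slr_m))  # stage 3: bathtub inundation
    return np.mean(fractions), np.std(fractions)

if __name__ == "__main__":
    demo_dem = np.random.default_rng(1).uniform(0.0, 3.0, size=(200, 200))
    print(flooded_fraction(demo_dem, bias=0.5, noise_sigma_m=0.4, corr_len_px=5))
```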
- Title
- LIDAR AND CAMERA CALIBRATION USING A MOUNTED SPHERE
- Creator
- Li, Jiajia
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
-
Extrinsic calibration between lidar and camera sensors is needed for multi-modal sensor data fusion. However, obtaining precise extrinsic calibration can be tedious, computationally expensive, or involve elaborate apparatus. This thesis proposes a simple, fast, and robust method performing extrinsic calibration between a camera and lidar. The only required calibration target is a hand-held colored sphere mounted on a whiteboard. The convolutional neural networks are developed to automatically...
Show moreExtrinsic calibration between lidar and camera sensors is needed for multi-modal sensor data fusion. However, obtaining precise extrinsic calibration can be tedious, computationally expensive, or involve elaborate apparatus. This thesis proposes a simple, fast, and robust method performing extrinsic calibration between a camera and lidar. The only required calibration target is a hand-held colored sphere mounted on a whiteboard. The convolutional neural networks are developed to automatically localize the sphere relative to the camera and the lidar. Then using the localization covariance models, the relative pose between the camera and lidar is derived. To evaluate the accuracy of our method, we record image and lidar data of a sphere at a set of known grid positions by using two rails mounted on a wall. The accurate calibration results are demonstrated by projecting the grid centers into the camera image plane and finding the error between these points and the hand-labeled sphere centers.
- Title
- Kernel methods for biosensing applications
- Creator
- Khan, Hassan Aqeel
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
This thesis examines the design of noise-robust information retrieval techniques based on kernel methods. Algorithms are presented for two biosensing applications: (1) high-throughput protein arrays and (2) non-invasive respiratory signal estimation. Our primary objective in protein array design is to maximize throughput by enabling detection of an extremely large number of protein targets while using a minimal number of receptor spots. This is accomplished by viewing the protein array as a communication channel and evaluating its information transmission capacity as a function of its receptor probes. In this framework, the channel capacity can be used as a tool to optimize probe design; the optimal probes are the ones that maximize capacity. The information capacity is first evaluated for a small-scale protein array with only a few protein targets. We believe this is the first effort to evaluate the capacity of a protein array channel. For this purpose, models of the proteomic channel's noise characteristics and receptor non-idealities, based on experimental prototypes, are constructed. Kernel methods are employed to extend the capacity evaluation to larger protein arrays that can potentially have thousands of distinct protein targets. A specially designed kernel, which we call the Proteomic Kernel, is also proposed. This kernel incorporates knowledge about the biophysics of target and receptor interactions into the cost function employed for evaluation of channel capacity. For respiratory estimation, this thesis investigates estimation of breathing rate and lung volume using multiple non-invasive sensors under motion artifact and high-noise conditions. A spirometer signal is used as the gold standard for evaluation of errors. A novel algorithm called segregated envelope and carrier (SEC) estimation is proposed. This algorithm approximates the spirometer signal by an amplitude-modulated signal and segregates the estimation of the frequency and amplitude information. Results demonstrate that this approach enables effective estimation of both breathing rate and lung volume. An adaptive algorithm based on a combination of Gini kernel machines and wavelet filtering is also proposed. This algorithm, titled the wavelet-adaptive Gini (or WAGini) algorithm, employs a novel wavelet transform-based feature extraction frontend to classify the subject's underlying respiratory state. This information is then used to select the parameters of the adaptive kernel machine based on the subject's respiratory state. Results demonstrate significant improvement in breathing rate estimation when compared to traditional respiratory estimation techniques.
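As a rough illustration of the amplitude-modulation view behind SEC estimation, and not the thesis's algorithm itself, the sketch below demodulates a synthetic spirometer-like signal with a Hilbert transform, separating an envelope (amplitude information) from an instantaneous frequency (breathing-rate information). The sampling rate, breathing rate, and envelope drift are assumed values.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic "spirometer-like" signal: a slowly varying amplitude (related to
# lung volume) modulating a sinusoid whose frequency is the breathing rate.
fs = 50.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)                # one minute of data
breath_rate_hz = 0.25                       # 15 breaths per minute, assumed
envelope_true = 1.0 + 0.3 * np.sin(2 * np.pi * 0.02 * t)   # slow amplitude drift
signal = envelope_true * np.cos(2 * np.pi * breath_rate_hz * t)

# Segregate envelope and carrier via the analytic signal.
analytic = hilbert(signal)
envelope_est = np.abs(analytic)                         # amplitude information
phase = np.unwrap(np.angle(analytic))
inst_freq_hz = np.diff(phase) / (2 * np.pi) * fs        # frequency information

print(f"estimated breathing rate: {inst_freq_hz.mean() * 60:.1f} breaths/min")
print(f"envelope RMS error: {np.sqrt(np.mean((envelope_est - envelope_true) ** 2)):.3f}")
```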
- Title
- Implantable VLSI systems for compression and communication in wireless biosensor recording arrays
- Creator
- Kamboh, Awais Mehmood
- Date
- 2010
- Collection
- Electronic Theses & Dissertations
- Description
-
Successful use of microelectrode arrays to record neural activity in the cortex has opened new opportunities for scientists to decode the intricate functionality of the human brain and the behavior of the neurons that enable its complex operation. The resulting brain-machine interface devices play a critical role in enabling patients with neural disorders to achieve a better lifestyle. Such devices provide a direct interface to the brain and show great promise in many biomedical applications. This thesis explores some of the major obstacles impeding the advance of wireless neural implants and addresses them through the development of highly efficient algorithms and implantable hardware. An overwhelming amount of data is generated by the microelectrode arrays, resulting in a data bandwidth bottleneck. To overcome this problem, an implantable system has been devised that enables control over the amount of data that must be transmitted without compromising the information contained in the array of neural signals. Furthermore, the nature of the wireless communication channel across the skin tissue is not well characterized. In this thesis, solutions have been developed to maximize the data throughput and enable reliable yet low-power bidirectional communication between the implanted device and the external world. Finally, a unified energy-efficient, implantable CMOS integrated circuit was developed to address these two critical problems. The resulting integrated solution ensures seamless multi-modal operation and thus establishes a pathway to the design of next-generation neuroprosthetic devices. Although the motivation for this thesis comes from the field of neuroprosthetics, the solutions devised are pertinent to a wide range of implantable applications.
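One generic way to see how on-implant processing can relieve the data bandwidth bottleneck is to transmit only short snippets around detected spikes rather than the raw stream. The sketch below is a simple thresholding illustration on synthetic data with assumed sampling parameters; it is not the compression architecture developed in this thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 25_000                                  # samples/s per channel, assumed
raw = rng.normal(scale=10e-6, size=fs)       # 1 s of noise-like neural data (V)
spike_times = rng.choice(fs - 48, size=30, replace=False)
for s in spike_times:                        # inject simple negative spike shapes
    raw[s:s + 16] -= 60e-6 * np.hanning(16)

# Detect downward threshold crossings and keep only 48-sample snippets around them.
threshold = -4 * np.median(np.abs(raw)) / 0.6745      # robust noise estimate
crossings = np.flatnonzero((raw[1:] < threshold) & (raw[:-1] >= threshold))
snippets = [raw[max(c - 12, 0):c + 36] for c in crossings]

kept = sum(len(s) for s in snippets)
print(f"transmitted {kept} of {raw.size} samples "
      f"({kept / raw.size:.1%} of the raw bandwidth)")
```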
- Title
- Higher-order data reduction through clustering, subspace analysis and compression for applications in functional connectivity brain networks
- Creator
- Ozdemir, Alp
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
"With the recent advances in information technology, collection and storage of higher-order datasets such as multidimensional data across multiple modalities or variables have become much easier and cheaper than ever before. Tensors, also known as multiway arrays, provide natural representations for higher-order datasets and provide a way to analyze them by preserving the multilinear relations in these large datasets. These higher-order datasets usually contain large amount of redundant...
Show more"With the recent advances in information technology, collection and storage of higher-order datasets such as multidimensional data across multiple modalities or variables have become much easier and cheaper than ever before. Tensors, also known as multiway arrays, provide natural representations for higher-order datasets and provide a way to analyze them by preserving the multilinear relations in these large datasets. These higher-order datasets usually contain large amount of redundant information and summarizing them in a succinct manner is essential for better inference. However, existing data reduction approaches are limited to vector-type data and cannot be applied directly to tensors without vectorizing. Developing more advanced approaches to analyze tensors effectively without corrupting their intrinsic structure is an important challenge facing Big Data applications. This thesis addresses the issue of data reduction for tensors with a particular focus on providing a better understanding of dynamic functional connectivity networks (dFCNs) of the brain. Functional connectivity describes the relationship between spatially separated neuronal groups and analysis of dFCNs plays a key role for interpreting complex brain dynamics in different cognitive and emotional processes. Recently, graph theoretic methods have been used to characterize the brain functionality where bivariate relationships between neuronal populations are represented as graphs or networks. In this thesis, the changes in these networks across time and subjects will be studied through tensor representations. In Chapter 2, we address a multi-graph clustering problem which can be thought as a tensor partitioning problem. We introduce a hierarchical consensus spectral clustering approach to identify the community structure underlying the functional connectivity brain networks across subjects. New information-theoretic criteria are introduced for selecting the optimal community structure. Effectiveness of the proposed algorithms are evaluated through a set of simulations comparing with the existing methods as well as on FCNs across subjects. In Chapter 3, we address the online tensor data reduction problem through a subspace tracking perspective. We introduce a robust low-rank+sparse structure learning algorithm for tensors to separate the low-rank community structure of connectivity networks from sparse outliers. The proposed framework is used to both identify change points, where the low-rank community structure changes significantly, and summarize this community structure within each time interval. Finally, in Chapter 4, we introduce a new multi-scale tensor decomposition technique to efficiently encode nonlinearities due to rotation or translation in tensor type data. In particular, we develop a multi-scale higher-order singular value decomposition (MS-HoSVD) approach where a given tensor is first permuted and then partitioned into several sub-tensors each of which can be represented as a low-rank tensor increasing the efficiency of the representation. We derive a theoretical error bound for the proposed approach as well as provide analysis of memory cost and computational complexity. Performance of the proposed approach is evaluated on both data reduction and classification of various higher-order datasets."--Pages ii-iii.
- Title
- High-dimensional learning from random projections of data through regularization and diversification
- Creator
- Aghagolzadeh, Mohammad
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
Random signal measurement, in the form of random projections of signal vectors, extends the traditional point-wise and periodic schemes for signal sampling. In particular, the well-known problem of sensing sparse signals from linear measurements, also known as Compressed Sensing (CS), has promoted the utility of random projections. Meanwhile, many signal processing and learning problems that involve parametric estimation do not include sparsity constraints in their original forms. With the increasing popularity of random measurements, it is crucial to study the generic estimation performance under the random measurement model. In this thesis, we consider two specific learning problems (named below) and present two generic approaches for improving estimation accuracy: 1) adding relevant constraints to the parameter vectors, and 2) diversifying the random measurements to achieve fast-decaying tail bounds for the empirical risk function. The first problem we consider is Dictionary Learning (DL). Dictionaries are extensions of vector bases that are specifically tailored for sparse signal representation. DL has become increasingly popular for sparse modeling of natural images as well as sound and biological signals, to name a few. Empirical studies have shown that typical DL algorithms for imaging applications are relatively robust with respect to missing pixels in the training data. However, DL from random projections of data corresponds to an ill-posed problem and is not well studied. Existing efforts are limited to learning structured dictionaries or dictionaries for structured sparse representations to make the problem tractable. The main motivation for considering this problem is to create an adaptive framework for CS of signals that are not sparse in the signal domain. In fact, this problem has been referred to as 'blind CS', since the optimal basis is subject to estimation during CS recovery. Our initial approach, similar to some existing efforts, involves adding structural constraints on the dictionary to incorporate sparse and autoregressive models. More importantly, our results and analysis reveal that DL from random projections of data, in its unconstrained form, can still be accurate provided the measurements satisfy the diversity constraints defined later. The second problem we consider is high-dimensional signal classification. Prior efforts have shown that projecting high-dimensional and redundant signal vectors onto random low-dimensional subspaces presents an efficient alternative to traditional feature extraction tools such as principal component analysis. Hence, aside from the CS application, random measurements present an efficient sampling method for learning classifiers, eliminating the need to record and process high-dimensional signals when most of the recorded data is discarded during feature extraction. We work with Support Vector Machine (SVM) classifiers that are learned in the high-dimensional ambient signal space using random projections of the training data. Our results indicate that classifier accuracy can be significantly improved by diversification of the random measurements.
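As a generic baseline for the second problem (not the diversified-measurement scheme analyzed in the thesis, which learns the classifier in the ambient signal space), the sketch below projects high-dimensional synthetic data onto a random low-dimensional subspace and trains a linear SVM on the projections using scikit-learn. The dataset sizes and the 100-dimensional projection are assumed values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import LinearSVC

# High-dimensional, redundant signals: 5000 features, few of them informative.
X, y = make_classification(n_samples=2000, n_features=5000, n_informative=50,
                           n_redundant=200, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Random measurement: project onto a 100-dimensional random subspace,
# then learn a linear SVM on the projected training data.
proj = GaussianRandomProjection(n_components=100, random_state=0)
Z_tr = proj.fit_transform(X_tr)
Z_te = proj.transform(X_te)

clf = LinearSVC(C=1.0, max_iter=10000).fit(Z_tr, y_tr)
print(f"accuracy from 100 random projections: {clf.score(Z_te, y_te):.3f}")
```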
- Title
- Harnessing low-pass filter defects for improving wireless link performance : measurements and applications
- Creator
- Renani, Alireza Ameli
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
-
"The design trade-offs of transceiver hardware are crucial to the performance of wireless systems. The effect of such trade-offs on individual analog and digital components are vigorously studied, but their systemic impacts beyond component-level remain largely unexplored. In this dissertation, we present an in-depth study to characterize the surprisingly notable systemic impacts of low-pass filter design, which is a small yet indispensable component used for shaping spectrum and rejecting...
Show more"The design trade-offs of transceiver hardware are crucial to the performance of wireless systems. The effect of such trade-offs on individual analog and digital components are vigorously studied, but their systemic impacts beyond component-level remain largely unexplored. In this dissertation, we present an in-depth study to characterize the surprisingly notable systemic impacts of low-pass filter design, which is a small yet indispensable component used for shaping spectrum and rejecting interference. Using a bottom-up approach, we examine how signal-level distortions caused by the trade-offs of low-pass filter design propagate to the upper-layers of wireless communication, reshaping bit error patterns and degrading link performance of today's 802.11 systems. Moreover, we propose a novel unequal error protection algorithm that harnesses low-pass filter defects for improving wireless LAN throughput, particularly to be used in forward error correction, channel coding, and applications such as video streaming. Lastly, we conduct experiments to evaluate the unequal error protection algorithm in video streaming, and we present substantial enhancements of video quality in mobile environments."--Page ii.