Search results
(1 - 20 of 229)
- Title
- Toward zero delay video streaming
- Creator
- Al-Qassab, Hothaifa Tariq
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
-
Video streaming has been growing rapidly since the beginning of this century and is expected to continue growing. With the rapid growth of Internet traffic led by video traffic, busy-hour Internet traffic on both mobile and fixed connection segments will double before the end of this decade. Meanwhile, transmission delay is a well-known problem in video streaming, and it has been addressed by many prior works that demonstrated the feasibility of reducing packet delays over the Internet by employing a variety of end-to-end techniques. This thesis consists of two parts that introduce new video streaming frameworks over the Internet and over connected-vehicle networks, respectively.

Our objective in the first part of this thesis is to improve video streaming over the Internet. The HTTP-based Adaptive Streaming (HAS) approach has emerged as the dominant framework for video streaming, mainly due to its simplicity, firewall friendliness, and ease of deployment. However, recent studies have shown that HAS solutions suffer from major shortcomings, including unfairness, significant bitrate oscillation under different conditions, and significant delay. On the other hand, Quality-of-Service (QoS) based mechanisms, most notably multi-priority queue mechanisms such as DiffServ, can provide an optimal video experience but at a major cost in complexity within the network. Our objective is to design an efficient, low-complexity, and low-delay video streaming framework. We call our proposed Internet streaming framework Erasable Packets within Internet Queues (EPIQ). Our proposed solution is based on a novel packetization of the video content that exploits the inherent multi-priority nature of video.

An important notion of our proposed framework is the Partially Erasable Packet (PEP), which has two key attributes: (1) each PEP packet carries multiple segments corresponding to multiple priority levels of the video content; and (2) high-priority segments are placed next to the packet header while low-priority segments are placed toward the tail of the PEP packet. To evaluate our framework's performance, we developed an analytical model for EPIQ that shows significant improvements when compared to conventional and multi-priority queue video transmission models. Our proposed solution includes a new Active Queue Management (AQM) scheme similar to the RED algorithm: under congestion, a best-effort AQM router can simply erase an arbitrary portion of a PEP packet starting from its tail, a process we denote Partial Erasing (PE). To complement partial erasing in the AQM, a rate control protocol similar to TFRC is proposed to ensure fairness between video and non-video traffic. We demonstrate the viability of the proposed framework by simulating High Definition (HD) Video on Demand (VoD) streaming on the popular network simulator ns-2. Our results show that EPIQ improves video quality in terms of PSNR by at least 3 dB over traditional video streaming frameworks. In addition, its packet loss ratio and delay jitter performance are comparable to the optimal video streaming offered by multi-priority systems such as DiffServ.

The main objective of the second part of the thesis is to develop a vehicle active-safety framework that utilizes video streaming and vehicle-to-vehicle (V2V) communication for driver warning. Most prior efforts in V2V safety applications have been limited to sharing vehicle status data between connected vehicles. Video streaming, on the other hand, has mainly been proposed for video content sharing between vehicles or dashboard camera sharing. We propose a Cooperative Advanced Driver Assistance System (C-ADAS) in which vehicles share visual information and fuse it with local visuals to improve the performance of driver assistance systems. In our proposed system, vehicles share detected objects (e.g., pedestrians, vehicles, cyclists) and important camera data using DSRC technology. A vehicle receiving data from an adjacent vehicle can then fuse the received visual data with its own camera views to create a much richer visual scene. The sharing of data is motivated by the fact that some critical visual views captured by one vehicle are not visible to other vehicles in the same environment. Sharing such data in real time provides an invaluable new level of awareness that can significantly enhance the safety systems of driver-assistance, connected, and/or autonomous vehicles. Experimental results showed that our proposed system performed as intended and was able to warn drivers ahead of time; consequently, it could mitigate major accidents and save lives.
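The PEP layout and tail-wise Partial Erasing described in the first part can be sketched in a few lines. This is a hypothetical illustration only: the class name, segment layout, and sizes are invented, not taken from the thesis.

```python
# Illustrative sketch of a Partially Erasable Packet (PEP): segments are
# ordered by priority (highest next to the header), and a congested router
# erases bytes starting from the tail. Names and layout are hypothetical.

class PEPPacket:
    def __init__(self, segments):
        # segments: list of (priority, payload), priority 0 = highest.
        # Sorting puts high-priority segments at the front (near the header).
        self.segments = sorted(segments, key=lambda s: s[0])

    def size(self):
        return sum(len(payload) for _, payload in self.segments)

    def partial_erase(self, bytes_to_drop):
        """Erase up to bytes_to_drop bytes from the tail (lowest priority first)."""
        while bytes_to_drop > 0 and self.segments:
            prio, payload = self.segments[-1]
            if len(payload) <= bytes_to_drop:
                bytes_to_drop -= len(payload)
                self.segments.pop()          # drop the whole tail segment
            else:
                self.segments[-1] = (prio, payload[:-bytes_to_drop])
                bytes_to_drop = 0

pkt = PEPPacket([(2, b"enhance"), (0, b"base"), (1, b"mid")])
pkt.partial_erase(9)   # congestion: a router erases 9 bytes from the tail
```

After the erase, the base-layer segment survives intact while the enhancement data is sacrificed, which is the graceful-degradation behavior the abstract describes.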
- Title
- Theoretical analysis of electronic, thermal, and mechanical properties in gallium oxide
- Creator
- Domenico Santia, Marco
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
In recent years, Ga2O3 has proven to be a promising semiconductor candidate for a wide array of power electronics and optoelectronics devices due to its wide bandgap, high breakdown voltage, and growth potential. However, the material suffers from very low thermal conductivity and subsequent self-heating issues. Additionally, the complexity of the crystal structure, coupled with the lack of empirical data, has restricted the predictive power of modelling material properties using traditional methods. The objective of this dissertation is to provide a detailed theoretical characterization of material properties in the wide bandgap semiconductor Ga2O3 using first-principles methods requiring no empirical inputs. The lattice thermal conductivity of bulk β-Ga2O3 is predicted using a combination of first-principles-determined harmonic and anharmonic force constants within a Boltzmann transport formalism, revealing a distinct anisotropy and a strong contribution to thermal conduction from optical phonon modes. Additionally, the quasiharmonic approximation is utilized to estimate volumetric effects such as the anisotropic thermal expansion.

To evaluate the efficacy of heat removal from β-Ga2O3, the thermal boundary conductance is computed within a variance-reduced Monte Carlo framework utilizing first-principles-determined phonon-phonon scattering rates for layered structures containing chromium or titanium as an adhesive layer between a β-Ga2O3 substrate and an Au contact. The adhesive layer improves the overall thermal boundary conductance significantly, with the maximum value found using a 5 nm layer of chromium, exceeding the more traditional titanium adhesive layers by a factor of 2. This indicates the potential of heatsink-based thermal management as an effective solution to the self-heating issue.

Additionally, this dissertation provides a detailed characterization of the effect of strain on fundamental material properties of β-Ga2O3. Due to the highly anisotropic nature of the crystal, the effect strain can have on electronic, mechanical, and optical properties is largely unknown. Using the quasi-static formalism within a DFT framework and the stress-strain approach, the effect of strain can be evaluated and combined with the anisotropic thermal expansion to incorporate an accurate temperature dependence. It is found that the elastic stiffness constants do not vary significantly with temperature. The computed anisotropy is unique and differs significantly from that of similar monoclinic crystal structures, indicating the important role of the polyhedral linkage in the reported anisotropy of material properties. Lastly, the dependence of the dielectric function on strain is evaluated using a modified stress-strain approach. This elasto-optic, or photoelastic, effect is found to be significant for sheared crystal configurations, which opens up a potentially unexplored application space for Ga2O3 as an acousto-optic modulation device.
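The lattice thermal conductivity calculation described above follows the standard phonon Boltzmann-transport expression; in generic notation (not taken from the dissertation):

```latex
\kappa_{\alpha\beta} \;=\; \frac{1}{V}\sum_{\lambda} C_{\lambda}\, v_{\lambda}^{\alpha}\, v_{\lambda}^{\beta}\, \tau_{\lambda}
```

Here λ runs over phonon modes, C_λ is the mode heat capacity, v_λ^α the α-component of the group velocity, τ_λ the relaxation time from the anharmonic force constants, and V the cell volume. The anisotropy of κ reported for β-Ga2O3 enters through the direction-resolved group velocities.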
- Title
- Natural language based control and programming of robotic behaviors
- Creator
- Cheng, Yu (Graduate of Michigan State University)
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
"Robots have been transforming our daily lives by moving from controlled industrial lines to unstructured and dynamic environments such as homes, offices, or outdoors, working closely with human co-workers. Accordingly, there is an emerging and urgent need for human users to communicate with robots through natural language (NL) due to its convenience and expressibility, especially for technically untrained people. Nevertheless, two fundamental problems remain unsolved for robots working in such environments. On one hand, controlling robot behaviors in environments made dynamic by the presence of people is still a daunting task. On the other hand, robot skills are usually preprogrammed, while an application scenario may require a robot to perform new tasks. Programming a new skill into a robot using NL on the fly also requires tremendous effort. This dissertation tackles these two problems in the framework of supervisory control.

On the control aspect, it will be shown that ideas drawn from dynamic discrete event systems can be used to model environmental dynamics and guarantee the safety and stability of robot behaviors. Specifically, the procedures to build the robot behavioral model and the criteria for model property checking will be presented. As there are enormous utterances in language with different abstraction levels, a hierarchical framework is proposed to handle tasks lying at different logical depths. Behavior consistency and stability under hierarchy are discussed. On the programming aspect, a novel online programming-via-NL approach that formulates the problem in state space is presented. This method can be applied on the fly without terminating the robot's execution. The advantage of such a method is that there is no need for the laborious data labeling for skill training that traditional offline training methods require.

In addition, integrated with the developed control framework, the newly programmed skills can also be applied in dynamic environments. Beyond the developed robot control approach that translates language instructions into symbolic representations to guide robot behaviors, a novel approach that transforms NL instructions into scene representations is presented for guiding robot behaviors such as robotic drawing and painting. Instead of using a local object library or direct text-to-pixel mappings, the proposed approach utilizes knowledge retrieved from Internet image search engines, which helps to generate diverse and creative scenes. The proposed approach allows interactive tuning of the synthesized scene via NL. This helps to generate more complex and semantically meaningful scenes, and to correct training errors or bias. The success of robot behavior control and programming relies on correct estimation of task implementation status, which comprises robotic status and environmental status. Besides vision information for estimating environmental status, tactile information is heavily used to estimate robotic status. In this dissertation, correlation-based approaches have been developed to detect slippage occurrence and slipping velocity, which provide grasp status to the high symbolic level and are used to control grasp force at the lower continuous level. The proposed approaches can be used with different sensor signal types and are not limited to customized designs. The proposed NL-based robot control and programming approaches in this dissertation can be applied to other robotic applications, and help to pave the way for flexible and safe human-robot collaboration."--Pages ii-iii.
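The correlation-based slip detection mentioned in the abstract can be sketched as follows. The two-element geometry, spacing, and signals below are hypothetical, not the dissertation's design: the idea is only that during slip the pressure pattern at one tactile element reappears at a neighbor after a delay, and the lag of the cross-correlation peak yields a velocity estimate.

```python
# Hedged sketch of correlation-based slip detection: the lag of the
# cross-correlation peak between two adjacent tactile signals, times the
# sample period, gives the travel time across the element spacing.
import numpy as np

def estimate_slip(sig_a, sig_b, dt, spacing):
    """Return an estimated slip velocity (m/s); 0.0 if no consistent delay."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    corr = np.correlate(b, a, mode="full")
    lag = int(np.argmax(corr)) - (len(a) - 1)   # samples by which b trails a
    if lag <= 0:
        return 0.0                              # no delay -> no slip detected
    return spacing / (lag * dt)

# Synthetic example: element B sees the same pressure burst 5 samples later.
t = np.arange(200)
burst = np.exp(-0.5 * ((t - 80) / 6.0) ** 2)
sig_a = burst
sig_b = np.roll(burst, 5)
v = estimate_slip(sig_a, sig_b, dt=1e-3, spacing=2e-3)  # 1 kHz, 2 mm spacing
```

With a 5-sample lag at 1 kHz over 2 mm, the estimate is 0.4 m/s; identical signals produce no detection.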
- Title
- Automated PET/CT registration for accurate reconstruction of PET images
- Creator
- Khurshid, Khawar
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
-
The use of a CT attenuation correction (CTAC) map for the reconstruction of PET images can introduce attenuation artifacts due to potential misregistration between the PET and CT data. This misregistration is mainly caused by patient motion and physiological movement of organs during the acquisition of the PET and CT scans. In cardiac exams, the motion of the patient may not be significant, but diaphragm movement during the respiratory cycle can displace the heart by up to 2 cm along the long axis of the body. This shift can project the PET heart onto the lungs in the CT image, thereby producing an underestimated value for the attenuation. In brain studies, patients undergoing a PET scan are often unable to follow instructions to keep their head still, resulting in misregistered PET and CT image datasets. The head movement is quite significant in many cases despite the use of head restraints. This misaligns the PET and CT data, creating an erroneous CT attenuation correction map. In such cases, bone or air attenuation coefficients may be projected onto the brain, causing an overestimation or underestimation of the resulting CTAC values.

To avoid misregistration artifacts and potential diagnostic misinterpretation, automated software for PET/CT registration has been developed that works for both cardiac and brain datasets. This software segments the PET and CT data, extracts the brain or heart surface information from both datasets, and compensates for the translational and rotational misalignment between the two scans. The PET data are reconstructed using the aligned CTAC, and the results are analyzed and compared with the original dataset. This procedure has been evaluated on 100 cardiac and brain PET/CT datasets, and the results show that the artifacts due to misregistration between the two modalities are eliminated after the PET and CT images are aligned.
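The translational and rotational compensation step is a rigid-body fit. The sketch below uses the standard Kabsch (SVD) method on matched 3-D surface points; this is an assumption for illustration, since the abstract does not specify the software's actual alignment algorithm or how correspondences are established.

```python
# Minimal rigid-alignment sketch (Kabsch/SVD) for matched 3-D surface
# points, illustrating translation+rotation compensation. Real PET/CT
# registration must first establish point correspondences; here they are
# assumed known.
import numpy as np

def rigid_align(src, dst):
    """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example: recover a known 10-degree rotation about z plus a 2 cm axial shift.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
ang = np.deg2rad(10.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
moved = pts @ R_true.T + np.array([0.0, 0.0, 2.0])
R, t = rigid_align(pts, moved)
```

For noise-free correspondences the fit recovers the transform exactly (up to floating-point error), which is why surface extraction quality dominates accuracy in practice.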
- Title
- A container-attachable inertial sensor for real-time hydration tracking
- Creator
- Griffith, Henry
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
The underconsumption of fluid is associated with multiple adverse health outcomes, including reduced cognitive function, obesity, and cancer. To aid individuals in maintaining adequate hydration, numerous sensing architectures for tracking fluid intake have been proposed. Amongst the various approaches considered, container-attachable inertial sensors offer a non-wearable solution capable of estimating aggregate consumption across multiple drinking containers. The research described herein demonstrates techniques for improving the performance of these devices.

A novel sip detection algorithm designed to accommodate the variable duration and sparse occurrence of drinking events is presented at the beginning of this dissertation. The proposed technique identifies drinks using a two-stage segmentation and classification framework. Segmentation is performed using a dynamic partitioning algorithm which spots the characteristic inclination pattern of the container during drinking. Candidate drinks are then distinguished from handling activities with similar motion patterns using a support vector machine classifier. The algorithm is demonstrated to improve the true positive detection rate from 75.1% to 98.8% versus a benchmark approach employing static segmentation.

Multiple strategies for improving drink volume estimation performance are demonstrated in the latter portion of this dissertation. The proposed techniques are verified through a large-scale data collection consisting of 1,908 drinks consumed by 84 individuals over 159 trials. Support vector machine regression models are shown to improve per-drink estimation accuracy versus the prior state-of-the-art for a single inertial sensor, with mean absolute percentage error reduced by 11.1%. Aggregate consumption accuracy is also improved versus previously reported results for a container-attachable device.

An approach for computing aggregate consumption using fill level estimates is also demonstrated. Fill level estimates are shown to exhibit superior accuracy with reduced inter-subject variance versus volume models. A heuristic fusion technique for further improving these estimates is also introduced herein; heuristic fusion is shown to reduce root mean square error versus direct estimates by over 30%. The dissertation concludes by demonstrating the ability of the sensor to operate across multiple containers.
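The two-stage idea above, segment candidate tilts and then classify them, can be illustrated with a toy first stage. The threshold and minimum duration below are invented for illustration and are not the dissertation's dynamic-partitioning parameters.

```python
# Hedged sketch of the segmentation stage: spot candidate drinks as
# contiguous runs where container inclination exceeds a threshold, allowing
# variable-duration events. A second stage (an SVM in the dissertation)
# would then separate true drinks from similar handling motions.
def segment_drinks(tilt_deg, threshold=30.0, min_len=3):
    """Return (start, end) index pairs of candidate drinking events."""
    events, start = [], None
    for i, tilt in enumerate(tilt_deg):
        if tilt > threshold and start is None:
            start = i                         # tilt onset
        elif tilt <= threshold and start is not None:
            if i - start >= min_len:          # keep only sustained tilts
                events.append((start, i))
            start = None
    if start is not None and len(tilt_deg) - start >= min_len:
        events.append((start, len(tilt_deg)))
    return events

# Two sustained tilts and one brief handling spike (index 8) in between.
tilt = [5, 8, 40, 45, 50, 42, 7, 6, 35, 9, 4, 38, 41, 44, 6]
events = segment_drinks(tilt)
```

The brief spike is rejected by the duration bound, which is the kind of false positive the classifier stage further suppresses.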
- Title
- Downlink resource blocks positioning and scheduling in LTE systems employing adaptive frameworks
- Creator
- Abusaid, Osama M.
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
-
The present expansion in size and complexity of LTE networks is hindering their performance and reliability. This hindrance is manifested in deteriorating User Equipment throughput and latency as a consequence of deteriorating eNodeB downlink throughput, leading to the need for smart eNodeBs capable of adapting to the changing communication environment. The proposed work aims at developing Self-Organization (SO) techniques and frameworks for LTE networks at the Resource Block (RB) scheduling management level. After reviewing the existing literature on self-organization techniques and scheduling strategies recently implemented in other wireless networks, we identify several contrasting needs that can be addressed jointly. The deployment of the introduced algorithms in the communication network is expected to lead to improved overall network performance.

The main feature of the LTE network family is the feedback that the cell receives from the users. The feedback includes the downlink channel assessment based on the Channel Quality Indicator (CQI) measured by the User Equipment (UE) in the last Transmission Time Interval (TTI). This feedback should be the main decision factor in allocating Resource Blocks (RBs) among users. The challenge is how to map the users' data onto the RBs based on the CQI. The thesis advances two approaches toward that end:

- The allocation among the current users for the next TTI should be mapped consistent with the historical CQI feedback received from users over prior transmission durations. This approach also aims at offering a solution to the bottleneck capacity issue in the scheduling of LTE networks. To that end, we present an implementation of a modified Self-Organizing Map (SOM) algorithm for mapping incoming data onto RBs. Such an implementation can handle the collective cell, enabling the cell to become smarter. The criteria for measuring eNodeB performance include throughput, fairness, and the trade-off between these attributes.
- Another promising and complementary approach is to tailor Recurrent Neural Networks (RNNs) to implement optimal dynamic mappings of the RBs in response to the history of CQI feedback. An RNN can build its own internal state over the entire training CQI sequence and consequently make more viable predictions. With this dynamic mapping technique, prediction becomes more accurate in time-varying channel environments.

Overall, collective cell management would become more intelligent and adaptable to changing environments. Consequently, a significant performance improvement can be achieved at lower cost. Moreover, a general tunability of the scheduling system becomes possible, incorporating a trade-off between system complexity and QoS.
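For a flavor of CQI-feedback-driven RB allocation, the sketch below uses the classic proportional-fair rule as a simple stand-in; it is not the thesis's modified SOM or RNN mapping, and the CQI values and user counts are invented.

```python
# Toy CQI-driven scheduler (proportional fair, a standard baseline): each
# resource block goes to the user maximizing current-CQI / average-throughput,
# balancing channel quality against fairness. Values are hypothetical.
def schedule_rbs(cqi, avg_tput, n_rbs):
    """cqi[u]: current channel quality per user; avg_tput[u]: running average.
    Returns a list assigning each RB to a user index."""
    assignment = []
    tput = list(avg_tput)
    for _ in range(n_rbs):
        # PF metric: favors good channels and users who have received little.
        metric = [c / max(t, 1e-9) for c, t in zip(cqi, tput)]
        user = metric.index(max(metric))
        assignment.append(user)
        tput[user] += cqi[user]     # granting an RB reduces this user's claim
    return assignment

# User 0 has the better channel, but fairness hands user 1 the second RB.
rbs = schedule_rbs(cqi=[10.0, 5.0], avg_tput=[1.0, 1.0], n_rbs=3)
```

The SOM/RNN approaches in the thesis replace this per-TTI metric with a learned mapping over CQI histories, but the fairness/throughput trade-off being tuned is the same.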
- Title
- Study of nanocomposites and nanowire devices for THz circuit applications
- Creator
- Yang, Xianbo
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
Interest in the terahertz (10^11-10^13 Hz) spectral region is driven by the possibility of exploiting unique interactions between electromagnetic fields and materials in this regime. The potential of THz has been examined using quasi-optical table-top systems. There is significant interest in miniaturizing these bench-top quasi-optical systems to the integrated circuit level in order to realize functions and benefits similar to those in the digital and RF integrated circuit areas. Integration of both passive and active devices at the wafer level is necessary to meet this challenge. Conventional integration approaches (e.g., microstrip transmission lines) do not directly lend themselves to the design and fabrication of THz circuits, and new design approaches are required. This work proposes and demonstrates novel approaches to achieve both active and passive element integration at the wafer level that are compatible with large-area and low-temperature processes, and paves the path to realizing highly functional, compact, low-cost THz systems.

THz waveguides and interconnects are among the fundamental building blocks of THz passives. This research investigates the use of thin dielectric ribbons made from polymer-ceramic nanocomposite for the fabrication of planar, low-loss, and large-area-compatible THz waveguides. Simulations show the ribbon waveguides provide low-loss THz wave propagation when a combination of a high dielectric constant (high-k) core and a low dielectric constant cladding is used. This combination provides stronger field confinement and reduces losses at waveguide bends. Two different fabrication approaches are investigated: photopatterning of tailorable nanocomposite thin films, and laser cutting of dry nanocomposite thin films. Measurements of different waveguide samples validate the simulated results and prove that low-cost, wafer-level planar THz integrated circuits can be realized with the proposed waveguides.

THz active devices are the core elements required to build THz circuits, and the diode is a key component needed to form a basic THz active circuit. Semiconducting n-type GaAs nanowires are utilized in the fabrication of THz Schottky diodes. Nanowire-based devices can achieve high cut-off frequencies, but an individual nanowire has a high impedance that is not suitable for wide-band impedance matching. To overcome this challenge, multiple nanowires placed in parallel are integrated together to achieve the desired impedance while maintaining a high cut-off frequency. A novel low-cost process using photolithography is applied to fabricate sub-micron devices. Fabrication of nanowire-based devices is compatible with integration on a host of large-area substrates at low processing temperatures. These diodes are first utilized in the design of THz detectors; calculated and measured results show strong nonlinear rectification behavior and high sensitivity over a wide frequency band (0.1-1 THz). In parallel, an alternative method of fabricating THz detectors was also investigated, in which active devices are embedded within the dielectric layers forming the waveguides. This avoids the use of flip-chip or wire bonds to connect the devices and thus minimizes parasitics. GaAs Schottky Barrier Diodes (SBDs) are directly integrated with broadband log-periodic antennas to design a highly sensitive broad-band THz detector. The calculated and measured sensitivity of the detector closely matches the performance of existing commercial THz detectors fabricated using elaborate micromachining techniques. A THz image sensor is fabricated and demonstrated in this work to prove the feasibility of this concept. This fabrication approach is compatible with large-area, low-cost, low-temperature processing and can also be implemented in heterogeneous integration of THz devices on a host of substrates.
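The nonlinear rectification that makes a Schottky diode usable as a THz detector can be illustrated with the ideal diode equation; the saturation current and ideality factor below are arbitrary illustrative values, not measurements from this work.

```python
# Sketch of square-law detection in a Schottky diode: the curvature of the
# exponential I-V characteristic converts a zero-mean RF voltage into a net
# positive DC current component. Parameter values are arbitrary.
import math

def schottky_current(v, i_s=1e-12, n=1.2, vt=0.02585):
    """Ideal diode equation I = Is * (exp(V / (n*Vt)) - 1)."""
    return i_s * (math.exp(v / (n * vt)) - 1.0)

# Average the current over one cycle of a small sinusoidal excursion:
# by convexity the average exceeds I(0) = 0, i.e. the diode rectifies.
amp = 0.01   # 10 mV RF amplitude
samples = [schottky_current(amp * math.sin(2 * math.pi * k / 1000))
           for k in range(1000)]
i_dc = sum(samples) / len(samples)
```

The rectified DC component i_dc grows roughly with the square of the RF amplitude at small signals, which is the sensitivity mechanism exploited over the 0.1-1 THz band.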
- Title
- Nanorobotic end-effectors : design, fabrication, and in situ characterization
- Creator
- Fan, Zheng (Of Michigan State University)
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
Nanorobotic end-effectors have promising applications in nano-fabrication, nano-manufacturing, nano-optics, nano-medicine, and nano-sensing; however, the low performance of conventional end-effectors has prevented their widespread utilization in various fields. There are two major difficulties in developing end-effectors: their nano-fabrication and their advanced characterization at the nanoscale. Here we introduce several types of end-effectors: the nanotube fountain pen (NFP), the super-fine nanoprobe, metal-filled carbon nanotube (m@CNT)-based sphere-on-pillar (SOP) nanoantennas, the tunneling nanosensor, and the nanowire-based memristor.

The investigations on the NFP are focused on nano-fluidics and nano-fabrication. The NFP can direct-write metallic "inks" and fabricate complex metal nanostructures from 0D to 3D under position servo control, which is critically important to future large-scale, high-throughput nanodevice production. With the help of the NFP, we can fabricate end-effectors such as the super-fine nanoprobe and the m@CNT-based SOP nanoantennas. These end-effectors are able to detect local flaws or characterize the electrical and mechanical properties of a nanostructure. Moreover, using the electron energy-loss spectroscopy (EELS) technique during operation of the SOP optical antenna opens a new basis for the application of nanorobotic end-effectors: it allows advanced characterization of the physical changes, such as carrier diffusion, that are directly responsible for the device's properties. Coupled with scanning transmission electron microscopy (STEM) characterization techniques, the development of the tunneling nanosensor advances this field of science into the quantum world. Furthermore, the combined STEM-EELS technique plays an important role in our understanding of the memristive switching performance of the nanowire-based memristor. The development of these nanorobotic end-effectors expands our ability to perform in situ studies, providing efficient means of in situ nanostructure fabrication and advanced characterization of nanomaterials.
- Title
- Condition monitoring and analysis of a permanent magnet synchronous machine drive system
- Creator
- Babel, Andrew Stephen
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
Incipient faults in inverter-driven permanent magnet machine drives can often be detected, and their progression monitored, through some characteristic parameter in the machine's modeling equations. Some diagnosis and prognosis methods use features not reflected in the machine model, as is evident when voltage and current frequency harmonics are used. To indicate an inverter fault, a model of the inverter is instead used and its characteristic parameters found in order to detect parametric changes. Insulation condition is assessed by monitoring the high-frequency slot capacitance and resistance. Demagnetization faults are detected by monitoring changes in the inductance.

In order to improve permanent magnet synchronous machine reliability, inverter faults must be detected in addition to motor faults. A minimally invasive technique is developed which uses the device voltage-current characteristics. By detecting small changes in the voltage-current characteristic, each device's condition is assessed and the time-to-failure estimation is improved.

The remnant flux density of one or more rotor magnets in a PMSM can be reduced, resulting in demagnetization, which occurs from either over-temperature or excessive demagnetizing current. A magnet with reduced remnant flux density also has reduced coercivity--it is more susceptible to further reversible demagnetization. Voltage and current harmonics can detect the presence of demagnetization but cannot differentiate between rotor eccentricity and demagnetization in all cases. The direct-axis incremental inductance can also be used to indicate demagnetization, because it detects a change in the saturation characteristic. Analysis is used to show the process of demagnetization.

Stator winding insulation failure can lead to catastrophic failure, cessation of operation, or the need for mitigation. Insulation degradation is caused either by voltage stress across the insulation or by insulation thermal cycling. Because insulation degradation is reflected in the insulation's electrical characteristics, a method is presented to assess the insulation through its equivalent resistance and capacitance. A method is shown to assess the insulation condition using the currents present during switching transitions induced by high dV/dt.
Show less
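The inductance-monitoring idea in this abstract can be made concrete with a minimal sketch: estimate incremental inductance as the slope of flux linkage versus current, then flag a fault when it deviates from a healthy baseline. All names, data, and the threshold below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def incremental_inductance(flux, current):
    """Estimate incremental inductance dPsi/di from sampled flux-linkage
    and current arrays (hypothetical measurement data)."""
    return np.gradient(flux, current)

def demag_indicator(L_healthy, L_measured, threshold=0.05):
    """Flag a possible demagnetization fault when the incremental inductance
    deviates from the healthy baseline by more than `threshold` (fractional
    change). The 5% threshold is an assumed value for illustration."""
    deviation = np.max(np.abs(L_measured - L_healthy) / np.abs(L_healthy))
    return bool(deviation > threshold)
```

In practice such a detector would compare inductance profiles over the same operating points, since saturation makes the incremental inductance current-dependent.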
- Title
- Design and simulation of single-crystal diamond diodes for high voltage, high power and high temperature applications
- Creator
- Suwanmonkha, Nutthamon
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
Diamond has exceptional properties and great potential for making high-power semiconducting electronic devices that surpass the capabilities of other common semiconductors, including silicon. The superior properties of diamond include a wide bandgap, high thermal conductivity, large electric breakdown field and fast carrier mobilities. All of these properties are crucial for a semiconductor used to make electronic devices that operate at high power levels, high voltage and high temperature.

Two-dimensional semiconductor device simulation software such as Medici assists engineers in designing device structures that meet the performance requirements of device applications. Most physical material parameters of the well-known semiconductors are already compiled and embedded in Medici; however, diamond is not one of them. Material parameters of diamond, including models for incomplete ionization, temperature- and impurity-dependent mobility, and impact ionization, are not readily available in software such as Medici. In this work, models and data for diamond semiconductor material have been developed for Medici based on results reported in the research literature and on experimental work at Michigan State University. After equipping Medici with diamond material parameters, simulations of various diamond diodes, including Schottky, PN-junction and merged Schottky/PN-junction diode structures, are reported. Diodes are simulated versus changes in doping concentration, drift layer thickness and operating temperature. In particular, the diode performance metrics studied include the breakdown voltage, turn-on voltage, and specific on-resistance. The goal is to find designs which yield low power loss and provide high voltage blocking capability. Simulation results are presented that provide insight for the design of diamond diodes using the various diode structures. Results are also reported on the use of field plate structures in the simulations to control the electric field and increase the breakdown voltage.
- Title
- Visual data representation and coding based on tensor decomposition and super-resolution
- Creator
- Mahfoodh, Abo Talib
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
Tensor-based methods have been used in a wide range of signal processing applications. A particular area of interest is tensor decomposition, which can be used to reduce the dimensionality of massive multidimensional data; it can be considered a higher-dimensional extension of the popular Singular Value Decomposition (SVD) methods used for matrix analysis. The lower-dimensional representation resulting from tensor decomposition can be used for classification, pattern recognition, and reconstruction.

Our objective in the first part of this thesis is to develop a tensor coding framework, based on a tensor decomposition method, for efficient representation and compression of visual data. As part of the proposed framework, we developed a tensor decomposition algorithm that decomposes the input tensor into a set of rank-one tensors. The proposed decomposition is designed to be efficient specifically for visual data and is applied in a block-wise manner. Two partitioning methods are proposed for the tensor coding framework: uniform and adaptive tree partitioning. The former subdivides a region into a set of equal-size blocks, while the latter subdivides a region into a set of variable-size blocks; the decision whether to subdivide a region is made based on the amount of information present and the overall available bitrate. A tree data structure stores the partitioning information required for the tensor reconstruction process.

Furthermore, an encoder/decoder framework is proposed for compressing and storing the decomposed data. The proposed framework provides a number of desirable properties, especially at the decoder side, which can be critical for some applications: low-complexity reconstruction, random access, and scalability are the main properties we have targeted. We demonstrate the viability of the proposed tensor coding framework by employing it for the representation and coding of three types of data sets: hyperspectral/multispectral images, biometric face image ensembles, and low-motion videos. These data sets can be arranged as either three- or four-dimensional tensors. For each application, we show that the compression efficiency, along with the inherent properties of the proposed framework, provides a competitive approach to current standard methods.

In the second part of the thesis, we propose an example-based super-resolution algorithm for a new framework of scalable video streaming. The proposed method is applicable to scalable videos where the enhancement layer of some frames might be dropped due to changing network conditions, a streaming scenario we call Inconsistent Scalable Video (ISV) streaming. At the decoder, the frames with the enhancement layer are used as a dictionary for super-resolving other video frames whose enhancement layers were dropped. The proposed super-resolution framework is integrated with the Google VP9 video codec and applied to various High Definition (HD) videos to estimate the dropped enhancement layer. Our simulation results show an improvement, both visually and in terms of PSNR, over traditional interpolation up-sampling filters.
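The decomposition into a sum of rank-one tensors that this abstract describes can be sketched generically with a greedy deflation scheme: repeatedly fit the best rank-one term by alternating power iterations, subtract it, and continue. This is a textbook illustration of the idea, not the thesis's visual-data-specific algorithm; all function names and iteration counts are assumptions.

```python
import numpy as np

def rank_one_step(T, n_iter=50):
    """Best rank-one approximation s * (a outer b outer c) of a 3-way
    tensor via alternating power iterations."""
    I, J, K = T.shape
    a, b, c = np.ones(I), np.ones(J), np.ones(K)
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    s = np.einsum('ijk,i,j,k->', T, a, b, c)   # scale of the rank-one term
    return s, a, b, c

def greedy_decompose(T, rank):
    """Decompose T into `rank` rank-one terms by deflation: fit a term,
    subtract it from the residual, repeat."""
    terms, R = [], T.copy()
    for _ in range(rank):
        s, a, b, c = rank_one_step(R)
        terms.append((s, a, b, c))
        R = R - s * np.einsum('i,j,k->ijk', a, b, c)
    return terms
```

A block-wise coder in the spirit of the abstract would run such a decomposition per partition block and entropy-code the factor vectors.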
- Title
- Nonlinear identification of the total baroreflex arc
- Creator
- Moslehpour, Mohsen
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
The baroreflex is one of the most important regulatory mechanisms of blood pressure in the body, and the total baroreflex arc is defined as the open-loop system relating carotid sinus pressure (CSP) to arterial pressure (AP). This system is known to exhibit nonlinear behaviors, yet few studies have quantitatively characterized its nonlinear dynamics. The aim of this thesis was to develop a nonlinear model of the sympathetically-mediated total arc, without assuming any model form, in both healthy and hypertensive rats.

Normal rats were studied under anesthesia. The vagal and aortic depressor nerves were sectioned, and the carotid sinus regions were isolated and attached to a servo-controlled piston pump. CSP was perturbed using a Gaussian white noise signal. A second-order Volterra model was developed by applying nonparametric identification to the measurements. The second-order kernel was mainly diagonal, but the diagonal differed in shape from the first-order kernel. Hence, a reduced second-order model was similarly developed, comprising a linear dynamic system in parallel with a squaring system in cascade with a slower linear dynamic system. This "Uryson" model predicted AP changes 12% better (p < 0.01) than a conventional linear dynamic model in response to new Gaussian white noise CSP. The model also predicted nonlinear behaviors, including thresholding and mean responses to CSP changes about the mean.

Spontaneously hypertensive rats were studied under the same protocol. The second-order kernel in these rats was also mainly diagonal and followed the Uryson structure. The models of the total arc predicted AP 21-43% better (p < 0.005) than conventional linear dynamic models in response to a new portion of the CSP measurement. The linear and nonlinear terms of these validated models were compared to the corresponding terms of an analogous model for normotensive rats. The nonlinear gains for the hypertensive rats were significantly larger than those for the normotensive rats (e.g., gain of -0.38±0.04 (unitless) for hypertensive rats versus -0.22±0.03 for normotensive rats; p < 0.01), whereas the linear gains were similar. Hence, nonlinear dynamic functioning of the sympathetically-mediated total arc may enhance baroreflex buffering of AP increases more in spontaneously hypertensive rats than in normotensive rats.

The importance of higher-order nonlinear dynamics was also assessed via development and evaluation of a third-order nonlinear model of the total arc using the same experimental data. Third-order Volterra and Uryson models were developed by employing several nonparametric and parametric identification methods. The R2 values between the measured AP and the AP predicted by the best third-order Volterra and third-order Uryson models in response to new Gaussian white noise CSP were not statistically different from the corresponding values for the previously established second-order Uryson model, in either normotensive or hypertensive rats. Further, none of the third-order models predicted important nonlinear behaviors, including thresholding and saturation, better than the second-order Uryson model. Additional experiments suggested that the unexplained AP variance was partly due to higher brain center activity. In conclusion, the second-order Uryson model sufficed to represent the sympathetically-mediated total arc under the employed experimental conditions, and the nonlinear part of this model showed significant changes in hypertensive rats compared to normotensive rats.
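The reduced second-order ("Uryson") structure described in this abstract, a linear dynamic path in parallel with a squaring nonlinearity feeding a slower linear dynamic system, can be sketched as a few lines of discrete-time convolution. The kernels here are placeholders, not the identified ones from the thesis.

```python
import numpy as np

def uryson_predict(csp, h_lin, h_slow):
    """Predict AP changes with a reduced second-order (Uryson) model:
    one linear path plus a squaring path followed by a slower linear
    filter. `h_lin` and `h_slow` are illustrative impulse responses."""
    u = csp - np.mean(csp)                         # perturbation about the mean
    linear_path = np.convolve(u, h_lin)[:len(u)]   # linear dynamic system
    squared_path = np.convolve(u**2, h_slow)[:len(u)]  # squarer -> slow filter
    return linear_path + squared_path
```

Because the squaring path responds identically to positive and negative CSP swings, such a model can reproduce the thresholding and mean-shift behaviors the abstract mentions.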
- Title
- Kernel methods for biosensing applications
- Creator
- Khan, Hassan Aqeel
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
This thesis examines the design of noise-robust information retrieval techniques based on kernel methods. Algorithms are presented for two biosensing applications: (1) high-throughput protein arrays and (2) non-invasive respiratory signal estimation.

Our primary objective in protein array design is to maximize throughput by enabling detection of an extremely large number of protein targets while using a minimal number of receptor spots. This is accomplished by viewing the protein array as a communication channel and evaluating its information transmission capacity as a function of its receptor probes. In this framework, the channel capacity can be used as a tool to optimize probe design, the optimal probes being the ones that maximize capacity. The information capacity is first evaluated for a small-scale protein array with only a few protein targets; we believe this is the first effort to evaluate the capacity of a protein array channel. For this purpose, models of the proteomic channel's noise characteristics and receptor non-idealities, based on experimental prototypes, are constructed. Kernel methods are employed to extend the capacity evaluation to larger protein arrays that can potentially have thousands of distinct protein targets. A specially designed kernel, which we call the Proteomic Kernel, is also proposed. This kernel incorporates knowledge about the biophysics of target and receptor interactions into the cost function employed for evaluating channel capacity.

For respiratory estimation, this thesis investigates estimation of breathing rate and lung volume using multiple non-invasive sensors under motion-artifact and high-noise conditions. A spirometer signal is used as the gold standard for evaluating errors. A novel algorithm called segregated envelope and carrier (SEC) estimation is proposed. This algorithm approximates the spirometer signal by an amplitude-modulated signal and segregates the estimation of the frequency and amplitude information. Results demonstrate that this approach enables effective estimation of both breathing rate and lung volume. An adaptive algorithm based on a combination of Gini kernel machines and wavelet filtering is also proposed. This algorithm, titled the wavelet-adaptive Gini (or WAGini) algorithm, employs a novel wavelet-transform-based feature extraction frontend to classify the subject's underlying respiratory state. This information is then employed to select the parameters of the adaptive kernel machine based on the subject's respiratory state. Results demonstrate significant improvement in breathing rate estimation when compared to traditional respiratory estimation techniques.
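The envelope/carrier segregation underlying the SEC idea can be illustrated with the standard analytic-signal construction: the magnitude of the analytic signal gives the amplitude (volume-related) envelope, and the derivative of its phase gives the instantaneous (breathing-rate) frequency. This is a generic sketch of amplitude/frequency segregation, not the thesis's SEC algorithm.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (Hilbert-transform construction):
    zero out negative frequencies, double positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def envelope_and_rate(x, fs):
    """Split an amplitude-modulated signal into its envelope and a robust
    estimate of its carrier frequency in Hz."""
    z = analytic_signal(x)
    envelope = np.abs(z)                              # amplitude information
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)     # carrier frequency, Hz
    return envelope, float(np.median(inst_freq))
```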
- Title
- Reliability improvement of DFIG-based wind energy conversion systems by real time control
- Creator
- Elhmoud, Lina Adnan Abdullah
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
Reliability is the probability that a system or component will satisfactorily perform its intended function under given operating conditions. The average time of satisfactory operation of a system is called the mean time between failures (MTBF); a higher MTBF indicates higher reliability, and vice versa. Nowadays, reliability is of greater concern than in the past, especially for offshore wind turbines, since access to these installations in case of failure is both costly and difficult. Power semiconductor devices are often ranked as the most vulnerable components in a power conversion system from a reliability perspective. The lifetime prediction of power modules based on a mission profile is therefore an important issue. Furthermore, lifetime modeling of future large wind turbines is needed in order to make reliability predictions in the early design phase. By conducting reliability prediction in the design phase, a manufacturer can ensure that new wind turbines will operate within designed reliability metrics such as lifetime.

This work presents reliability analysis of power electronic converters for wind energy conversion systems (WECS) based on semiconductor power losses. A real-time control scheme is proposed to maximize the system's lifetime and the accumulated energy produced over that lifetime. It has been verified through the reliability model that a low-pass-filter-based control can effectively increase the MTBF and lifetime of the power modules. The fundamental reason for the higher MTBF lies in the reduction of the number of thermal cycles.

The key element in a power conversion system is the power semiconductor device, which operates as a power switch. Improvements in power semiconductor devices are the critical driving force behind the improved performance, efficiency, and reduced size and weight of power conversion systems. As power density and switching frequency increase, thermal analysis of power electronic systems becomes imperative. The analysis provides information on semiconductor device rating, reliability, and lifetime calculation. The power throughput of a state-of-the-art WECS equipped with maximum power point control algorithms is subject to wind speed fluctuations, which may cause significant thermal cycling of the IGBTs in the power converter and in turn lead to a reduction in lifetime. To address this reliability issue, a real-time control scheme based on the reliability model of the system is proposed. In this work, a doubly fed induction generator is utilized as a demonstration system to prove the effectiveness of the proposed method. An average model of the three-phase converter has been adopted for thermal modeling and lifetime estimation. A low-pass-filter-based control law is utilized to modify the power command from the conventional WECS control output. The resultant reliability performance of the system is significantly improved, as evidenced by the simulation results.
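The low-pass-filter-based control law mentioned in this abstract can be sketched in its simplest form: a first-order filter that smooths the power command so the IGBT junction temperature sees fewer and shallower swings (thermal cycles). The smoothing constant below is an assumed value for illustration, not one from the thesis.

```python
import numpy as np

def lpf_power_command(p_ref, alpha=0.02):
    """First-order low-pass filter applied to a WECS power command.
    Smaller `alpha` means heavier smoothing, hence fewer thermal cycles,
    at the cost of slower tracking of the reference."""
    p_out = np.empty_like(p_ref)
    p_out[0] = p_ref[0]
    for k in range(1, len(p_ref)):
        # discrete first-order lag: y[k] = y[k-1] + alpha*(u[k] - y[k-1])
        p_out[k] = p_out[k - 1] + alpha * (p_ref[k] - p_out[k - 1])
    return p_out
```

In a rainflow-type lifetime model, the reduced variance of the filtered command translates directly into fewer counted temperature cycles and a longer predicted module lifetime.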
- Title
- Assessment of functional connectivity in the human brain : multivariate and graph signal processing methods
- Creator
- Villafañe-Delgado, Marisel
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
"Advances in neurophysiological recording have provided a noninvasive way of inferring cognitive processes. Recent studies have shown that cognition relies on the functional integration, or connectivity, of segregated specialized regions in the brain. Functional connectivity quantifies the statistical relationships among different regions in the brain. However, current functional connectivity measures have certain limitations in the quantification of global integration and the characterization of network structure. These limitations include the bivariate nature of most functional connectivity measures, the computational complexity of multivariate measures, and graph-theoretic measures that are not robust to network size and degree distribution. Therefore, there is a need for computationally efficient, novel measures that can quantify functional integration across brain regions and characterize the structure of these networks.

This thesis makes contributions in three areas for the assessment of multivariate functional connectivity. First, we present a novel multivariate phase synchrony measure for quantifying the common functional connectivity within different brain regions. This measure overcomes the drawbacks of bivariate functional connectivity measures and provides insights into mechanisms of cognitive control not accountable by bivariate measures. Second, following the assessment of functional connectivity from a graph-theoretic perspective, we propose a graph-to-signal transformation for both binary and weighted networks. This provides the means for characterizing the network structure and quantifying information in the graph, overcoming some drawbacks of traditional graph-based measures. Finally, we introduce a new approach to studying dynamic functional connectivity networks through signals defined over networks. In this area, we define a dynamic graph Fourier transform in which a common subspace is found from the networks over time, based on the tensor decomposition of the graph Laplacian over time."--Pages ii-iii.
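The building block behind graph Fourier transforms like the dynamic GFT mentioned in this abstract is the eigendecomposition of the graph Laplacian: its eigenvectors play the role of Fourier modes for signals defined over a network. The sketch below shows that standard construction for a single weighted network; it is a generic illustration, not the thesis's tensor-based common-subspace method.

```python
import numpy as np

def graph_fourier_basis(W):
    """Eigendecomposition of the combinatorial graph Laplacian L = D - W
    of a symmetric weighted adjacency matrix W. Eigenvalues come out in
    ascending order; small eigenvalues correspond to smooth modes."""
    L = np.diag(W.sum(axis=1)) - W
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvals, eigvecs

def gft(signal, eigvecs):
    """Graph Fourier transform: project a graph signal onto the
    Laplacian eigenbasis."""
    return eigvecs.T @ signal
```

A dynamic variant would compute such a basis that is shared across a sequence of Laplacians over time, which is the role of the tensor decomposition in the abstract.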
- Title
- Brain connectivity analysis using information theory and statistical signal processing
- Creator
- Wang, Zhe (Software engineer)
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
Connectivity between different brain regions generates our minds. Existing work on brain network analysis has mainly focused on the characterization of connections between regions in terms of connectivity and causality. Connectivity measures the dependence between regional brain activities; causality analysis aims to determine the directionality of information flow among the functionally connected brain regions and to find the relationship between causes and effects.

Traditionally, the study of connectivity and causality has largely been limited to linear relationships. In this dissertation, as an effort to achieve more accurate characterization of connections between brain regions, we aim to go beyond the linear model and develop innovative techniques for both non-directional and directional connectivity analysis. Because of variability in the brain connectivity of each individual, the connectivity between two brain regions alone may not be sufficient for brain function analysis; in this research, we therefore also conduct network connectivity pattern analysis, so as to reveal more in-depth information.

First, we characterize non-directional connectivity using mutual information (MI). In recent years, MI has gradually appeared as an alternative metric for brain connectivity, since it measures both linear and non-linear dependence between two brain regions, while the traditional Pearson correlation measures only linear dependence. We develop an innovative approach to estimate the MI between two functionally connected brain regions and apply it to brain functional magnetic resonance imaging (fMRI) data. It is shown that, on average, cognitively normal subjects show larger mutual information between critical regions than Alzheimer's disease (AD) patients.

Second, we develop new methodologies for brain causality analysis based on directed information (DI). Traditionally, brain causality analysis is based on the well-known Granger Causality (GC) framework. The validity of GC has been widely recognized; however, it has also been noticed that GC relies heavily on linear prediction. When strong nonlinear interactions exist between two regions, GC analysis may lead to invalid results. In this research, (i) we develop an innovative framework for causality analysis based on DI, which reflects the information flow from one region to another and imposes no modeling constraints on the data. It is shown that DI-based causality analysis is effective in capturing both linear and non-linear causal relationships. (ii) We show the conditional equivalence between the DI framework and Friston's dynamic causal modeling (DCM), and reveal the relationship between directional information transfer and cognitive state change within the brain.

Finally, based on brain network connectivity pattern analysis, we develop a robust method for classification of AD, mild cognitive impairment (MCI) and normal control (NC) subjects under size-limited fMRI data samples. First, we calculate the Pearson correlation coefficients between all possible ROI pairs in the selected sub-network and use them to form a feature vector for each subject. Second, we develop a regularized linear discriminant analysis (LDA) approach to reduce the noise effect. The feature vectors are then projected onto a subspace using the proposed regularized LDA, where the differences between AD, MCI and NC subjects are maximized. Finally, a multi-class AdaBoost classifier is applied to carry out the classification task. Numerical analysis demonstrates that the combination of regularized LDA and the AdaBoost classifier can increase the classification accuracy significantly.
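The core point of this abstract, that MI captures nonlinear dependence which Pearson correlation misses, is easy to demonstrate with a plug-in (histogram) MI estimator. This is a generic estimator for illustration, not the innovative estimation approach the dissertation develops.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of mutual information in nats between
    two time series: I(X;Y) = sum p(x,y) log[p(x,y) / (p(x)p(y))]."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                      # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)        # marginal of x
    py = pxy.sum(axis=0, keepdims=True)        # marginal of y
    nz = pxy > 0                               # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

For y = x^2 plus small noise, the Pearson correlation with x is near zero (the relationship is symmetric), yet the MI is large; this is exactly the kind of dependence the abstract argues a correlation-only analysis would miss.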
- Title
- Low rank models for multi-dimensional data recovery and image super-resolution
- Creator
- Al-Qizwini, Mohammed
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
"In the past decade, tremendous research effort has focused on signals with specific features, especially sparse and low rank signals. Researchers showed that these signals can be recovered from a much smaller number of samples than the Nyquist rate requires. These efforts are promising for applications in which the nature of the data is known to be sparse or low rank, but the available samples are much fewer than what traditional signal processing algorithms require to grant an exact recovery.

Our objective in the first part of this thesis is to develop new algorithms for low rank data recovery from few observed samples, and for robust low rank and sparse data separation using Robust Principal Component Analysis (RPCA). Most current approaches in this class of algorithms rely on the computationally expensive Singular Value Decomposition (SVD) in each iteration to minimize the nuclear norm. We first develop new algorithms for low rank matrix completion that are more robust to noise and converge faster than previous algorithms. Furthermore, we generalize our recovery function to the multi-dimensional tensor domain to target applications that deal with multi-dimensional data. Based on this generalized function, we propose a new tensor completion algorithm to recover multi-dimensional tensors from few observed samples. We also use the same generalized functions for robust tensor recovery, to reconstruct the sparse and low rank tensors from the tensor formed by their superposition. The experimental results for this application show that our algorithms provide comparable performance to, or even outperform, state-of-the-art matrix completion, tensor completion and robust tensor recovery algorithms, while converging faster.

The second part of the thesis develops new algorithms for example-based single-image super-resolution. In this type of application, we observe a low-resolution image and, using some external "example" high-resolution/low-resolution image pairs, we recover the underlying high-resolution image. Previous efforts in this field either assumed that there is a one-to-one mapping between low-resolution and high-resolution image patches or assumed that the high-resolution patches span a lower-dimensional space. In this thesis, we propose a new algorithm that departs from these assumptions. Our algorithm uses a subspace similarity measure to find the closest high-resolution patch for each low-resolution patch. The experimental results show that DMCSS achieves clear visual improvements and an average of 1 dB improvement in PSNR over state-of-the-art algorithms in this field. Under this thesis, we are currently pursuing other low rank and image super-resolution applications, to improve the performance of our current algorithms and to find other algorithms that can run faster and perform even better."--Pages ii-iii.
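The "expensive SVD in each iteration" baseline that this abstract contrasts its faster algorithms against can be made concrete with a minimal completion loop: alternately project onto the set of rank-r matrices (via a full SVD) and re-impose the observed entries. This is a generic sketch of that baseline, with assumed parameter values, not the thesis's algorithms.

```python
import numpy as np

def complete_lowrank(M_obs, mask, rank, n_iter=300):
    """Low-rank matrix completion by alternating projections.
    Each iteration pays for a full SVD, which is exactly the cost the
    faster nuclear-norm-free methods aim to avoid."""
    X = np.where(mask, M_obs, 0.0)             # unobserved entries start at 0
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                          # keep top `rank` singular values
        X = U @ (s[:, None] * Vt)               # best rank-`rank` approximation
        X = np.where(mask, M_obs, X)            # restore observed entries
    return X
```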
- Title
- Dynamic network analysis with applications to functional neural connectivity
- Creator
- Golibagh Mahyari, Arash
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
"Contemporary neuroimaging techniques provide neural activity recordings with increasing spatial and temporal resolution yielding rich multichannel datasets that can be exploited for detailed description of anatomical and functional connectivity patterns in the brain. Studies indicate that the changes in functional connectivity patterns across spatial and temporal scales play an important role in a wide range of cognitive and executive processes such as memory and attention as well as in the...
Show more"Contemporary neuroimaging techniques provide neural activity recordings with increasing spatial and temporal resolution yielding rich multichannel datasets that can be exploited for detailed description of anatomical and functional connectivity patterns in the brain. Studies indicate that the changes in functional connectivity patterns across spatial and temporal scales play an important role in a wide range of cognitive and executive processes such as memory and attention as well as in the understanding the causes of many neural diseases and psychopathologies such as epilepsy, Alzheimers, Parkinsons and schizophrenia. Early work in the area was limited to the analysis of static brain networks obtained through averaging long-term functional connectivity, thus neglecting possible time-varying connections. There is growing evidence that functional networks dynamically reorganize and coordinate on millisecond scale for the execution of mental processes. Functional networks consist of distinct network states, where each state is defined as a period of time during which the network topology is quasi-stationary. For this reason, there has been an interest in characterizing the dynamics of functional networks using high temporal resolution electroencephalogram recordings. In this thesis, dynamic functional connectivity networks are represented by multiway arrays, tensors, which are able to capture the complete topological structure of the networks. This thesis proposes new methods for both tracking the changes in these dynamic networks and characterizing or summarizing the network states. In order to achieve this goal, a Tucker decomposition based approach is introduced for detecting the change points for task-based electroencephalogram (EEG) functional connectivity networks through calculating the subspace distance between consecutive time steps. 
This is followed by a tensor-matrix projection based approach for summarizing multiple networks within a time interval. Tensor based summarization approaches do not necessarily result in sparse networks and succinct states. Moreover, subspace based summarizations tend to capture the background brain activity more than the low energy sparse activations. For this reason, we propose utilizing the sparse common component and innovations (SCCI) model, which simultaneously finds the sparse common component and individual innovations of multiple signals. However, as the number of signals in the model increases, this becomes computationally prohibitive. In this thesis, a hierarchical algorithm to recover the common component in the SCCI model is proposed for a large number of signals. The hierarchical recovery of the SCCI model overcomes the time and memory limitations at the expense of a slight decrease in accuracy. This hierarchical model is used to separate the common and innovation components of functional connectivity networks across time. The innovation components are tracked over time to detect the change points, and the common component of the detected network states is used to obtain the network summarization. The SCCI recovery algorithm finds the sparse representation of the common and innovation components of signals with respect to pre-determined dictionaries. However, input signals are not always well-represented by pre-determined dictionaries. In this thesis, a structured dictionary learning algorithm for the SCCI model is developed. The proposed method is applied to EEG data collected during a study of error monitoring, where two different types of brain responses are elicited in response to the stimulus. The learned dictionaries can discriminate between the response types and extract the error-related potentials (ERPs) corresponding to the two responses."--Pages ii-iii.
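The subspace-distance idea summarized in the abstract above can be illustrated with a small sketch. This is not the thesis's actual Tucker-based algorithm; the eigenvector subspaces, the chordal distance, the block-structured toy networks, and the detection threshold are all invented here for illustration.

```python
import numpy as np

def leading_subspace(conn, k):
    """Orthonormal basis for the k leading eigenvectors of a symmetric connectivity matrix."""
    vals, vecs = np.linalg.eigh(conn)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:k]]

def subspace_distance(U, V):
    """Chordal distance between two k-dimensional subspaces via principal angles."""
    # Singular values of U^T V are the cosines of the principal angles.
    s = np.clip(np.linalg.svd(U.T @ V, compute_uv=False), 0.0, 1.0)
    return float(np.sqrt(U.shape[1] - np.sum(s ** 2)))

def detect_change_points(conn_series, k=2, threshold=0.5):
    """Flag time steps where the network subspace jumps between consecutive windows."""
    bases = [leading_subspace(C, k) for C in conn_series]
    dists = [subspace_distance(bases[t], bases[t + 1]) for t in range(len(bases) - 1)]
    return [t + 1 for t, d in enumerate(dists) if d > threshold], dists

def block_conn(labels, hi=1.0, lo=0.1):
    """Toy connectivity matrix: strong links within a community, weak links across."""
    labels = np.asarray(labels)
    C = np.where(labels[:, None] == labels[None, :], hi, lo)
    np.fill_diagonal(C, 0.0)
    return C

# Two quasi-stationary network states with a change point at t = 5.
series = [block_conn([0, 0, 0, 1, 1, 1])] * 5 + [block_conn([0, 1, 0, 1, 0, 1])] * 5
change_points, dists = detect_change_points(series, k=2, threshold=0.5)
```

On this toy series the detector flags a single change point at t = 5, where the community structure of the synthetic networks switches; within each state the subspace distance is essentially zero.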
- Title
- Role of flexibility in robotic fish
- Creator
- Bazaz Behbahani, Sanaz
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
"Underwater creatures, especially fish, have received significant attention over the past several decades because of their fascinating swimming abilities and behaviors, which have inspired engineers to develop robots that propel and maneuver like real fish. This dissertation is focused on the role of flexibility in robotic fish performance, including the design, dynamic modeling, and experimental validation of flexible pectoral fins, flexible passive joints for pectoral fins, and fins with...
Show more"Underwater creatures, especially fish, have received significant attention over the past several decades because of their fascinating swimming abilities and behaviors, which have inspired engineers to develop robots that propel and maneuver like real fish. This dissertation is focused on the role of flexibility in robotic fish performance, including the design, dynamic modeling, and experimental validation of flexible pectoral fins, flexible passive joints for pectoral fins, and fins with actively controlled stiffness. First, the swimming performance and mechanical efficiency of flexible pectoral fins, connected to actuator shafts via rigid links, are studied, where it is found that flexible fins demonstrate advantages over rigid fins in speed and efficiency at relatively low fin-beat frequencies, while the rigid fins outperform the flexible fins at higher frequencies. The presented model offers a promising tool for the design of fin flexibility and swimming gait, to achieve speed and efficiency objectives for the robotic fish. The traditional rigid joint for pectoral fins requires different speeds for power and recovery strokes in order to produce net thrust and consequently results in control complexity and low speed performance. To address this issue, a novel flexible passive joint is presented where the fin is restricted to rowing motion during both power and recovery strokes. This joint allows the pectoral fin to sweep back passively during the recovery stroke while it follows the prescribed motion of the actuator during the power stroke, which results in net thrust even under symmetric actuation for power and recovery strokes. The dynamic model of a robotic fish equipped with such joints is developed and validated through extensive experiments. Motivated by the need for design optimization, the model is further utilized to investigate the influences of the joint length and stiffness on the robot locomotion performance and efficiency. 
An alternative flexible joint for pectoral fins is also proposed, which enables the pectoral fin to operate primarily in the rowing mode, while undergoing passive feathering during the recovery stroke to reduce hydrodynamic drag on the fin. A dynamic model, verified experimentally, is developed to examine the trade-off between swimming speed and mechanical efficiency in the fin design. Finally, we investigate flexible fins with actively tunable stiffness, enabled by electrorheological (ER) fluids. The tunable stiffness can be used in optimizing the robotic fish speed or maneuverability in different operating regimes. Fins with tunable stiffness are prototyped with ER fluids enclosed between layers of liquid urethane rubber (Vytaflex 10). Free oscillation and base-excited oscillation behaviors of the fins are measured underwater when different electric fields are applied to the ER fluid, which are subsequently used to develop a dynamic model for the stiffness-tunable fins."--Pages ii-iii.
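The flexible passive joint described in the abstract above can be caricatured as a fin coupled to its actuator through a torsional spring-damper, with quadratic hydrodynamic drag opposing fin rotation. This sketch is not the dissertation's dynamic model; every parameter value (stiffness, damping, inertia, drag coefficient, actuation amplitude and frequency) is hypothetical, and simple explicit Euler integration stands in for a proper solver.

```python
import math

def simulate_passive_joint(k_spring=0.05, c_damp=0.01, inertia=1e-4,
                           drag_coeff=0.005, freq=1.0, amp=0.5,
                           dt=1e-4, t_end=2.0):
    """Euler simulation of a fin attached to its actuator through a torsional
    spring-damper joint (all parameters hypothetical, for illustration only)."""
    theta, omega = 0.0, 0.0          # fin angle and angular velocity
    trace = []
    for i in range(int(t_end / dt)):
        t = i * dt
        # Prescribed sinusoidal rowing motion of the actuator shaft.
        act = amp * math.sin(2 * math.pi * freq * t)
        act_dot = amp * 2 * math.pi * freq * math.cos(2 * math.pi * freq * t)
        # Spring-damper torque pulls the fin toward the actuator angle;
        # quadratic drag lets the fin sweep back passively under load.
        torque = (k_spring * (act - theta) + c_damp * (act_dot - omega)
                  - drag_coeff * omega * abs(omega))
        omega += (torque / inertia) * dt
        theta += omega * dt
        trace.append((t, act, theta))
    return trace

trace = simulate_passive_joint()
```

Comparing the actuator and fin angles in `trace` shows the fin lagging the prescribed motion with reduced amplitude, the qualitative behavior that lets a passive joint produce net thrust under symmetric actuation.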
- Title
- Secure and efficient spectrum sharing and QoS analysis in OFDM-based heterogeneous wireless networks
- Creator
- Alahmadi, Ahmed S.
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
"The Internet of Things (IoT), which networks versatile devices for information exchange, remote sensing, monitoring and control, is finding promising applications in nearly every field. However, due to its high density and enormous spectrum requirement, the practical development of IoT technology seems to be not available until the release of the large millimeter wave (mmWave) band (30GHz-300GHz). Compared to existing lower band systems (such as 3G, 4G), mmWave band signals generally require...
Show more"The Internet of Things (IoT), which networks versatile devices for information exchange, remote sensing, monitoring and control, is finding promising applications in nearly every field. However, due to its high density and enormous spectrum requirement, the practical development of IoT technology seems to be not available until the release of the large millimeter wave (mmWave) band (30GHz-300GHz). Compared to existing lower band systems (such as 3G, 4G), mmWave band signals generally require line of sight (LOS) path and suffer from severe fading effects, leading to much smaller coverage area. For network design and management, this implies that: (i) MmWave band alone could not support the IoT networks, but has to be integrated with the existing lower band systems through secure and effective spectrum sharing, especially in the lower frequency bands; and (ii) The IoT networks will have very high density node distribution, which is a significant challenge in network design, especially with the scarce energy budget of IoT applications. Motivated by these observations, in this dissertation, we consider three problems: (1) How to achieve secure and effective spectrum sharing? (2) How to accommodate the energy limited IoT devices? (3) How to evaluate the Quality of Service (QoS) in the high density IoT networks? We aim to develop innovative techniques for the design, evaluation and management of future IoT networks under both benign and hostile environments. The main contributions of this dissertation are outlined as follows. First, we develop a secure and efficient spectrum sharing scheme in single-carrier wireless networks. Cognitive radio (CR) is a key enabling technology for spectrum sharing, where the unoccupied spectrum is identified for secondary users (SUs), without interfering with the primary user (PU). 
A serious security threat to CR networks is the primary user emulation attack (PUEA), in which a malicious user (MU) emulates the signal characteristics of the PU, thereby causing the SUs to erroneously identify the attacker as the PU. Here, we consider full-band PUEA detection and propose a reliable AES-assisted DTV scheme, where an AES-encrypted reference signal is generated at the DTV transmitter and used as the sync bits of the DTV data frames. For PU detection, we investigate the cross-correlation between the received sequence and the reference sequence. MU detection can be performed by investigating the auto-correlation of the received sequence. We further develop a secure and efficient spectrum sharing scheme in multi-carrier wireless networks. We consider sub-band malicious user detection and propose a secure AES-based DTV scheme, where the existing reference sequence used to generate the pilot symbols in the DVB-T2 frames is encrypted using the AES algorithm. The resulting sequence is exploited for accurate detection of the authorized PU and the MU. Second, we develop an energy efficient transmission scheme in CR networks using energy harvesting. We propose a transmission scheme for the SUs such that each SU can perform information reception and energy harvesting simultaneously. We perform sum-rate optimization for the SUs under PUEA. It is observed that the sum-rate of the SU network can be improved significantly with the energy harvesting technique. Potentially, the proposed scheme can be applied directly to energy-constrained IoT networks. Finally, we investigate QoS performance analysis methodologies, which can provide insightful feedback to IoT network design and planning. Taking the spatial randomness of the IoT network into consideration, we investigate coverage probability (CP) and blocking probability (BP) in relay-assisted OFDMA networks using stochastic geometry. 
More specifically, we model the inter-cell interference from the neighboring cells at each typical node, and derive the CP in the downlink transmissions. Based on their data rate requirements, we classify the incoming users into different classes, and calculate the BP using the multi-dimensional loss model."--Pages ii-iii.
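The keyed-reference detection idea in the abstract above (an encrypted sequence only the legitimate transmitter and detector can regenerate, with cross-correlation used to detect the PU) can be sketched as follows. To keep the example dependency-free, a SHA-256-based keyed pseudorandom sequence stands in for the actual AES-encrypted sync bits; the key strings, sequence length, and thresholds are all invented for illustration.

```python
import hashlib
import numpy as np

def keyed_reference(key: bytes, n_bits: int) -> np.ndarray:
    """Pseudorandom +/-1 reference sequence derived from a shared secret key.
    (Hash-based stand-in for the AES-encrypted sync bits; illustrative only.)"""
    bits, counter = [], 0
    while len(bits) < n_bits:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        bits.extend((byte >> i) & 1 for byte in block for i in range(8))
        counter += 1
    return np.array(bits[:n_bits]) * 2 - 1

def normalized_correlation(rx, ref):
    """Cross-correlation of the received samples with the local reference."""
    return float(np.dot(rx, ref)) / len(ref)

key = b"shared-secret"                      # hypothetical shared key
ref = keyed_reference(key, 4096)
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, ref.size)

rx_pu = ref + noise                                      # genuine PU transmission
rx_mu = keyed_reference(b"wrong-key", 4096) + noise      # emulator lacks the key
```

The genuine PU yields a normalized correlation near 1, while the emulated signal correlates near 0, so a simple threshold separates the two; this mirrors the detection logic, not the actual DTV/DVB-T2 frame processing.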
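The stochastic-geometry coverage analysis mentioned above can be approximated numerically: drop interferers from a Poisson point process, apply Rayleigh fading and power-law path loss, and count how often the typical user's SINR clears a threshold. This Monte Carlo sketch is not the dissertation's analytical derivation, and every parameter (densities, path-loss exponent, serving distance, noise power) is hypothetical.

```python
import numpy as np

def coverage_probability(density=1e-4, d_serving=100.0, radius=2000.0,
                         alpha=3.5, threshold_db=0.0, noise=1e-12,
                         trials=2000, seed=1):
    """Monte Carlo estimate of downlink coverage probability for a typical user
    at distance d_serving from its base station, with interferers drawn from a
    Poisson point process of the given density (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    threshold = 10 ** (threshold_db / 10)
    annulus_area = np.pi * (radius ** 2 - d_serving ** 2)
    covered = 0
    for _ in range(trials):
        n = rng.poisson(density * annulus_area)
        # Interferers lie farther out than the serving base station,
        # uniformly distributed over an annulus around the typical user.
        r = np.sqrt(rng.uniform(d_serving ** 2, radius ** 2, n))
        interference = np.sum(rng.exponential(1.0, n) * r ** (-alpha))
        signal = rng.exponential(1.0) * d_serving ** (-alpha)   # Rayleigh fading
        covered += signal / (interference + noise) >= threshold
    return covered / trials
```

As expected, coverage probability drops sharply as interferer density grows; this kind of estimate is what the analytical CP expressions in such work are validated against.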