Search results
(1 - 20 of 27)
Pages
- Title
- Advances in oscillometric blood pressure measurement
- Creator
- Chandrasekhar, Anand
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
High blood pressure (BP) is a major cardiovascular risk factor that is treatable, yet hypertension awareness and control rates are low. Ubiquitous BP monitoring technology could improve hypertension management, but existing devices require an inflatable cuff and are not compatible with such anytime, anywhere measurement of BP. Oscillometry is the BP measurement principle of most automatic cuff devices. We extended the oscillometric principle to develop two instruments for cuff-less BP measurement: a smartphone-based device and a standalone iPhone application. As the user presses her/his finger against the smartphone, the external pressure of the underlying artery is steadily increased while the phone measures the applied pressure and the resulting variable-amplitude blood volume oscillations. A smartphone application provides visual feedback to guide the amount of pressure applied over time via the finger pressing and computes systolic and diastolic BP from the measurements. We prospectively tested the smartphone-based device for real-time BP monitoring in human subjects to evaluate usability (n = 30) and accuracy against a standard automatic cuff-based device (n = 32). We likewise tested a finger cuff device, which uses the volume-clamp method of BP detection. About 90% of the users learned the finger actuation required by the smartphone-based device after one or two practice trials. The device yielded bias and precision errors of 3.3 and 8.8 mmHg for systolic BP and -5.6 and 7.7 mmHg for diastolic BP over a 40 to 50 mmHg range of BP. These errors were comparable to the finger cuff device. Cuff-less and calibration-free monitoring of systolic and diastolic BP may be feasible via a smartphone. In addition, we tested the iPhone application. The application yielded bias and precision errors of -4.0 and 11.4 mmHg for systolic BP and -9.4 and 9.7 mmHg for diastolic BP (n = 18). These errors were near the finger cuff device errors. This proof-of-concept study surprisingly indicates that cuff-less and calibration-free BP monitoring may be feasible with many existing and forthcoming smartphones.
These devices use empirical algorithms, already described in the literature, to estimate blood pressure. Hence, the next objective was to establish formulas to explain three popular empirical algorithms: the maximum amplitude, derivative, and fixed ratio algorithms. A mathematical model of the oscillogram was developed and analyzed to derive parametric formulas explaining each algorithm. Exemplary parameter values were obtained by fitting the model to measured oscillograms. The model and formulas were validated by showing that their predictions correspond to measurements. The formula for the maximum amplitude algorithm indicates that it yields a weighted average of systolic and diastolic BP (0.45 and 0.55 weighting) instead of the commonly assumed mean BP. The formulas for the derivative algorithm indicate that it can accurately estimate systolic and diastolic BP (<1.5 mmHg error) if oscillogram measurement noise can be obviated. The formulas for the fixed ratio algorithm indicate that it can yield inaccurate BP estimates, because the ratios change substantially (over a 0.5-0.6 range) with arterial compliance and pulse pressure, and error in the assumed ratio translates to BP error via large amplification (>40). The established formulas allow for easy and complete interpretation of perhaps the three most popular oscillometric BP estimation algorithms in the literature while providing new insights. The model and formulas may also be of some value toward improving the accuracy of automatic cuff BP measurement devices.
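The fixed ratio algorithm discussed in this abstract can be sketched in a few lines: find the cuff pressure at the peak of the oscillation-amplitude envelope, then read off systolic and diastolic pressures where the envelope decays to fixed fractions of that peak. This is a minimal illustration, not the thesis's implementation; the default ratio values (0.55 systolic, 0.75 diastolic) are my own illustrative assumptions, and the abstract's point is precisely that such fixed ratios drift with arterial compliance and pulse pressure.

```python
import numpy as np

def fixed_ratio_bp(cuff_pressure, osc_amplitude, sys_ratio=0.55, dia_ratio=0.75):
    """Sketch of the fixed ratio oscillometric algorithm.

    cuff_pressure: ascending external pressures (mmHg)
    osc_amplitude: oscillation amplitude envelope at each pressure
    The ratio defaults are illustrative, not taken from the thesis.
    """
    cuff_pressure = np.asarray(cuff_pressure, dtype=float)
    osc_amplitude = np.asarray(osc_amplitude, dtype=float)
    i_max = int(np.argmax(osc_amplitude))
    a_max = osc_amplitude[i_max]
    map_est = cuff_pressure[i_max]  # peak amplitude taken as mean BP
    # Systolic: first pressure above the peak where the envelope
    # has decayed to sys_ratio * max
    hi = np.nonzero(osc_amplitude[i_max:] <= sys_ratio * a_max)[0]
    sbp = cuff_pressure[i_max + hi[0]] if hi.size else cuff_pressure[-1]
    # Diastolic: last pressure below the peak where the envelope
    # is still at or below dia_ratio * max
    lo = np.nonzero(osc_amplitude[:i_max] <= dia_ratio * a_max)[0]
    dbp = cuff_pressure[lo[-1]] if lo.size else cuff_pressure[0]
    return sbp, dbp, map_est
```

Any error in the assumed ratios moves the crossing points along the envelope, which is how a small ratio error is amplified into a large BP error.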
- Title
- Integration of planning, design, and construction to train 21st century urban professionals
- Creator
- Dalton, Robert
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
The built environment professions are struggling as budgets decrease while scope and importance increase. Attempting to save money, clients are turning to multidisciplinary offices for all-in-one service. Higher education can respond to these shifting trends by preparing students to adopt a growth mindset and an openness to ideas constructed by a team rather than by an individual. Integrative learning may foster such minds. Integrative learning concerns the building of cognitive connections from one skill or piece of knowledge to the next. This study found cultural areas shared among professions as well as those distinct to one profession. These cultural attributes group into four categories: axiology, epistemology, methodology, and ontology. All professions rate learning (epistemology) the required skills best while they work in offices, rather than during their time in higher education. Methodologies include the tasks accomplished to plan, design, and build a project and the tools used to do so. Each profession brings its own contributions to problem solving and uses varied software to accomplish its means. These contributions are highly related to the corresponding values (axiology), though mean ratings indicate a high value for a task even if it is not one's own. The study concludes by assessing the products (ontology) that may be created by the professions most likely to work together. The teams coming together most often represent the professions of the exterior spaces, building and interior spaces, and the legal and real estate professions. Employers and educators alike may use this information to understand the differences among the professional cultures and how bridging these divides, or allowing gaps to remain, can impact project delivery.
- Title
- Enhancing item pool utilization when designing multistage computerized adaptive tests
- Creator
- Yang, Lihong
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
In recent years, the multistage adaptive test (MST) has gained increasing popularity in the field of educational measurement and operational testing. MST refers to a test in which pre-constructed sets of items are administered adaptively and are scored as a unit (Hendrickson, 2007). As a special case of computerized adaptive testing (CAT), an MST program needs the following components: an item response theory (IRT) model or non-IRT-based alternatives; an item pool design; module assembly; ability estimation; a routing algorithm; and scoring (Yan et al., 2014). A significant amount of research has been conducted on components like module assembly, ability estimation, routing, and scoring, but few studies have addressed the component of item pool design. An item pool is defined as consisting of a maximal number of combinations of items that meet all content specifications for a test and provide sufficient item information for estimation at a series of ability levels (van der Linden et al., 2006). An item pool design is very important because any successful MST assembly is inseparable from an optimal item pool that provides sufficient and high-quality items (Luecht & Nungester, 1998). Reckase (2003, 2010) developed the p-optimality method to design optimal item pools using the unidimensional Rasch model in CAT, and it has proved efficient for different item types and IRT models. The present study extended this method to the MST context to support and develop different MST panel designs under different test configurations. The study compared the performance of MSTs assembled under the most popularly studied panel designs in the literature, such as 1-2, 1-3, 1-2-2, and 1-2-3. A combination of short, medium, and long tests with different routing test proportions was used to build up different tests. Using one of the most widely investigated IRT models, the Rasch model, simulated optimal item pools were generated with and without the practical constraint of exposure control. A total of 72 optimal item pools were generated, and measurement accuracy was evaluated for an overall sample and a conditional sample using various statistical measures. The p-optimality method was also applied to an operational MST licensure test to see whether it is feasible for supporting test assembly and achieving sufficient measurement accuracy in practice. Results showed that the different MST panel designs achieved sufficient measurement accuracy using the items from the optimal item pools built with the p-optimality method. The same was true with the operational item pool. Measurement accuracy was related to test length, but not so much to the routing test proportions. Exposure control affected the item pool size, but the distributions of the item parameters and item pool characteristics for all the MST panel designs were similar under the two conditions. The item pool sizes under the exposure control conditions were several times larger than those under no exposure control, depending on the types of MST panel designs and routing test proportions. The results from this study provide information on how to enhance item pool utilization when designing multistage computerized adaptive tests, facilitating the MST assembly process and improving scoring accuracy.
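The panel designs named in this abstract (1-2, 1-3, 1-2-2, ...) all share the same mechanic: a routing module is scored, and the examinee is sent to an easier or harder second-stage module. A toy sketch of that mechanic under the Rasch model, with a made-up number-correct cut score (the thesis's actual routing rules and cut scores are not specified here):

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model,
    for ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def route(num_correct, n_items, cut=0.5):
    """Number-correct routing for a 1-2 panel: send the examinee to
    the harder second-stage module when the proportion correct on the
    routing module exceeds the cut score (cut=0.5 is illustrative)."""
    return "hard" if num_correct / n_items > cut else "easy"
```

A 1-2-3 panel simply repeats this step with a second routing decision among three third-stage modules.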
- Title
- Design and engineering of novel starch-based foam and film products
- Creator
- Nabar, Yogaraj Umesh
- Date
- 2004
- Collection
- Electronic Theses & Dissertations
- Title
- System identification and control design for internal combustion engine variable valve timing systems
- Creator
- Ren, Zhen
- Date
- 2011
- Collection
- Electronic Theses & Dissertations
- Description
-
Variable valve timing (VVT) systems are used on internal combustion engines so that they can meet stringent emission requirements, reduce fuel consumption, and increase output. VVT also plays a critical role in allowing the engine to transition smoothly between spark ignition (SI) and homogeneous charge compression ignition (HCCI) combustion modes. To achieve these performance benefits and the SI/HCCI transition, the VVT system must be controlled accurately using a model-based controller. This work studies hydraulic and electric VVT system modeling and controller design. The VVT system consists of electric, mechanical, and fluid dynamics components. Without knowledge of every component, obtaining physics-based models is not feasible. In this research, the VVT system models were obtained using system identification methods. Limited by the sample rate of the crank-based camshaft position sensor, a function of engine speed, the actuator control sample rate differs from that of the cam position sensor, so multi-rate system identification is a necessity for this application. It is also difficult to maintain the desired actuator operating condition with open-loop control; therefore, system identification in a closed loop is required. In this study, pseudo-random binary sequence (PRBS) q-Markov Cover identification is used to obtain the closed-loop model. The open-loop system model is calculated based on knowledge of the closed-loop controller and the identified closed-loop system model. Both open- and closed-loop identifications are performed in a hardware-in-the-loop (HIL) simulation environment with a given reference model as a validation process. A hydraulic VVT actuator system test bench and an engine dynamometer (dyno) are used to conduct the proposed multi-rate system identification using PRBS excitation signals. Output covariance constraint (OCC) controllers were designed based upon the identified models. Performance of the designed OCC controller was compared with that of a baseline proportional-integral (PI) controller. Results show that the OCC controller uses less control effort and has less overshoot than the PI controller. An electric VVT (EVVT) system with a planetary gear system and a local speed controller was modeled based on system dynamics. Simulation results of the EVVT system model provided a controller framework for the bench test. The EVVT system test bench was modified from the hydraulic VVT bench. Multi-rate closed-loop system identification was conducted on the EVVT system bench, and a model-based OCC controller was designed. The bench test results show that the OCC controller has lower phase delay and lower overshoot than a tuned proportional controller, while having the same or faster response time. It is also observed that engine oil viscosity has a profound impact on the EVVT response time; the maximum response speed saturates at a slow level if the viscosity is too high. From the bench and dyno tests, it is concluded that multi-rate closed-loop identification is a very effective way to obtain controller-design-oriented VVT models. It is possible to use an OCC controller to achieve lower energy consumption, lower overshoot, and better tracking compared to PI and proportional controllers on both hydraulic and electric VVT systems.
- Title
- Targeting metabolic vulnerabilities in breast cancer subtypes
- Creator
- Ogrodzinski, Martin Peter
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
-
Breast cancer is a highly prevalent and deadly disease. Globally, it is the most diagnosed cancer in women and is responsible for the most cancer-related deaths among women. Breast cancer is also a remarkably heterogeneous disease, with clear variability in clinical parameters including histological presentation, receptor status, and gene expression patterns that differ between patients. A significant amount of effort has been spent characterizing breast cancer into subtypes, with the main goal of improving patient outcomes by: 1) designing targeted therapies, and 2) improving our ability to determine patient prognosis. While scientists have made significant strides in meeting these goals, we still lack targeted therapies for some subtypes of breast cancer, and current therapies often fail to provide a lasting cure. Thus, additional research is needed to improve patient care. One promising area in breast cancer research is cancer metabolism. Using metabolism as a therapeutic target is rapidly gaining traction, as it is now widely appreciated that cancer cells exhibit significant differences in metabolism compared to normal cells. The primary goal of this dissertation is to study the metabolism of distinct subtypes of breast cancer and identify metabolic vulnerabilities that can be used to effectively treat each subtype. This thesis begins with a review of current classification strategies for breast cancer subtypes and knowledge regarding subtype-specific metabolism. It also considers modern techniques for targeting breast cancer metabolism for therapeutic benefit. Breast cancer heterogeneity and metabolism are investigated using cell lines and tumors derived from the MMTV-Myc mouse model, which mimics the complexity observed in human disease. Cell lines derived from two histologically defined subtypes, epithelial-mesenchymal transition (EMT) and papillary, are used to establish clear metabolic profiles for each subtype. Metabolic vulnerabilities are identified in glutathione biosynthesis and the tricarboxylic acid cycle in the EMT subtype, and nucleotide biosynthesis is determined to be a metabolic weakness in the papillary subtype. It is further shown that pharmacologically targeting each of these metabolic pathways has the greatest effect on reducing proliferation when used against the vulnerable subtype. These in vitro findings are then expanded upon by integrating genomic and metabolomic data acquired from in vivo tumors. In vivo experiments reveal that EMT and papillary tumors prefer parallel pathways to generate nucleotides, with the EMT subtype preferring to salvage nucleotides while the papillary subtype prefers to produce nucleotides de novo. CRISPR/Cas9 gene editing is used to functionally characterize the metabolic effects of targeting nucleotide salvage and de novo biosynthesis in the EMT and papillary subtypes, and it is determined that targeting the preferred pathway of each subtype is most effective at slowing tumor growth. Overall, this work demonstrates the power of using metabolism as a therapeutic target in breast cancer and further shows that metabolic vulnerabilities specific to individual subtypes can be used effectively to guide personalized medicine.
- Title
- Analysis and design of reliable and stable link-layer protocols for wireless communication
- Creator
- Soltani, Sohraab
- Date
- 2009
- Collection
- Electronic Theses & Dissertations
- Title
- On the evolution of mutation bias in digital organisms
- Creator
- Rupp, Matthew
- Date
- 2011
- Collection
- Electronic Theses & Dissertations
- Description
-
Mutation is one of the primary drivers of genetic change. In this work I study mutation biases, which are sets of different genetic-state inflow probabilities. Mutation biases have the potential to change the composition of genomes over time, leading to divergent short- and long-term evolutionary outcomes. I use digital organisms, self-replicating computer programs, to explore whether or not mutation biases are capable of altering the long-term adaptive behavior of populations; whether mutation biases can be competitive traits; and whether mutation biases can evolve. I find that mutation biases can alter the long-term adaptive behavior of mutation bias-obligate populations in terms of both mean fitness and complex trait evolution. I also find that mutation biases can compete against one another under a variety of conditions, meaning mutation bias can be selectable over relatively short periods of time. The competitive success of a mutation bias does not always depend upon the presence of beneficial mutations, implicating an increase in the probability of neutral mutations as a sufficient mechanism for bias selection. Finally, I demonstrate that by giving organisms a mutable mutation bias allele, populations preferentially evolve to possess specific biases over others. Overall, this work shows that mutation bias can act as a selectable trait, influencing the evolution of populations with regard to both their internal genetic and external environments.
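A "mutation bias" in the sense defined above (a set of genetic-state inflow probabilities) is straightforward to express in code. This is a generic illustration of the concept, not the API of any particular digital-evolution platform; all names here are hypothetical.

```python
import random

def mutate(genome, alphabet, bias, rate=0.01, rng=None):
    """Apply point mutations where each replacement state is drawn
    from a biased inflow distribution over the alphabet.

    genome:   list of genetic states
    alphabet: the possible states
    bias:     relative inflow probability for each alphabet entry
              (the 'mutation bias'); uniform bias = no bias
    rate:     per-site mutation probability
    """
    rng = rng or random.Random()
    return [rng.choices(alphabet, weights=bias)[0] if rng.random() < rate else g
            for g in genome]
```

Two populations with different `bias` vectors but the same `rate` sample different regions of genotype space, which is what makes the bias itself something selection can act on.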
- Title
- Diagnostic tools for improving the amount of adaptation in adaptive tests using overall and conditional indices of adaptation
- Creator
- Ju, Unhee
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
In recent years, computerized adaptive testing (CAT) has been widely used in educational and clinical settings. The basic idea of CAT is relatively straightforward: a computer is used to administer items tailored to individuals to maximize the measurement precision of their proficiency estimates. However, the administration of CAT is not so simple. Those who administer CATs must, while trying to optimize an item selection criterion, consider a variety of practical issues such as test security, content balancing, the purpose of testing, and other test specifications. Such extraneous factors make it possible that a CAT might have so many constraints that in practice it is barely adaptive at all. This concern is at the forefront of the current study, which poses two key questions: How adaptive is a highly adaptive test, really? How can the level of adaptation be improved? This study aims to develop three new statistical indicators that measure the amount of adaptation conditional on the examinees' proficiency levels in CAT. It also aims to evaluate the feasibility and utility of these adaptation measures in helping to diagnose and improve the adaptivity that occurs during CAT administration. Extending work done by Reckase, Ju, and Kim (2018), the proposed measures are based on three components: the differences in location between the selected items and the examinee's current proficiency estimates, the variation in the item locations administered to each examinee, and the magnitude of information that the test presents to each examinee. Hence, they can be used to assess adaptivity during the CAT process, as well as to identify differences in the level of adaptation for individuals or subgroups of examinees. To demonstrate the performance of the proposed adaptation indices, this study conducted analyses of real operational testing data from a healthcare licensure examination, as well as comprehensive simulation studies under various conditions that affect adaptivity in a CAT. The key findings suggest that the proposed adaptation indices function as intended, sensitively detecting the magnitude of adaptivity for a CAT over the proficiency continuum. These new measures shed light on how much adaptation of a given test occurs across individual proficiency levels or subpopulations. With the guidelines for interpreting these measures recommended in this study, the adaptation indices can also readily serve as diagnostic tools in practice, helping test practitioners design item pools and adaptive tests that support high adaptivity.
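The three components named in this abstract (item-location vs. proficiency-estimate differences, spread of administered item locations, and information delivered) each reduce to a simple per-examinee summary. The sketch below shows one plausible way to compute such summaries under the Rasch model; the function and key names are mine, not the thesis's actual indices.

```python
import math

def rasch_info(theta, b):
    """Fisher information of a Rasch item at ability theta: p(1-p)."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def adaptation_summary(item_locs, theta_path):
    """Illustrative per-examinee summaries of the three components:
    item_locs:  difficulty of each administered item, in order
    theta_path: the interim proficiency estimate when each was selected
    """
    n = len(item_locs)
    mean_b = sum(item_locs) / n
    return {
        # (1) how closely item locations tracked the interim estimates
        "mean_abs_diff": sum(abs(b - t) for b, t in zip(item_locs, theta_path)) / n,
        # (2) how much the administered item locations varied
        "item_loc_var": sum((b - mean_b) ** 2 for b in item_locs) / n,
        # (3) how much information the test delivered to this examinee
        "total_info": sum(rasch_info(t, b) for b, t in zip(item_locs, theta_path)),
    }
```

A perfectly adaptive Rasch test would drive `mean_abs_diff` toward zero while each item contributes the maximum information of 0.25.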
- Title
- Rate allocation and QoS support in wireless mesh networks
- Creator
- Wang, Bo
- Date
- 2009
- Collection
- Electronic Theses & Dissertations
- Title
- Theoretical and numerical study of swirling flow separation devices for oil-water mixtures
- Creator
- Motin, Abdul
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
Oil-water separation is a critical aspect of produced water treatment, oil spill cleanup, and the refining of petroleum products. Hydrocyclones are commonly used in these operations. A hydrocyclone is a device that separates two phases based on the centrifugal forces acting on them. Conventional hydrocyclones possess a finite turndown ratio and are effective for removing droplets greater than approximately ten microns. Understanding the hydrodynamic phenomena that limit the turndown ratio is crucial for improving hydrocyclone performance and finding a device that is reliable, efficient, and able to decrease the environmental footprint of oil and gas production. In this work, a quantitative understanding of the turndown ratio of an individual class of hydrocyclone has been developed. A computational search is applied to redesigning the geometry of different modules of the hydrocyclone. In addition, the desirable attributes of a crossflow filter and a vortex separator are combined into one unit to develop a crossflow filtration hydrocyclone (CFFH) for enhanced separation. The hydrodynamic characteristics of the single- and multiphase flows encountered in hydrocyclones, the trajectories of dispersed droplets, the interaction of phases involving breakup and coalescence of dispersed droplets, and the geometry and operating principles that characterize hydrocyclone performance are investigated with computational fluid dynamics (CFD) simulations using Eulerian-Lagrangian, Eulerian-Eulerian, and coupled CFD-PBM (population balance method) approaches. Results show that the finite turndown ratio in conventional hydrocyclones is a hydrodynamic effect that depends on the length of the reverse flow core. Tailoring the hydrocyclone geometry with a hyperbolic swirl chamber and a new underflow outlet geometry significantly increases the separation efficiency and improves the turndown. Based on a parametric study, a novel hydrocyclone design is proposed that achieves the desired separation efficiency in a single unit operation and possesses a large turndown. CFD studies were also performed on CFFH devices and showed that the swirl can aid in removing droplets from the membrane/filter surface. The novel hydrocyclone provides a stable reverse flow core for an increased range of feed Reynolds numbers and yields less energy loss. As the feed Reynolds number increases, the novel hydrocyclone gradually decreases the cut size (the droplet size having 50% separation efficiency); this does not occur in a conventional hydrocyclone. For a feed Reynolds number of 60,000, the cut size in the novel hydrocyclone is less than 10 microns, whereas the conventional hydrocyclone has a cut size of 65 microns and is ineffective for droplets smaller than 10 microns.
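The cut size quoted in this abstract is read off a grade-efficiency curve: the droplet diameter at which separation efficiency crosses 50%. A minimal sketch of that lookup by linear interpolation (the thesis computes efficiencies from CFD droplet tracking; the curve here is just an input):

```python
def cut_size(diameters, efficiencies):
    """Interpolate the droplet diameter with 50% separation efficiency
    from a grade-efficiency curve.

    diameters:    ascending droplet sizes (e.g. microns)
    efficiencies: separation efficiency (0..1) at each size, non-decreasing
    """
    points = list(zip(diameters, efficiencies))
    for (d0, e0), (d1, e1) in zip(points, points[1:]):
        if e0 <= 0.5 <= e1:
            if e1 == e0:
                return d0  # flat segment sitting exactly at 50%
            # linear interpolation within the bracketing segment
            return d0 + (0.5 - e0) * (d1 - d0) / (e1 - e0)
    raise ValueError("efficiency curve never crosses 50%")
```

A design change that shifts the whole curve leftward (higher efficiency at smaller droplets) lowers the cut size, which is the improvement claimed for the novel geometry.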
- Title
- Reliable 5G system design and networking
- Creator
- Liang, Yuan (Graduate of Michigan State University)
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
The upcoming fifth generation (5G) system is expected to support a variety of different devices and applications, such as ultra-reliable and low latency communications, Internet of Things (IoT) and mobile cloud computing. Reliable and effective communications lie in the core of the 5G system design. This dissertation is focused on the design and evaluation of robust 5G systems under both benign and malicious environments, with considerations on both the physical layer and higher layers. For...
Show moreThe upcoming fifth generation (5G) system is expected to support a variety of different devices and applications, such as ultra-reliable and low latency communications, Internet of Things (IoT) and mobile cloud computing. Reliable and effective communications lie in the core of the 5G system design. This dissertation is focused on the design and evaluation of robust 5G systems under both benign and malicious environments, with considerations on both the physical layer and higher layers. For the physical layer, we study secure and efficient 5G transceiver under hostile jamming. We propose a securely precoded OFDM (SP-OFDM) system for efficient and reliable transmission under disguised jamming, a serious threat to 5G, where the jammer intentionally confuses the receiver by mimicking the characteristics of the authorized signal, and causes complete communication failure. We bring off a dynamic constellation by introducing secure randomness between the legitimate transmitter and receiver, and hence break the symmetricity between the authorized signal and the disguised jamming. It is shown that due to the secure randomness shared between the authorized transmitter and receiver, SP-OFDM can achieve a positive channel capacity under disguised jamming. The robustness of the proposed SP-OFDM scheme under disguised jamming is demonstrated through both theoretic and numerical analyses. We further address the problem of finding the worst jamming distribution in terms of channel capacity for the SP-OFDM system. We consider a practical communication scenario, where the transmitting symbols are uniformly distributed over a discrete and finite alphabet, and the jamming interference is subject to an average power constraint, but may or may not have a peak power constraint. Using tools in functional analysis and complex analysis, first, we prove the existence and uniqueness of the worst jamming distribution. 
Second, by analyzing the Kuhn-Tucker conditions for the worst jamming, we prove that the worst jamming distribution is discrete in amplitude with a finite number of mass points. For the higher layers, we start with the modeling of 5G high-density heterogeneous networks. We investigate the effect of relay randomness on the end-to-end throughput in multi-hop wireless networks using stochastic geometry. We model the nodes as Poisson Point Processes and calculate the spatial average of the throughput over all potential geometrical patterns of the nodes. More specifically, for problem tractability, we first consider the simple nearest neighbor (NN) routing protocol, and analyze the end-to-end throughput so as to obtain a performance benchmark. Next, note that the ideal equal-distance routing is generally not realizable due to the randomness in relay distribution, we propose a quasi-equal-distance (QED) routing protocol. We derive the range for the optimal hop distance, and analyze the end-to-end throughput both with and without intra-route resource reuse. It is shown that the proposed QED routing protocol achieves a significant performance gain over NN routing. Finally, we consider the malicious link detection in multi-hop wireless sensor networks (WSNs), which is an important application of 5G multi-hop wireless networks. Existing work on malicious link detection generally requires that the detection process being performed at the intermediate nodes, leading to considerable overhead in system design, as well as unstable detection accuracy due to limited resources and the uncertainty in the loyalty of the intermediate nodes themselves. We propose an efficient and robust malicious link detection scheme by exploiting the statistics of packet delivery rates only at the base stations. 
More specifically, we first present a secure packet transmission protocol to ensure that no intermediate node on the route, other than the base stations, can access the contents or routing paths of the packets. Second, we design a malicious link detection algorithm that can effectively detect irregular packet dropout at every hop (or link) along the routing path, with a guaranteed false alarm rate and a low missed detection rate.
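The symmetry-breaking idea at the heart of SP-OFDM can be illustrated with a toy sketch. This is illustrative only, not the dissertation's actual precoder: a pseudo-random phase sequence standing in for the shared secret key de-rotates the authorized symbols exactly, while a disguised jammer drawing from the same QPSK constellation is left randomly rotated.

```python
import cmath
import random

# Toy sketch of "secure randomness" breaking signal/jamming symmetry.
# All names and parameters here are illustrative assumptions.
QPSK = [cmath.exp(1j * (cmath.pi / 4 + k * cmath.pi / 2)) for k in range(4)]

key = random.Random(42)       # stands in for the key shared by Tx and Rx
data_rng = random.Random(1)
jam_rng = random.Random(2)

n = 8
symbols = [data_rng.choice(QPSK) for _ in range(n)]
phases = [cmath.exp(1j * 2 * cmath.pi * key.random()) for _ in range(n)]

tx = [s * p for s, p in zip(symbols, phases)]        # secure precoding
jamming = [jam_rng.choice(QPSK) for _ in range(n)]   # mimics the constellation

rx = [t * p.conjugate() for t, p in zip(tx, phases)]       # keyed de-rotation
rx_jam = [j * p.conjugate() for j, p in zip(jamming, phases)]

# The authorized symbols are recovered exactly; the jammer, lacking the key,
# ends up at random phases rather than on the constellation.
assert all(abs(r - s) < 1e-9 for r, s in zip(rx, symbols))
```

The keyed rotation is why the receiver can still tell signal from jamming even when both use identical constellations.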
- Title
- Efficient and secure system design in wireless communications
- Creator
- Song, Tianlong
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
Efficient and secure information transmission lies at the core of wireless system design and networking. Compared with its wired counterpart, wireless communication requires the total available spectrum to be shared by different services. Moreover, wireless transmission is more vulnerable to unauthorized detection, eavesdropping, and hostile jamming due to the lack of a protective physical boundary. Today, the two most representative highly efficient communication systems are CDMA (used in 3G) and OFDM (used in 4G), with OFDM regarded as the more efficient of the two. This dissertation focuses on two topics: (1) exploring more spectrally efficient system designs based on the 4G OFDM scheme; and (2) investigating robust wireless system design and conducting capacity analysis under different jamming scenarios. The main results are outlined as follows. First, we develop two spectrally efficient OFDM-based multi-carrier transmission schemes: one with message-driven idle subcarriers (MC-MDIS) and the other with message-driven strengthened subcarriers (MC-MDSS). The basic idea in MC-MDIS is to carry part of the information, named carrier bits, through idle-subcarrier selection while transmitting the ordinary bits regularly on all the other subcarriers. When the number of subcarriers is much larger than the adopted constellation size, higher spectral and power efficiency can be achieved compared with OFDM. In MC-MDSS, the idle subcarriers are replaced by strengthened ones, which, unlike idle ones, can carry both carrier bits and ordinary bits. Therefore, MC-MDSS achieves even higher spectral efficiency than MC-MDIS. Second, we consider jamming-resistant OFDM system design under full-band disguised jamming, where the jamming symbols are taken from the same constellation as the information symbols on each subcarrier.
It is shown that, due to the symmetry between the authorized signal and the jamming, the BER of the traditional OFDM system is lower-bounded by a modulation-specific constant. We develop an optimal precoding scheme that minimizes the BER of OFDM systems under full-band disguised jamming. It is shown that the most efficient way to combat full-band disguised jamming is to concentrate the total available power on a particular number of subcarriers, distributed uniformly among them, rather than spreading it over the entire spectrum. The precoding scheme is further randomized to reinforce the system's jamming resistance. Third, we consider jamming mitigation for CDMA systems under disguised jamming, where the jammer generates a fake signal using the same spreading code, constellation, and pulse-shaping filter as the authorized signal. Again, due to the symmetry between the authorized signal and the jamming, the receiver cannot reliably distinguish the authorized signal from the jamming, leading to complete communication failure. In this research, instead of using conventional scrambling codes, we apply the Advanced Encryption Standard (AES) to generate security-enhanced scrambling codes. Theoretical analysis shows that the capacity of conventional CDMA systems without secure scrambling under disguised jamming is actually zero, while the capacity can be significantly increased by secure scrambling. Finally, we consider a game between a power-limited authorized user and a power-limited jammer, who operate independently over the same spectrum consisting of multiple bands. The strategic decision-making is modeled as a two-party zero-sum game, where the payoff function is the capacity achievable by the authorized user in the presence of the jammer. We first investigate the game under AWGN channels.
It is found that, both for the authorized user to maximize its capacity and for the jammer to minimize the capacity of the authorized user, the best strategy is to distribute the power uniformly over all the available spectrum. Then we consider fading channels. We characterize the dynamic relationship between the optimal signal power allocation and the optimal jamming power allocation, and propose an efficient two-step water-pouring algorithm to compute them.
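The per-band allocation problem described above can be sketched with classic single-user water-filling (a textbook baseline, not the dissertation's two-step water-pouring algorithm): given per-band noise-plus-jamming levels n_k and a power budget P, allocate p_k = max(0, mu - n_k), with the water level mu chosen so the allocations sum to P.

```python
def water_fill(noise, total_power, iters=60):
    """Bisect on the water level mu so that sum(max(0, mu - n_k)) == P."""
    lo, hi = 0.0, max(noise) + total_power
    for _ in range(iters):
        mu = (lo + hi) / 2
        if sum(max(0.0, mu - n) for n in noise) > total_power:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2
    return [max(0.0, mu - n) for n in noise]

# Identical (AWGN-like) bands: the optimum is uniform power, consistent
# with the abstract's conclusion for the AWGN game.
alloc = water_fill([1.0, 1.0, 1.0, 1.0], 4.0)

# Unequal bands: quieter bands receive more power.
skewed = water_fill([0.5, 1.0, 2.0], 3.0)
```

The bisection converges geometrically, so 60 iterations pin the water level far below any practical tolerance.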
- Title
- Compact, low-power microelectronic instrumentation for wearable electrochemical sensor arrays in health hazard monitoring
- Creator
- Li, Haitao
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
Biological and chemical hazards threaten human health and are of growing worldwide concern. Wearable sensors offer the potential to monitor the local exposure of individual users while enabling deployment on a global scale. However, achieving this goal is challenged by the lack of autonomous, high-performance sensors with the power and size features required for wearable implementation. Wearable sensors need sensing techniques that deliver high performance in power, sensitivity, and selectivity for biological and chemical hazards within a small volume. Autonomous operation of wearable sensors demands electronics that intelligently analyze, store, and transmit data and generate alerts within strict power and size constraints. Electrochemical sensors have many characteristics that meet these challenging performance requirements. However, conventional electrochemical instrumentation circuits are too heavy, bulky, and expensive, and consume too much power, for wearable applications. Modern complementary metal–oxide–semiconductor (CMOS) technology provides an ultra-small, low-cost, low-power, high-performance solution for wearable sensors. This dissertation investigates CMOS circuit design for wearable electrochemical sensor arrays in health hazard monitoring. Multiple electrochemical modes provide orthogonal data to sensor array algorithms to improve sensitivity and selectivity. A unique multi-mode, resource-sharing instrumentation circuit was developed that integrates amperometric and impedance sensing and shares electronic components among recording channels, reducing size, cost, and power. A wearable sensor array can measure multiple hazardous targets over a wide range of concentrations. To address the wide dynamic range of such a sensor array, a new CMOS amperometric circuit that combines digital modulation of input currents with a semi-synchronous incremental Σ∆ ADC was developed.
The new circuit simultaneously achieves a combination of wide dynamic range (164 dB), high sensitivity (100 fA), high power efficiency (241 μW), and compact size (50 readout channels on a 3×3 mm^2 chip) that is not available in any existing instrumentation circuit. While the circuits above address key challenges in gas sensing, electrochemical biosensors pose a different set of challenges. In particular, miniaturized biosensors based on nanopore interfaces, including ion-channel proteins, have great potential for high-throughput biological study and wearable biosensing. However, they require electrochemical instrumentation circuits that are compact, low power, highly sensitive, and high bandwidth. To address this need, a shared-segment interleaved amperometric readout circuit was developed; measurement results show that it outperforms other known current-sensing circuits for the same biological targets in both power and area. This circuit achieves 7.2 pArms noise in an 11.5 kHz bandwidth over a 90 nA bidirectional input current range with only 21 μW power consumption, while allowing over 400 channels to be integrated on a single chip. The combined results of this research overcome many challenges in developing wearable electrochemical sensor arrays for health hazard monitoring applications.
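The quoted figures can be related by a back-of-envelope check (my arithmetic, not from the dissertation, and assuming the conventional 20·log10 amplitude definition of dynamic range for currents): a 164 dB range above a 100 fA floor implies a full-scale input current in the tens of microamps.

```python
import math

def dynamic_range_db(i_max, i_min):
    """Dynamic range of a current-input readout, 20*log10 convention."""
    return 20 * math.log10(i_max / i_min)

i_min = 100e-15              # 100 fA sensitivity floor
ratio = 10 ** (164 / 20)     # 164 dB expressed as a current ratio (~1.6e8)
i_max = i_min * ratio        # implied full-scale current, roughly 16 uA
```

If the dissertation instead uses the 10·log10 power convention, the implied full-scale current would differ; the sketch only shows how the three numbers interlock.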
- Title
- Ring pack behavior and oil consumption modeling in IC engines
- Creator
- Ejakov, Mikhail Aleksandrovich
- Date
- 1998
- Collection
- Electronic Theses & Dissertations
- Title
- Cognitive and motivational impacts of learning game design on middle school children
- Creator
- Akcaoglu, Mete
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
-
In today's complex and fast-evolving world, problem solving is an important skill to possess. For young children to be successful in their future careers, they need to have the skill and the will to solve complex problems that go beyond the well-defined problems they learn to solve at school. One promising approach to teaching complex problem-solving skills is the use of visual programming and game design software. Theoretically and anecdotally, extant research has enlightened us about the cognitive and motivational potential of such software. Due to a lack of empirical evidence, however, we are far from knowing whether these claims are warranted. In this quasi-experimental study, I investigated the cognitive (i.e., problem solving) and motivational (i.e., interest and value) impacts of participating in the Game Design and Learning (GDL) courses on middle school children (n = 49), who designed games following a curriculum based on problem-solving skills. Compared to students in a control group (n = 24), students who attended the GDL courses showed significantly higher gains in general and specific (i.e., system analysis and design, decision-making, troubleshooting) problem-solving skills. Because the survey data seriously violated the statistical assumptions underlying the analyses, I could not further study the motivational impacts of the GDL courses. Nevertheless, the GDL intervention bears implications for educators and theory.
- Title
- Design and simulation of a microwave powered microplasma system for local area materials processing
- Creator
- Narendra, Jeffri Julliarsa
- Date
- 2010
- Collection
- Electronic Theses & Dissertations
- Description
-
A microwave powered microplasma source is developed and tested for materials processing on spatially localized areas. A small diameter stream of plasma (less than 2 mm in diameter) is created by focusing microwave energy inside a discharge tube. The discharge then flows out the end of the tube onto the surface being processed delivering ions and reactive radicals. The diameter of the plasma stream from the tube to the material being processed can be controlled by an aperture mounted at the...
A microwave-powered microplasma source is developed and tested for materials processing on spatially localized areas. A small-diameter stream of plasma (less than 2 mm in diameter) is created by focusing microwave energy inside a discharge tube. The discharge then flows out the end of the tube onto the surface being processed, delivering ions and reactive radicals. The diameter of the plasma stream from the tube to the material being processed can be controlled by an aperture mounted at the end of the tube. The spot size of the localized plasma stream ranges from 2 mm down to tens of micrometers, depending on the aperture size. The discharge is created using 2.45 GHz microwave energy coupled into the discharge by a small foreshortened cylindrical cavity that has a hollow inner conductor and a small capacitive gap at the end of the cavity. A processing gas mixture is fed through a 2 mm inner-diameter quartz tube located inside the hollow inner conductor of the cavity. This tube is exposed to a high electric field at the small-gap end of the cavity, thus generating a surface-wave plasma. The length of the surface-wave discharge in the tube can be extended by increasing the microwave power to the discharge so that the plasma reaches the aperture. The operating pressures range from 0.5 Torr to 100 Torr, and the microwave power utilized ranges from a few watts to tens of watts. Several properties of the discharge, including plasma power density, electron density, and electron temperature, are measured. The power densities of argon and Ar/O2 plasma discharges vary from tens to over 450 W/cm^3. The plasma density and electron temperature of argon discharges are measured using a double Langmuir probe placed in the materials processing area. The plasma densities are in the range of 10^11 - 10^13 cm^-3. Computational modeling of the plasma discharge and its microwave excitation is performed using finite element analysis.
The goal of the modeling study is to complement and inform the design, development, and operation of the microwave-powered microplasmas. A self-consistent model of the foreshortened cylindrical cavity and plasma discharge is presented, with results compared to experimental measurements. The microplasma system is incorporated into a micromanufacturing system that integrates the plasma source with an atomic force microscope for surface measurement and nanomanipulation. Selected applications of the micromachining system are demonstrated, including use of the microplasma as a spatially localized etcher, free-radical source, and ultraviolet light source. Silicon and ultrananocrystalline diamond (UNCD) etching is performed using Ar/SF6 and Ar/O2 discharges, respectively, with etch rates of 0.2 - 2 μm/min and 0.6 - 2 μm/hr. Localized removal of photoresist is achieved by using the microplasma as a free-radical source, and photoresist is exposed to ultraviolet light from the microplasma source to create spatially localized patterns.
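The reported power densities can be connected to the tube geometry with a rough illustrative calculation (the absorbed power and column length below are assumptions for the sketch, not measured values from the dissertation): power density is simply absorbed power divided by the cylindrical discharge volume.

```python
import math

def power_density_w_per_cm3(power_w, radius_cm, length_cm):
    """Absorbed-power density P / (pi * r^2 * L) for a cylindrical column."""
    return power_w / (math.pi * radius_cm**2 * length_cm)

# Illustrative: 10 W absorbed over a 2 cm discharge column
# in a 2 mm inner-diameter (0.1 cm radius) tube.
d = power_density_w_per_cm3(10.0, 0.1, 2.0)   # ~160 W/cm^3
```

A value of this order sits comfortably within the tens-to-450 W/cm^3 range the abstract reports, which is why such small tubes sustain dense discharges at only a few watts.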
- Title
- A formal approach to providing assurance to dynamically adaptive software
- Creator
- Zhang, Ji
- Date
- 2007
- Collection
- Electronic Theses & Dissertations
- Title
- Investigation of planar terahertz passive devices and coupling methods for on-wafer applications
- Creator
- Myers, Joshua Carl
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
In recent years, developments have pushed the cut-off frequencies of transistors near 1 THz, enabling for the first time the design of large bandwidth transmit/receive modules. While there has been a significant interest in the research community to implement these devices, many challenges have slowed such progress. Primarily, these challenges stem from the high dielectric and metal losses many materials display in the THz spectrum. However, to implement wafer-level integrated circuits in the...
In recent years, developments have pushed the cut-off frequencies of transistors near 1 THz, enabling for the first time the design of large-bandwidth transmit/receive modules. While there has been significant interest in the research community in implementing these devices, many challenges have slowed progress, stemming primarily from the high dielectric and metal losses many materials exhibit in the THz spectrum. To implement wafer-level integrated circuits in the THz spectrum, efficient, integration-compatible passive devices must be developed. For any integrated system, the most important passive building blocks reduce to efficient waveguiding, filtering, and coupling between the active components, the necessary measurement systems, and the input sources. In this dissertation, efficient passive terahertz components, including waveguides, filters, and input couplers, are developed. First, a method of efficiently coupling THz radiation between commercial quasi-optical THz systems and integration-compatible THz components is introduced. The primary method developed is the use of high-density polyethylene focusing probes, which can be easily fabricated to be compatible with commercial THz systems. The efficiency of the probes is then investigated when they are used with a simple silicon-based dielectric waveguide. Next, dielectric ridge waveguides made of silicon are investigated for low-loss THz wave propagation. A theoretical effective index method is applied to determine the modal propagation properties of the waveguides as well as the attenuation of the structures. FEM simulation is also carried out to verify these results. Various ridge waveguides fabricated on silicon wafers are investigated through measurement and shown to provide low-loss waveguiding in the THz spectrum. The focus then shifts to the design of thin-film, integration-compatible THz filters.
These filters are designed with multi-objective evolutionary algorithms coupled with FEM modeling. Bandwidth, stopband characteristics, multi-resonance behavior, and other properties of the filters are developed and improved through optimization. The filters are measured using a commercial THz system and shown to match the optimized expectations well. Finally, another waveguiding structure is introduced, built from thin-metal periodic structures on thin-film substrates. These structures efficiently guide THz waves along the surface of the textured metal. With these structures, other passive THz circuits, such as power splitters and sensors, are also developed. The waveguiding structures, as well as the power splitter, are measured in conjunction with the dielectric focusing probes developed previously and shown to provide high transmission at specific design frequencies. Throughout this dissertation, efficient waveguides, filters, and coupling methods are introduced. These methods are compatible with current semiconductor fabrication techniques, enabling device realization directly on-wafer. In addition, all of the passive devices developed are simple to fabricate and low-cost. The work presented in this dissertation thus realizes the passive building blocks needed for active on-wafer THz circuits.
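The effective index method mentioned above reduces a ridge guide to a sequence of slab problems. A hedged sketch of that textbook slab step follows: for a symmetric slab, the fundamental TE mode satisfies u·tan(u) = sqrt(V^2 - u^2), solved here by bisection; the material and geometry values are illustrative silicon-like numbers, not the dissertation's designs.

```python
import math

def slab_neff_te0(n_core, n_clad, thickness, wavelength):
    """Effective index of the TE0 mode of a symmetric slab waveguide."""
    k0 = 2 * math.pi / wavelength
    half = thickness / 2
    v = k0 * half * math.sqrt(n_core**2 - n_clad**2)   # normalized frequency
    lo, hi = 1e-9, min(v, math.pi / 2 - 1e-9)          # TE0 root in (0, pi/2)
    for _ in range(80):                                 # bisection on u
        u = (lo + hi) / 2
        f = u * math.tan(u) - math.sqrt(max(v * v - u * u, 0.0))
        if f > 0:
            hi = u
        else:
            lo = u
    u = (lo + hi) / 2
    return math.sqrt(n_core**2 - (u / (k0 * half))**2)

# Silicon-like core in air at a 300 um (1 THz) wavelength, 50 um slab.
neff = slab_neff_te0(n_core=3.42, n_clad=1.0,
                     thickness=50e-6, wavelength=300e-6)
# Guided mode: n_clad < neff < n_core.
```

The full effective index method would apply this solver twice (vertically, then horizontally with the intermediate indices) to approximate the ridge mode.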
- Title
- The application and evaluation of a pilot study on the effect of a self-instructional unit concerning basic design principles for selected non-art majors
- Creator
- Yoder, Walter Donald, 1933-
- Date
- 1970
- Collection
- Electronic Theses & Dissertations