Search results
(1 - 20 of 70)
- Title
- Monte-Carlo simulations of the (d,²He) reaction in inverse kinematics
- Creator
- Carls, Alexander B.
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
Charge-exchange reactions offer an indirect method for testing theoretical models of Gamow-Teller strengths, which are used to calculate electron-capture rates on medium-heavy nuclei; these rates play important roles in astrophysical phenomena. Many of the relevant nuclei are unstable, yet a good general probe for performing (n,p)-type charge-exchange reactions in inverse kinematics has not yet been established. The (d,²He) reaction in inverse kinematics is being developed as a potential candidate for this probe. The method uses the Active-Target Time Projection Chamber (AT-TPC) to detect the two protons from the unbound ²He system, and the S800 spectrograph to detect the heavy recoil. Its feasibility is demonstrated through Monte-Carlo simulations. The ATTPCROOTv2 code is the framework that allows simulation of reactions within the AT-TPC as well as digitization of the results in the pad planes, producing realistic simulated data. The analysis performed on these data with ATTPCROOTv2 demonstrates the techniques that can be applied in experiment to track the scattered protons through the detector using Random Sample Consensus (RANSAC) algorithms.
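The RANSAC tracking mentioned above can be illustrated with a minimal two-dimensional sketch: repeatedly sample a pair of points, hypothesize a line, and keep the hypothesis with the most inliers. This is a generic sketch of the algorithm family, not the ATTPCROOTv2 implementation (which fits tracks in the 3D AT-TPC point cloud); the threshold and iteration count are illustrative assumptions.

```python
import numpy as np

def ransac_line(points, n_iters=200, threshold=0.05, seed=0):
    """Fit a 2D line to points with outliers via RANSAC, then refine
    with least squares on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        # hypothesis from a minimal sample: two distinct points
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(*d)
        if norm == 0:
            continue
        # perpendicular distance of every point to the candidate line
        dist = np.abs(d[0] * (points[:, 1] - p[1])
                      - d[1] * (points[:, 0] - p[0])) / norm
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares fit on the consensus set only
    x, y = points[best_inliers].T
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept, best_inliers
```

Because gross outliers never enter the final fit, the recovered line is insensitive to them, which is the property that makes RANSAC attractive for picking proton tracks out of noisy detector hits.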
- Title
- Design and simulation of single-crystal diamond diodes for high voltage, high power and high temperature applications
- Creator
- Suwanmonkha, Nutthamon
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
Diamond has exceptional properties and great potential for making high-power semiconducting electronic devices that surpass the capabilities of other common semiconductors, including silicon. Its superior properties include a wide bandgap, high thermal conductivity, a large electric breakdown field, and fast carrier mobilities. All of these properties are crucial for a semiconductor used in electronic devices that operate at high power levels, high voltage, and high temperature.

Two-dimensional semiconductor device simulation software such as Medici assists engineers in designing device structures that meet the performance requirements of device applications. Most physical material parameters of the well-known semiconductors are already compiled and embedded in Medici; diamond, however, is not among them. Material parameters for diamond, including models for incomplete ionization, temperature- and impurity-dependent mobility, and impact ionization, are not readily available in software such as Medici. In this work, models and data for diamond semiconductor material have been developed for Medici based on results from the research literature and on experimental work at Michigan State University.

After equipping Medici with diamond material parameters, simulations of various diamond diodes, including Schottky, PN-junction, and merged Schottky/PN-junction structures, are reported. Diodes are simulated versus changes in doping concentration, drift layer thickness, and operating temperature. The diode performance metrics studied include the breakdown voltage, turn-on voltage, and specific on-resistance; the goal is to find designs that yield low power loss and provide high voltage-blocking capability. Simulation results are presented that provide insight for the design of diamond diodes using the various diode structures. Results are also reported on the use of field-plate structures in the simulations to control the electric field and increase the breakdown voltage.
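A back-of-the-envelope version of the breakdown-voltage trade-off that such simulations quantify can be written down from the one-dimensional abrupt-junction model; the critical-field and doping values below are illustrative assumptions, not results from the thesis.

```python
# One-dimensional non-punch-through breakdown estimate for a diamond
# drift layer (illustrative textbook model; real designs come from
# 2D device simulation as in the thesis).
EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 5.7        # relative permittivity of diamond
Q = 1.602e-19      # elementary charge, C
E_CRIT = 1.0e9     # assumed critical field for diamond, V/m (~10 MV/cm)

def breakdown_voltage(n_d_cm3):
    """V_br = eps * E_c^2 / (2 q N_d) for an abrupt one-sided junction."""
    n_d = n_d_cm3 * 1e6   # cm^-3 -> m^-3
    return EPS0 * EPS_R * E_CRIT**2 / (2 * Q * n_d)

def depletion_width(n_d_cm3):
    """Drift thickness W = 2 V_br / E_c needed to support that voltage."""
    return 2 * breakdown_voltage(n_d_cm3) / E_CRIT

# Lower doping blocks more voltage but needs a thicker, more resistive layer.
for n in (1e15, 1e16, 1e17):
    print(f"Nd={n:.0e} cm^-3: Vbr ~ {breakdown_voltage(n)/1e3:.1f} kV, "
          f"W ~ {depletion_width(n)*1e6:.1f} um")
```

The inverse dependence on doping is exactly the tension between breakdown voltage and specific on-resistance that the simulated design sweeps explore.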
- Title
- Unconstrained 3D face reconstruction from photo collections
- Creator
- Roth, Joseph (Software engineer)
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
This thesis presents a novel approach for 3D face reconstruction from unconstrained photo collections. An unconstrained photo collection is a set of face images captured under unknown and diverse variations of pose, expression, and illumination. The output of the proposed algorithm is a true 3D face surface model represented as a watertight triangulated surface with albedo data, colloquially referred to as texture information. Reconstructing a 3D understanding of a face from 2D input is a long-standing computer vision problem. Traditional photometric stereo-based reconstruction techniques work on aligned 2D images and produce a 2.5D depth-map reconstruction. We extend face reconstruction to work with a true 3D model, allowing us to enjoy the benefits of using images from all poses, up to and including profiles. To use a 3D model, we propose a novel normal field-based Laplace editing technique that deforms a triangulated mesh to match the observed surface normals. Unlike prior work that requires large photo collections, we formulate an approach that adapts to photo collections with few images of potentially poor quality. We achieve this by incorporating prior knowledge about face shape: a 3D Morphable Model is fitted to form a personalized template before a novel analysis-by-synthesis photometric stereo formulation completes the fine face details. A structural similarity-based quality measure allows evaluation in the absence of ground-truth 3D scans. Superior large-scale experimental results are reported on Internet, synthetic, and personal photo collections.
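The photometric-stereo building block the thesis extends can be sketched compactly: for a Lambertian surface seen under several known distant lights, the albedo-scaled normal at each pixel is the least-squares solution of a small linear system. This is the classical aligned-2D formulation the abstract contrasts with, under idealized assumptions (no shadows or specularities):

```python
import numpy as np

def photometric_stereo(intensities, lights):
    """Recover albedo-scaled normals from k aligned images of the same
    surface under k known distant lights, assuming a Lambertian model
    I = L @ (albedo * n).  intensities: (k, npix); lights: (k, 3)."""
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)  # (3, npix)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.clip(albedo, 1e-12, None)  # unit normals
    return normals.T, albedo                    # (npix, 3), (npix,)
```

With three or more non-coplanar lights the system is overdetermined per pixel, which is what makes the per-pixel solve well posed; the thesis's contribution is to push beyond this 2.5D setting to a full 3D mesh.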
- Title
- Parallel computation models : representation, analysis and applications
- Creator
- Sun, Xian-He
- Date
- 1990
- Collection
- Electronic Theses & Dissertations
- Title
- The dynamics and scientific visualization for the electrophoretic deposition processing of suspended colloidal particles onto a reinforcement fiber
- Creator
- Robinson, Peter Timothy
- Date
- 1993
- Collection
- Electronic Theses & Dissertations
- Title
- A multiport approach to modeling and solving large-scale dynamic systems
- Creator
- Wang, Yanying
- Date
- 1992
- Collection
- Electronic Theses & Dissertations
- Title
- Diverse platform modeling of dynamical systems
- Creator
- Mitchell, Robert Alex
- Date
- 1991
- Collection
- Electronic Theses & Dissertations
- Title
- Reliability improvement of DFIG-based wind energy conversion systems by real time control
- Creator
- Elhmoud, Lina Adnan Abdullah
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
Reliability is the probability that a system or component will satisfactorily perform its intended function under given operating conditions. The average time of satisfactory operation of a system is called the mean time between failures (MTBF); a higher MTBF indicates higher reliability, and vice versa. Nowadays, reliability is of greater concern than in the past, especially for offshore wind turbines, since access to these installations in case of failure is both costly and difficult. Power semiconductor devices are often ranked as the most vulnerable components of a power conversion system from a reliability perspective. The lifetime prediction of power modules based on a mission profile is therefore an important issue, and lifetime modeling of future large wind turbines is needed in order to make reliability predictions in the early design phase. By conducting reliability prediction in the design phase, a manufacturer can ensure that new wind turbines will operate within designed reliability metrics such as lifetime.

This work presents reliability analysis of power electronic converters for wind energy conversion systems (WECS) based on semiconductor power losses. A real-time control scheme is proposed to maximize the system's lifetime and the accumulated energy produced over that lifetime. It has been verified through the reliability model that a low-pass-filter-based control can effectively increase the MTBF and lifetime of the power modules; the fundamental reason is the reduction in the number of thermal cycles.

The key element in a power conversion system is the power semiconductor device, which operates as a power switch. Improvements in power semiconductor devices are the critical driving force behind the improved performance, efficiency, and reduced size and weight of power conversion systems. As power density and switching frequency increase, thermal analysis of the power electronic system becomes imperative; the analysis provides information on semiconductor device rating, reliability, and lifetime calculation. The power throughput of a state-of-the-art WECS equipped with maximum-power-point control algorithms is subject to wind-speed fluctuations, which may cause significant thermal cycling of the IGBTs in the power converter and in turn reduce their lifetime. To address this reliability issue, a real-time control scheme based on the reliability model of the system is proposed. In this work a doubly fed induction generator is used as a demonstration system to prove the effectiveness of the proposed method. An average model of the three-phase converter has been adopted for thermal modeling and lifetime estimation. A low-pass-filter-based control law is used to modify the power command from the conventional WECS control output. The resulting reliability performance of the system is significantly improved, as evidenced by the simulation results.
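The low-pass-filter-based control law amounts to smoothing the power command so that wind gusts no longer translate directly into junction-temperature cycles. A first-order discrete filter captures the idea; the time constant and step size below are illustrative assumptions, not values from the dissertation.

```python
def low_pass_power_command(p_ref, dt=0.1, tau=30.0):
    """First-order low-pass filter dy/dt = (u - y) / tau, discretized.
    p_ref: raw power commands from the MPPT controller (W).
    tau:   filter time constant (s); larger tau means fewer and smaller
           thermal cycles at the cost of tracking the maximum power
           point less aggressively."""
    alpha = dt / (tau + dt)
    y = p_ref[0]
    out = []
    for u in p_ref:
        y += alpha * (u - y)   # exponential smoothing step
        out.append(y)
    return out
```

Choosing tau is the lifetime-versus-energy trade-off the work studies: the filter sacrifices some captured energy during gusts in exchange for fewer IGBT thermal cycles and thus a longer predicted lifetime.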
- Title
- Predictive control of a hybrid powertrain
- Creator
- Yang, Jie
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
The powertrain supervisory control strategy plays an important role in the overall performance of hybrid electric vehicles (HEVs), especially for fuel economy improvement. Supervisory control includes power distribution, driver demand fulfillment, battery boundary management, fuel economy optimization, emission reduction, etc. Developing an optimal control strategy is quite a challenge due to the high degrees of freedom introduced by the multiple power sources in a hybrid powertrain. This dissertation focuses on driving-torque prediction, battery boundary management, and fuel economy optimization.

For a hybrid powertrain, when the desired torque (driver torque demand) is outside the battery's operational limits, the internal combustion (IC) engine needs to be turned on to deliver additional power (torque) to the powertrain. But the slow response of the IC engine, compared with electric motors (EMs), prevents it from providing power (torque) immediately. As a result, before the engine power is ready, the battery has to be over-discharged to provide the desired powertrain power (torque). This dissertation presents an adaptive recursive prediction algorithm that predicts the future desired torque based on past and current vehicle pedal positions. The recursive nature of the prediction algorithm reduces the computational load significantly and makes it feasible for real-time implementation. Two weighting coefficients are introduced to weight newly sampled data more heavily and to avoid numerical singularity. This improves the prediction accuracy greatly and allows the prediction algorithm to adapt to different driver behaviors and driving conditions.

Based on the online-predicted desired torque and its error variance, a stochastic predictive boundary management strategy is proposed. The smallest upper bound of the future desired torque for a given confidence level is obtained from the predicted desired torque and the prediction error variance, and it is used to determine whether the engine needs to be proactively turned on. That is, the engine can be ready to provide power for the "future" when the actual power (torque) demand exceeds the battery output limits. Correspondingly, the battery over-discharging duration can be greatly reduced, leading to extended battery life and improved HEV performance.

To optimize powertrain fuel economy, a model predictive control (MPC) strategy is developed based on the linear quadratic tracking (LQT) approach. The finite-horizon LQT control is based on a discrete-time system model obtained by linearizing the nonlinear HEV model, and only the first step of the solution is applied at each control step; the process is then repeated. The effectiveness of the supervisory control strategy is studied and validated in simulations under typical driving cycles based on a forward power-split HEV model. The developed MPC-LQT control scheme tracks the predicted desired-torque trajectory over the prediction horizon, minimizes powertrain fuel consumption, maintains the battery state of charge at the desired level, and operates the battery within its designed boundary.
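The adaptive recursive predictor described above is in the spirit of recursive least squares with a forgetting factor: each new pedal sample updates the model in O(n²) time, and the forgetting factor weights recently sampled data more heavily. The sketch below is a generic RLS implementation under that interpretation, not the dissertation's exact two-coefficient algorithm.

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with forgetting factor lam (0 < lam <= 1).
    Fits y ~ w @ x online; smaller lam forgets old driving data faster,
    letting the predictor adapt to changing driver behavior."""
    def __init__(self, order=3, lam=0.98):
        self.w = np.zeros(order)
        self.P = np.eye(order) * 1e3   # large initial covariance
        self.lam = lam

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, y):
        # standard RLS gain and covariance recursion
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)
        self.w += k * (y - self.w @ x)
        self.P = (self.P - np.outer(k, Px)) / self.lam
```

No matrix inversion appears anywhere in the update, which is the property that makes this class of predictor feasible on automotive control hardware.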
- Title
- Shelf life estimation of USP 10mg Prednisone calibrator tablets in relation to dissolution & new windows-based shelf life computer program
- Creator
- Yoon, Seungyil
- Date
- 2000
- Collection
- Electronic Theses & Dissertations
- Title
- Mechanisms of adaptation and speciation : an experimental study using artificial life
- Creator
- Anderson, Carlos Jesus
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
-
Detailed experimental studies in evolutionary biology are sometimes difficult, even with model organisms. Theoretical models alleviate some of these difficulties and often provide clean results, but they cannot always capture the complexity of dynamic evolutionary processes. Artificial life systems are tools that fall somewhere between model organisms and theoretical models and have been used successfully to study evolutionary biology. These systems simulate simple organisms that replicate, acquire random mutations, and reproduce differentially; as a consequence, they evolve naturally (i.e., evolution itself is not simulated). Here I use the software Avida to study several open questions on the genetic mechanisms of adaptation and speciation.

In Chapter 1 (p. 13), I investigated whether beneficial alleles during adaptation came from new mutations or from standing genetic variation, i.e., alleles already present in the population. I found that most beneficial alleles came from standing genetic variation, but new mutations were necessary for long-term evolution. I also found that adaptation from standing genetic variation was faster than from new mutations, and that recombination brought together beneficial combinations of alleles from standing genetic variation.

In Chapter 2 (p. 31), I investigated the probability of compensatory adaptation versus reversion. Compensatory adaptation is the fixation of mutations that ameliorate the effects of deleterious mutations while the original deleterious mutations remain fixed. I found that compensatory adaptation was very common, but that the window of opportunity for reversion increased when the initial fitness of the population was high, the population size was large, and the mutation rate was high. The window of opportunity for reversion was constrained because negative epistatic interactions with compensatory mutations prevented the revertant from being beneficial to the population.

In Chapter 3 (p. 58), I showed experimentally that compensatory adaptation can lead to reproductive isolation (specifically, postzygotic isolation). In addition, I found that the strength of this isolation was independent of the effect size of the original deleterious mutations, and that deleterious and compensatory mutations contribute equally to reproductive isolation.

Reproductive isolation between populations often evolves as a byproduct of independent adaptation to new environments, but the selective pressures of these environments may be divergent ('ecological speciation') or uniform ('mutation-order speciation'). In Chapter 4 (p. 75), I compared directly the strength of postzygotic isolation generated by ecological and mutation-order processes, with and without migration. I found that ecological speciation generally produced stronger isolation than mutation-order speciation, and that mutation-order speciation was more sensitive to migration.

Under the Dobzhansky-Muller model of speciation, hybrid inviability or sterility results from the evolution of genetic incompatibilities (DMIs) between species-specific alleles. This model predicts that the number of pairwise DMIs between species should increase quadratically through time, but the few tests of this 'snowball effect' have had conflicting results. In Chapter 5 (p. 101), I show that pairwise DMIs accumulated quadratically, supporting the snowball effect. I found that more complex genetic interactions involved alleles that rescued pairwise incompatibilities, explaining the discrepancy between the expected accumulation of DMIs and observation.
- Title
- The evolution of digital communities under limited resources
- Creator
- Walker, Bess Linden
- Date
- 2012
- Collection
- Electronic Theses & Dissertations
- Description
-
Schluter (1996) describes adaptive radiation as "the diversification of a lineage into species that exploit a variety of different resource types and that differ in the morphological or physiological traits used to exploit those resources". My research focuses on adaptive radiation in the context of limited resources, where frequency-dependence is an important driver of selection (Futuyma & Moreno, 1988; Dieckmann & Doebeli, 1999; Friesen et al., 2004). Adaptive radiation yields a community composed of distinct organism types adapted to specific niches.

I study simple communities of digital organisms, the result of adaptive radiation in environments with limited resources. I ask (and address) the questions: How does diversity, driven by resource limitation, affect the frequency with which complex traits arise? What other aspects of the evolutionary pressures in this limited-resource environment might account for the increase in the frequency with which complex traits arise? Can we predict community stability when a community encounters another community, and is our prediction different for communities resulting from adaptive radiation versus those that are artificially assembled?

Community diversity is higher in environments with limited resources than in those with unlimited resources. The evolution of an example complex feature (in this case, Boolean EQU) is also more common in limited-resource environments, and shows a strong correlation with diversity over a range of resource inflow rates. I show that populations evolving at intermediate inflow rates explore areas of the fitness landscape in which EQU is common, and that those in unlimited-resource environments do not. Another feature of the limited-resource environments is the reduced cost of trading off the execution of building-block tasks for higher-complexity tasks. I find strong causal evidence that this reduced cost is a factor in the more common evolution of EQU in limited-resource environments.

When two communities meet in competition, the fraction of each community's descendants making up the final post-competition community is strongly consistent across replicates. I find that three community-level factors (ecotypic diversity, community composition, and resource-use efficiency) can be used to predict this fractional community success, explaining up to 35% of the variation.

In summary, I demonstrate the value of digital communities as a tractable experimental system for studying general community properties. They form a bridge between ecology, evolutionary biology, and evolutionary computation, and offer comprehensible ways to translate ideas across these fields.
- Title
- A global modeling framework for plasma kinetics : development and applications
- Creator
- Parsey, Guy Morland
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
The modern study of plasmas, and applications thereof, has developed synchronously with computer capabilities since the mid-1950s. Complexities inherent to these charged-particle, many-body systems have resulted in the development of multiple simulation methods (particle-in-cell, fluid, global modeling, etc.) in order both to explain observed phenomena and to predict the outcomes of plasma applications. Recognizing that different algorithms are chosen to best address specific topics of interest, this thesis centers on the development of an open-source global model framework for the focused study of non-equilibrium plasma kinetics. After verification and validation of the framework, it was used to study two physical phenomena: plasma-assisted combustion and the recently proposed optically pumped rare-gas metastable laser.

Global models permeate chemistry and plasma science, relying on spatial averaging to focus attention on the dynamics of reaction networks. Defined by a set of species continuity and energy conservation equations, the required data and constructed systems are conceptually similar across most applications, providing a light platform for exploratory and result-search parameter scanning. Unfortunately, it is common practice for custom code to be developed for each application, an enormous duplication of effort which negatively affects the quality of the software produced. Presented herein, the Python-based Kinetic Global Modeling framework (KGMf) was designed to support all modeling phases: collection and analysis of reaction data, construction of an exportable system of model ODEs, and a platform for interactive evaluation and post-processing analysis. A symbolic ODE system is constructed for interactive manipulation and generation of a Jacobian, both of which are compiled as operation-optimized C code.

Plasma-assisted combustion and ignition (PAC/PAI) embody the modernization of burning fuel by opening up new avenues of control and optimization. With applications ranging from engine efficiency and pollution control to stabilized operation of scramjet technology in hypersonic flows, developing an understanding of the underlying plasma chemistry is of the utmost importance. While the use of equilibrium (thermal) plasmas in the combustion process extends back to the advent of the spark-ignition engine, work from the last few decades has demonstrated fundamental differences between PAC and classical combustion theory. The KGMf is applied to nanosecond-discharge systems in order to analyze the effects of electron-energy-distribution assumptions on reaction kinetics and to highlight the usefulness of 0D modeling in systems defined by coupled and complex physics.

With fundamentally different principles involved, the concept of optically pumped rare-gas metastable lasing (RGL) presents a novel opportunity for scalable high-power lasers by taking advantage of similarities in the electronic structure of elements across the periodic table. Building on the proven concept of diode-pumped alkali vapor lasers (DPAL), RGL systems demonstrate remarkably similar spectral characteristics without the problems associated with heated caustic vapors. First introduced in 2012, numerical studies of the latent kinetics remain immature. This work couples an analytic model developed for DPAL with KGMf plasma chemistry to better understand the interaction of a non-equilibrium plasma with the induced laser processes and to determine whether optical pumping could be avoided through careful discharge selection.
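At its core, a 0D global model of the kind the KGMf assembles is a set of volume-averaged rate equations. The toy sketch below integrates a single electron-continuity equation with one ionization and one recombination channel, hand-coded with Euler stepping (the KGMf builds such systems symbolically, generates the Jacobian, and compiles both to C). The rate coefficients are illustrative constants, not data from the framework.

```python
def plasma_global_model(k_ion=1e-14, k_rec=1e-13, n_g=1e20, n_e0=1e14,
                        dt=1e-9, steps=200_000):
    """Toy 0D global model: electron-impact ionization (e + A -> 2e + A+)
    balanced by recombination (e + A+ -> A), quasi-neutral so n_i = n_e:

        dn_e/dt = k_ion * n_g * n_e  -  k_rec * n_e**2

    Densities in m^-3, rate coefficients in m^3/s (illustrative values;
    real coefficients come from cross sections folded with the electron
    energy distribution).  Returns n_e after `steps` Euler steps."""
    n_e = n_e0
    for _ in range(steps):
        n_e += dt * (k_ion * n_g * n_e - k_rec * n_e * n_e)
    return n_e
```

The analytic steady state is n_e* = k_ion · n_g / k_rec, which the integration should approach; a real network couples dozens of such equations, which is why automated symbolic construction and a compiled Jacobian pay off.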
- Title
- Data-driven and task-specific scoring functions for predicting ligand binding poses and affinity and for screening enrichment
- Creator
- Ashtawy, Hossam Mohamed Farg
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
Molecular modeling has become an essential tool to assist in early stages of drug discovery and development. Molecular docking, scoring, and virtual screening are three such modeling tasks of particular importance in computer-aided drug discovery. They are used to computationally simulate the interaction between small drug-like molecules, known as ligands, and a target protein whose activity is to be altered. Scoring functions (SF) are typically employed to predict the binding conformation ...
Show moreMolecular modeling has become an essential tool to assist in early stages of drug discovery and development. Molecular docking, scoring, and virtual screening are three such modeling tasks of particular importance in computer-aided drug discovery. They are used to computationally simulate the interaction between small drug-like molecules, known as ligands, and a target protein whose activity is to be altered. Scoring functions (SF) are typically employed to predict the binding conformation (docking task), binary activity label (screening task), and binding affinity (scoring task) of ligands against a critical protein in the disease's pathway. In most molecular docking software packages available today, a generic binding affinity-based (BA-based) SF is invoked for the three tasks to solve three different, but related, prediction problems. The vast majority of these predictive models are knowledge-based, empirical, or force-field scoring functions. The fourth family of SFs that has gained popularity recently and showed potential of improved accuracy is based on machine-learning (ML) approaches. Despite intense efforts in developing conventional and current ML SFs, their limited predictive accuracies in these three tasks have been a major roadblock toward cost-effective drug discovery. Therefore, in this work we present (i) novel task- specific and multi-task SFs employing large ensembles of deep neural networks (NN) and other state-of-the-art ML algorithms in conjunction with (ii) data-driven multi-perspective descriptors (features) for accurate characterization of protein-ligand complexes (PLCs) extracted using our Descriptor Data Bank (DDB) platform.We assess the docking, screening, scoring, and ranking accuracies of the proposed task-specific SFs with DDB descriptors as well as several conventional approaches in the context of the 2007 and 2014 PDBbind benchmark that encompasses a diverse set of high-quality PLCs. 
Our approaches substantially outperform conventional SFs based on BA and single-perspective descriptors in all tests. In terms of scoring accuracy, we find that the ensemble NN SFs, BsN-Score and BgN-Score, have more than 34% better correlation (0.844 and 0.840 vs. 0.627) between predicted and measured BAs compared to that achieved by X-Score, a top performing conventional SF. We further find that ensemble NN models surpass SFs based on other state-of-the-art ML algorithms. Similar results have been obtained for the ranking task. Within clusters of PLCs with different ligands bound to the same target protein, we find that the best ensemble NN SF is able to rank the ligands correctly 64.6% of the time compared to 57.8% obtained by X-Score. A substantial improvement in the docking task has also been achieved by our proposed docking-specific SFs. We find that the docking NN SF, BsN-Dock, has a success rate of 95% in identifying poses that are within 2 Å RMSD from the native poses of 65 different protein families. This is in comparison to a success rate of only 82% achieved by the best conventional SF, ChemPLP, employed in the commercial docking software GOLD. As for the ability to distinguish active molecules from inactives, our screening-specific SFs showed excellent improvements over the conventional approaches. The proposed SF BsN-Screen achieved a screening enrichment factor of 33.90 as opposed to 19.54 obtained from the best conventional SF, GlideScore, employed in the docking software Glide. For all tasks, we observed that the proposed task-specific SFs benefit more than their conventional counterparts from increases in the number of descriptors and training PLCs. They also perform better on novel proteins that they were never trained on before. In addition to the three task-specific SFs, we propose a novel multi-task deep neural network (MT-Net) that is trained on data from three tasks to simultaneously predict binding poses, affinities, and activity labels. 
MT-Net is composed of shared hidden layers for the three tasks to learn common features, task-specific hidden layers for higher feature representation, and three outputs for the three tasks. We show that the performance of MT-Net is superior to conventional SFs and competitive with other ML approaches. Based on current results and potential improvements, we believe our proposed ideas will have a transformative impact on the accuracy and outcomes of molecular docking and virtual screening.
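As an illustration (not taken from the thesis itself), the screening enrichment factor quoted above, e.g. 33.90 for BsN-Screen, is conventionally computed as the hit rate among the top-scored fraction of a library divided by the hit rate in the whole library. A minimal sketch, with illustrative function and variable names:

```python
def enrichment_factor(scores, labels, top_fraction=0.01):
    """Enrichment factor for virtual screening: how much denser known
    actives are in the top-scored fraction than in the full library.

    scores: predicted score per compound (higher = more likely active)
    labels: 1 for an active compound, 0 for an inactive one
    """
    ranked = sorted(zip(scores, labels), key=lambda pair: pair[0], reverse=True)
    n_top = max(1, int(len(ranked) * top_fraction))
    hits_top = sum(label for _, label in ranked[:n_top])
    hit_rate_top = hits_top / n_top
    hit_rate_all = sum(labels) / len(labels)
    return hit_rate_top / hit_rate_all


# Example: 2 of 5 compounds are active; both land in the top 40%.
ef = enrichment_factor([0.9, 0.8, 0.7, 0.2, 0.1], [1, 1, 0, 0, 0],
                       top_fraction=0.4)
```

Here the top 40% contains only actives (hit rate 1.0) versus a library-wide hit rate of 0.4, giving an enrichment factor of 2.5; the exact top fraction used in the thesis's evaluation is not stated in this abstract.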
- Title
- Adaptation to visual perturbations while learning a novel virtual reaching task
- Creator
- Narayanan, Sachin Devnathan
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
"Introduction and Purpose: The movements we perform in our day-to-day activities have always been riddled with perturbations, to which we adapt and learn. Studies examining this aspect of motor learning should consider the biomechanical differences that exist between individuals and create a novel task that can test every individual without bias. This was achieved in our study by using a virtual environment to perform a novel motor skill in order to investigate how people learn to adapt to perturbations. Methods: 13 college-age participants (females = 7, mean = 21.74 +/- 2.55) performed upper-body movements to control a computer cursor. Visual rotation of the cursor position was introduced as a perturbation for one half of the practice trials. Movement time and normalized path length were calculated. A one-way repeated-measures ANOVA was performed to test for significant differences in performance across phases of the task. Results: Significant learning was seen while learning the initial baseline task (p < 0.0001), and there was a significant drop in performance upon immediate exposure to the perturbation (p = 0.005). There was no significant adaptation over practice with the perturbation (p = 0.103) and no significant after-effects on removal of the perturbation (p = 0.383). Conclusions: Results suggest differences in adaptation when the task is novel compared to other adaptation studies, and that such novel tasks trigger a different type of learning mechanism than adaptation."--Page ii.
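The abstract does not give its exact formula for normalized path length, but a common definition for reaching tasks is the traveled path length divided by the straight-line distance between start and end, so a perfectly straight reach scores 1.0. An illustrative sketch under that assumption:

```python
import math


def normalized_path_length(points):
    """Normalized path length of a cursor trajectory.

    points: ordered (x, y) samples of the cursor position.
    Returns the traveled distance divided by the straight-line
    distance from the first to the last sample; 1.0 means a
    perfectly straight reach, larger values mean a more curved path.
    """
    traveled = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return traveled / direct


# A straight reach scores 1.0; a detour around three sides of a
# unit square scores 3.0.
straight = normalized_path_length([(0, 0), (1, 0), (2, 0)])
detour = normalized_path_length([(0, 0), (0, 1), (1, 1), (1, 0)])
```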
- Title
- Evolutionary dynamics of digitized organizational routines
- Creator
- Liu, Peng
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
-
This dissertation explores the effects of increased digitization on the evolutionary dynamics of organizational routines. Do routines become more flexible, or more rigid, as the mix of digital technologies and human actors changes? What are the mechanisms that govern the evolution of routines? The dissertation theorizes about the effects of increased digitization on path dependence and interdependence mechanisms, and therefore extends current theory on the evolutionary dynamics of organizational routines by taking into account the effects of three basic phenomena: digitization, path dependence, and interdependence. In this dissertation, I use computer-based simulation, grounded with data collected in field interviews, to model the evolution of routines. More specifically, this dissertation models routines as networks of action that are subject to an evolutionary process of random variation and selective retention. To assess the evolution of routines, I introduce the idea of an evolutionary trajectory, defined as the product of the magnitude of change and the direction of change in the networks of action. The dissertation also addresses a foundational issue in the literature on organizational routines. Routines are generally believed to remain stable due to path dependence. An alternative explanation is that routines may be stable due to interdependence among actions, which tends to constrain the sequence in which actions can occur. I have developed a simulation that allows me to test the relative importance of these factors, a question that has not previously been addressed. By addressing this fundamental issue, I provide a deeper, theory-driven explanation of the effects of digitization.
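The dissertation's exact operationalization of an evolutionary trajectory is not given in this abstract, but one illustrative reading of "magnitude and direction of change in networks of action" is to model a routine as a set of directed action-to-action edges and compare two snapshots. A hypothetical sketch under that assumption:

```python
def trajectory(before, after):
    """Compare two routine snapshots, each a set of directed
    (action, next_action) edges.

    magnitude: total number of edges added or dropped.
    direction: net change; positive when the routine grows
    (more action sequences possible), negative when it shrinks.
    """
    added = after - before
    dropped = before - after
    magnitude = len(added) + len(dropped)
    direction = len(added) - len(dropped)
    return magnitude, direction


# A routine gains two new action sequences and loses one:
# magnitude 3, net direction +1 (the routine is growing).
mag, direc = trajectory({("a", "b"), ("b", "c")},
                        {("a", "b"), ("b", "d"), ("d", "e")})
```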
- Title
- Orientation guided texture synthesis using PatchMatch
- Creator
- Dutka, Rosemary L.
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
-
Texture describes the unique structural patterns that we perceive in the world. Various surface geometric details such as animal fur, plant leaves, and carpets can be thought of as texture. In computer graphics, textures stored as images are ubiquitously used to decorate boundary surfaces of objects. There are multiple approaches to acquiring realistic and aesthetically pleasing textures. One of the most popular is a process known as texture synthesis, in which we produce seamless, nonrepetitive textures from a small patch of texture sample. In this thesis, we present an orientation-guided fast texture synthesis method based on an image editing tool, PatchMatch, which is included in Photoshop. Given an example image, our model adopts a hierarchical process to improve retention of structural texture features at multiple scales. We generalize PatchMatch by using orientation, indicated by a planar direction field, to guide the alignment of texture features in the creation of large texture patches. To demonstrate the effectiveness of our approach, we first apply our algorithm to designing new textures with two- and four-way symmetry, which can be extended to n-way symmetry, and then to enhancing latent fingerprints. Furthermore, our results show empirically that orientation-guided PatchMatch has the advantages of providing control over the density of singularities without knowing their exact locations and of reducing spurious singularities.
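One way to picture how a direction field can guide PatchMatch-style synthesis (an illustrative sketch, not the thesis's actual cost function) is to extend the usual sum-of-squared-differences patch distance with a penalty for misalignment between the orientation-field angles of the two patches. Because a planar direction field has no sign, the angular difference is folded so that opposite directions count as aligned:

```python
import math


def patch_distance(p, q, theta_p, theta_q, w=1.0):
    """Orientation-penalized distance between two equally sized patches.

    p, q: flattened pixel intensities of the two patches.
    theta_p, theta_q: direction-field angles (radians) at each patch.
    w: weight of the orientation term relative to appearance.
    """
    ssd = sum((a - b) ** 2 for a, b in zip(p, q))
    # Fold the angular difference into [0, pi/2] so that directions
    # pi apart (same undirected orientation) incur no penalty.
    dtheta = abs(theta_p - theta_q) % math.pi
    dtheta = min(dtheta, math.pi - dtheta)
    return ssd + w * dtheta ** 2


# Identical patches with identical orientation: distance 0.
# One differing pixel but opposite (i.e. equivalent) directions:
# only the appearance term contributes.
d0 = patch_distance([1.0, 2.0], [1.0, 2.0], 0.3, 0.3)
d1 = patch_distance([1.0, 2.0], [1.0, 3.0], 0.0, math.pi)
```

The nearest-neighbor-field search of PatchMatch itself (random initialization, propagation, random search) is unchanged; only the match cost is augmented.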
- Title
- Computational chemistry : investigations of protein-protein interactions and post-translational modifications to peptides
- Creator
- Jones, Michael R. (Graduate of Michigan State University)
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
Computational chemistry plays a vital role in understanding chemical and physical processes and has been useful in advancing the understanding of reactions in biology. Improper signaling of the nuclear factor-κB (NF-κB) pathway plays a critical role in many inflammatory disease states, including cancer, stroke, and viral infections. Aberrant regulation of this pathway happens upon the signal-induced degradation of the inhibitor of κB (IκB) proteins. The activation of IκB kinase (IKK) subunit β (IKKβ) or NF-κB Inducing Kinase (NIK) initiates this cascade of events. Understanding the structure-property relationships associated with IKKβ and NIK is essential for the development of prevention strategies. Although the signaling pathways are known, how the molecular mechanisms respond to changes in the intracellular microenvironment (i.e., pH, ionic strength, temperature) remains elusive. In this dissertation, computer simulation and modeling techniques were used to investigate two protein kinases complexed with either small-molecule activators or inhibitors in the active, inactive, and mutant states to correlate structure-property and structure-function relationships as a function of intracellular ionic strength. Additionally, radical-induced protein fragmentation pathways, resulting from reactions with reactive oxygen species, were investigated to yield insight into the thermodynamic preference of the fragmentation mechanisms. Analyses of the structure-activity and conformation-activity relationships indicate that protein-protein interactions and the binding of small molecules are sensitive to changes in ionic strength, and that several factors influence the selectivity of peptide backbone cleavage. As there are many computational approaches for predicting physical and chemical properties, several methods were considered for the prediction of protein-protein dissociation, protein backbone fragmentation, and partition coefficients of drug-like molecules.
- Title
- Validation of two growth and yield models on red pine plantations in Michigan
- Creator
- Smith-Mateja, Erin E.
- Date
- 2003
- Collection
- Electronic Theses & Dissertations
- Title
- A particle-in-cell method for the simulation of plasmas based on an unconditionally stable wave equation solver
- Creator
- Wolf, Eric Matthew
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
In this dissertation, we present a particle-in-cell method for the simulation of plasmas based on an unconditionally stable solver for the second-order scalar wave equation, that is, a wave equation solver that is not subject to a Courant-Friedrichs-Lewy (CFL) stability restriction, typical of explicit methods, while maintaining a computational cost and code complexity comparable to such explicit solvers. This permits the use of a time step size many times larger than allowed by widely-used explicit methods.
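The CFL restriction the abstract refers to is, for a standard explicit second-order finite-difference discretization of the 1-D wave equation, the requirement that the Courant number c*dt/dx not exceed a stability limit (1 in the simplest scheme). A small sketch of the time-step bound an explicit solver must respect, and which an unconditionally stable solver removes; names and the 1-D setting are illustrative:

```python
def max_stable_dt(c, dx, cfl_limit=1.0):
    """Largest time step allowed by the CFL condition c*dt/dx <= cfl_limit
    for an explicit 1-D second-order wave-equation solver.

    c: wave speed, dx: grid spacing. An unconditionally stable
    (e.g. implicit) solver has no such upper bound on dt.
    """
    return cfl_limit * dx / c


# Fast waves or fine grids force a tiny explicit time step:
dt = max_stable_dt(c=2.0, dx=0.1)
```

Refining the grid tenfold, or raising the wave speed tenfold, shrinks the admissible explicit time step tenfold; this is the cost the unconditionally stable solver avoids.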