Search results
(1 - 20 of 68)
- Title
- Monte-Carlo simulations of the (d,²He) reaction in inverse kinematics
- Creator
- Carls, Alexander B.
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
Charge-exchange reactions offer an indirect method for testing the theoretical models for Gamow-Teller strengths that are used to calculate electron-capture rates on medium-heavy nuclei, which play important roles in astrophysical phenomena. Many of the relevant nuclei are unstable, but a good general probe for performing (n,p)-type charge-exchange reactions in inverse kinematics has not yet been established. The (d,²He) reaction in inverse kinematics is being developed as a potential candidate for this probe. The method uses the Active-Target Time Projection Chamber (AT-TPC) to detect the two protons from the unbound ²He system, and the S800 spectrograph to detect the heavy recoil. The feasibility of this method is demonstrated through Monte-Carlo simulations. The ATTPCROOTv2 code is the framework that allows for simulation of reactions within the AT-TPC, as well as digitization of the results in the pad planes to produce realistic simulated data. The analysis performed on these data with ATTPCROOTv2 demonstrates the techniques that can be used in experiment to track the scattered protons through the detector using Random Sample Consensus (RANSAC) algorithms.
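The proton tracking described above rests on the RANSAC principle: repeatedly fit a model to a random minimal sample and keep the candidate with the most inliers. As an illustration of that principle only (not the AT-TPC analysis code), here is a minimal 2D line-fitting sketch; all names and parameters are hypothetical.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.5, seed=None):
    """Fit a 2D line to noisy points by Random Sample Consensus.

    Each iteration draws two random points, forms the line through them,
    and counts points within inlier_tol of that line; the candidate with
    the most inliers wins. Returns (point_on_line, unit_dir, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    best_mask, best_p, best_d = None, None, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        d = d / norm
        rel = points - p
        # Perpendicular distance of every point to the candidate line.
        dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
        mask = dist < inlier_tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_p, best_d = mask, p, d
    return best_p, best_d, best_mask

# Toy "track": points along y = 2x + 1, plus scattered outliers.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
track = np.column_stack([x, 2.0 * x + 1.0 + rng.normal(0.0, 0.1, 50)])
outliers = rng.uniform(0.0, 20.0, size=(10, 2))
pts = np.vstack([track, outliers])
p0, d0, mask = ransac_line(pts, seed=1)
# Least-squares refit on the inliers sharpens the final estimate.
slope, intercept = np.polyfit(pts[mask, 0], pts[mask, 1], 1)
```

The refit-on-inliers step mirrors common practice: RANSAC supplies a robust inlier set, and an ordinary least-squares fit on that set gives the final track parameters.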
- Title
- Advances in metal ion modeling
- Creator
- Li, Pengfei (Chemist)
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
Metal ions play fundamental roles in geochemistry, biochemistry, and materials science. With the tremendous growth of computational resources and the invention of many computational tools, computational chemistry has become an increasingly important means of studying chemical processes. The force-field modeling strategy, which is built on a physical background, offers a fast way to study chemical systems at the atomic level, and it can offer considerable accuracy when combined with Monte Carlo or Molecular Dynamics simulation protocols. However, there is a great variety of metal ions, and it is still challenging to model them with available force-field models. Several models exist for treating metal ions within the force-field approach: the nonbonded model, the bonded model, the cationic dummy-atom model, the combined model, and polarizable models. Our work concentrated on the nonbonded and bonded models, which are widely used today.

First, we focused on filling in the blanks in this field. We proposed a noble gas curve, which describes the relationship between the van der Waals radius and well-depth parameters in the 12-6 Lennard-Jones potential. Using the noble gas curve and multiple target values (the hydration free energy, ion-oxygen distance, and coordination number), we consistently parameterized the 12-6 Lennard-Jones nonbonded model for 63 different ions (including 11 monovalent cations, 4 monovalent anions, 24 divalent cations, 18 trivalent cations, and 6 tetravalent cations) in combination with three widely used water models (TIP3P, SPC/E, and TIP4P-Ew). Second, we found that the limited accuracy of the 12-6 model makes it hard to simulate different properties simultaneously for ions with a formal charge of +2 or larger. By considering the physical origins of the 12-6 model, we proposed a new nonbonded model, named the 12-6-4 LJ-type nonbonded model. We have systematically parameterized the 12-6-4 model for 55 different ions (including 11 monovalent cations, 4 monovalent anions, 16 divalent cations, 18 trivalent cations, and 6 tetravalent cations) in the three water models. The 12-6-4 model was shown to reproduce several properties at the same time, a remarkable improvement over the 12-6 model. Meanwhile, through the use of a proposed combining rule, the 12-6-4 model showed excellent transferability to mixed systems. Third, we developed the MCPB.py program to facilitate building of the bonded model for metal-ion-containing systems, which can greatly reduce human effort. Finally, an application to a metallochaperone, CusF, is shown, and based on the simulations we hypothesized an ion-transfer mechanism.
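The abstract's central objects, the 12-6 Lennard-Jones potential and its 12-6-4 extension, can be written out directly. The sketch below uses illustrative parameter values, not the fitted parameters from the dissertation; the extra -C4/r^4 term is the charge-induced dipole contribution the 12-6-4 model adds.

```python
def lj_12_6(r, eps, rmin):
    """Standard 12-6 Lennard-Jones potential at separation r.
    eps is the well depth; rmin is the separation at the energy minimum,
    so U(rmin) = -eps."""
    x = rmin / r
    return eps * (x**12 - 2.0 * x**6)

def lj_12_6_4(r, eps, rmin, c4):
    """12-6-4 LJ-type potential: the added -c4/r**4 term models
    charge-induced dipole (ion-water polarization) interactions,
    deepening the well for highly charged ions."""
    return lj_12_6(r, eps, rmin) - c4 / r**4

# Illustrative (not fitted) parameters for a divalent-like ion:
eps, rmin, c4 = 0.1, 2.8, 50.0
u6 = lj_12_6(rmin, eps, rmin)        # equals -eps at the minimum
u64 = lj_12_6_4(rmin, eps, rmin, c4)  # deeper than the plain 12-6 well
```

At any fixed geometry the 12-6-4 energy is lower than the 12-6 energy by exactly C4/r^4, which is why refitting eps and rmin alongside C4 was necessary in the parameterization described above.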
- Title
- Design and simulation of single-crystal diamond diodes for high voltage, high power and high temperature applications
- Creator
- Suwanmonkha, Nutthamon
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
Diamond has exceptional properties and great potential for making high-power semiconducting electronic devices that surpass the capabilities of other common semiconductors, including silicon. The superior properties of diamond include a wide bandgap, high thermal conductivity, a large electric breakdown field, and fast carrier mobilities, all of which are crucial for a semiconductor used to make electronic devices that operate at high power levels, high voltage, and high temperature. Two-dimensional semiconductor device simulation software such as Medici assists engineers in designing device structures that meet the performance requirements of device applications. Most physical material parameters of the well-known semiconductors are already compiled and embedded in Medici; diamond, however, is not among them. Material parameters for diamond, including models for incomplete ionization, temperature- and impurity-dependent mobility, and impact ionization, are not readily available in software such as Medici. In this work, models and data for diamond semiconductor material have been developed for Medici based on results reported in the research literature and on experimental work at Michigan State University. After equipping Medici with diamond material parameters, simulations of various diamond diodes, including Schottky, PN-junction, and merged Schottky/PN-junction structures, are reported. Diodes are simulated versus changes in doping concentration, drift-layer thickness, and operating temperature. The diode performance metrics studied include the breakdown voltage, turn-on voltage, and specific on-resistance; the goal is to find designs that yield low power loss and high voltage-blocking capability. Simulation results are presented that provide insight for the design of diamond diodes using the various diode structures. Results are also reported on the use of field-plate structures in the simulations to control the electric field and increase the breakdown voltage.
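The breakdown-voltage versus specific on-resistance tradeoff mentioned above follows textbook one-sided, non-punch-through junction relations: at breakdown W = 2V_br/E_c, the doping sets the peak field to E_c, and R_on,sp = 4V_br²/(μ ε E_c³). This back-of-envelope sketch is not from the thesis; the diamond-like numbers are assumed for illustration only.

```python
# Back-of-envelope drift-layer design for a unipolar diode
# (one-sided, non-punch-through junction; units: cm, V, F/cm).
Q = 1.602e-19          # elementary charge, C
EPS0 = 8.854e-14       # vacuum permittivity, F/cm

def drift_design(v_br, e_crit, mobility, eps_r):
    """Given a target breakdown voltage v_br (V), critical field e_crit
    (V/cm), carrier mobility (cm^2/V·s), and relative permittivity,
    return (drift thickness cm, doping cm^-3, specific R_on ohm·cm^2)."""
    w = 2.0 * v_br / e_crit                 # depletion width at breakdown
    nd = eps_r * EPS0 * e_crit / (Q * w)    # doping that sets peak E = e_crit
    r_on_sp = 4.0 * v_br**2 / (mobility * eps_r * EPS0 * e_crit**3)
    return w, nd, r_on_sp

# Assumed diamond-like values (illustrative, not measured):
# e_crit ~ 1e7 V/cm, hole mobility ~ 2000 cm^2/V·s, eps_r = 5.7.
w, nd, ron = drift_design(v_br=10_000, e_crit=1e7, mobility=2000.0, eps_r=5.7)
# For a 10 kV target this gives a ~20 um drift layer, doping ~1.6e16 cm^-3.
```

The cubic dependence of R_on,sp on 1/E_c³ is what makes wide-bandgap materials like diamond attractive: a tenfold higher critical field cuts the drift-region resistance by roughly a thousandfold at the same blocking voltage.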
- Title
- Unconstrained 3D face reconstruction from photo collections
- Creator
- Roth, Joseph (Software engineer)
- Date
- 2016
- Collection
- Electronic Theses & Dissertations
- Description
-
This thesis presents a novel approach for 3D face reconstruction from unconstrained photo collections. An unconstrained photo collection is a set of face images captured under unknown and diverse variations of pose, expression, and illumination. The output of the proposed algorithm is a true 3D face surface model represented as a watertight triangulated surface with albedo data, colloquially referred to as texture information. Reconstructing a 3D understanding of a face from 2D input is a long-standing computer vision problem. Traditional photometric stereo-based reconstruction techniques work on aligned 2D images and produce a 2.5D depth-map reconstruction. We extend face reconstruction to work with a true 3D model, allowing us to enjoy the benefits of using images from all poses, up to and including profiles. To use a 3D model, we propose a novel normal-field-based Laplace editing technique that deforms a triangulated mesh to match the observed surface normals. Unlike prior work that requires large photo collections, we formulate an approach that adapts to photo collections with few images of potentially poor quality. We achieve this by incorporating prior knowledge about face shape: a 3D Morphable Model is fit to form a personalized template before a novel analysis-by-synthesis photometric stereo formulation completes the fine facial details. A structural-similarity-based quality measure allows evaluation in the absence of ground-truth 3D scans. Superior large-scale experimental results are reported on Internet, synthetic, and personal photo collections.
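The photometric stereo step referenced above has a classical Lambertian core: with known lighting directions, per-pixel image intensities satisfy I = L·(ρn), so the albedo-scaled normal falls out of a least-squares solve. This is a minimal sketch of that core idea, not the thesis's analysis-by-synthesis formulation; the synthetic single-pixel check is hypothetical.

```python
import numpy as np

def photometric_stereo(intensities, lights):
    """Recover albedo-scaled surface normals from Lambertian shading.

    intensities: (k, m) stack of k images over m pixels.
    lights:      (k, 3) lighting directions (unit vectors).
    Solves I = L @ g per pixel, where g = albedo * normal."""
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.where(albedo == 0, 1.0, albedo)
    return normals, albedo

# Synthetic check: one pixel with a known normal and albedo,
# imaged under four lights (no shadows, no specularities).
n_true = np.array([0.0, 0.6, 0.8])
rho = 0.5
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.6, 0.0, 0.8]])
I = rho * L @ n_true            # Lambertian: I = rho * (l . n)
normals, albedo = photometric_stereo(I[:, None], L)
```

With at least three non-coplanar lights the solve is exact on clean data; real pipelines, like the one in this thesis, must additionally handle unknown lighting, shadows, and unaligned poses.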
- Title
- Parallel computation models : representation, analysis and applications
- Creator
- Sun, Xian-He
- Date
- 1990
- Collection
- Electronic Theses & Dissertations
- Title
- The dynamics and scientific visualization for the electrophoretic deposition processing of suspended colloidal particles onto a reinforcement fiber
- Creator
- Robinson, Peter Timothy
- Date
- 1993
- Collection
- Electronic Theses & Dissertations
- Title
- A multiport approach to modeling and solving large-scale dynamic systems
- Creator
- Wang, Yanying
- Date
- 1992
- Collection
- Electronic Theses & Dissertations
- Title
- Diverse platform modeling of dynamical systems
- Creator
- Mitchell, Robert Alex
- Date
- 1991
- Collection
- Electronic Theses & Dissertations
- Title
- Reliability improvement of DFIG-based wind energy conversion systems by real time control
- Creator
- Elhmoud, Lina Adnan Abdullah
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
Reliability is the probability that a system or component will satisfactorily perform its intended function under given operating conditions. The average time of satisfactory operation of a system is called the mean time between failures (MTBF); a higher MTBF indicates higher reliability, and vice versa. Nowadays, reliability is of greater concern than in the past, especially for offshore wind turbines, since access to these installations in case of failure is both costly and difficult. Power semiconductor devices are often ranked as the most vulnerable components of a power conversion system from a reliability perspective. Lifetime prediction of power modules based on a mission profile is therefore an important issue, and lifetime modeling of future large wind turbines is needed to make reliability predictions in the early design phase. By conducting reliability prediction in the design phase, a manufacturer can ensure that new wind turbines will operate within designed reliability metrics such as lifetime.

This work presents a reliability analysis of power electronic converters for wind energy conversion systems (WECS) based on semiconductor power losses. A real-time control scheme is proposed to maximize the system's lifetime and the accumulated energy produced over that lifetime. It has been verified through the reliability model that a low-pass-filter-based control can effectively increase the MTBF and lifetime of the power modules; the fundamental cause of the higher MTBF lies in the reduction of the number of thermal cycles. The key element in a power conversion system is the power semiconductor device, which operates as a power switch. Improvements in power semiconductor devices are the critical driving force behind the improved performance, efficiency, and reduced size and weight of power conversion systems. As power density and switching frequency increase, thermal analysis of the power electronic system becomes imperative; the analysis provides information on semiconductor device rating, reliability, and lifetime calculation. The power throughput of a state-of-the-art WECS equipped with maximum-power-point control algorithms is subject to wind-speed fluctuations, which may cause significant thermal cycling of the IGBTs in the power converter and in turn reduce their lifetime. To address this reliability issue, a real-time control scheme based on the reliability model of the system is proposed. In this work, a doubly fed induction generator is used as a demonstration system to prove the effectiveness of the proposed method. An average model of the three-phase converter has been adopted for thermal modeling and lifetime estimation, and a low-pass-filter-based control law modifies the power command from the conventional WECS control output. The resulting reliability of the system is significantly improved, as evidenced by the simulation results.
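The low-pass-filter-based control law described above can be sketched as a first-order discrete filter applied to the power command: smoothing the command limits the junction-temperature swings that drive thermal cycling. This is a generic illustration of the mechanism, not the dissertation's actual control law; the signal and time constant are made up.

```python
def low_pass(signal, dt, tau, y0=0.0):
    """First-order discrete low-pass filter, y' = (u - y) / tau.

    Smoothing the power command reduces the amplitude of the
    junction-temperature cycles in the converter's IGBT modules,
    which is the mechanism behind the MTBF improvement."""
    alpha = dt / (tau + dt)
    y, out = y0, []
    for u in signal:
        y += alpha * (u - y)
        out.append(y)
    return out

# Gusty power command (kW): step changes every sample.
cmd = [100, 300, 100, 300, 100, 300, 100, 300]
smooth = low_pass(cmd, dt=1.0, tau=10.0, y0=100.0)
# The filtered command swings far less than the raw 200 kW steps.
```

The design tradeoff is visible in tau: a longer time constant means fewer and shallower thermal cycles (longer module life) at the cost of tracking the maximum-power-point command less aggressively.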
- Title
- Predictive control of a hybrid powertrain
- Creator
- Yang, Jie
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
-
Powertrain supervisory control strategy plays an important role in the overall performance of hybrid electric vehicles (HEVs), especially for fuel-economy improvement. Supervisory control includes power distribution, driver-demand fulfillment, battery boundary management, fuel-economy optimization, emission reduction, etc. Developing an optimal control strategy is quite a challenge due to the high degrees of freedom introduced by the multiple power sources in a hybrid powertrain. This dissertation focuses on driving-torque prediction, battery boundary management, and fuel-economy optimization.

For a hybrid powertrain, when the desired torque (driver torque demand) is outside the battery's operational limits, the internal combustion (IC) engine needs to be turned on to deliver additional power (torque) to the powertrain. But the slow response of the IC engine, compared with the electric motors (EMs), prevents it from providing power immediately; as a result, before the engine power is ready, the battery has to be over-discharged to provide the desired powertrain power. This dissertation presents an adaptive recursive prediction algorithm that predicts the future desired torque based on past and current pedal positions. The recursive nature of the algorithm reduces the computational load significantly and makes real-time implementation feasible. Two weighting coefficients are introduced to weight newly sampled data more heavily and to avoid numerical singularity; this greatly improves prediction accuracy and allows the algorithm to adapt to different driver behaviors and driving conditions. Based on the online-predicted desired torque and its error variance, a stochastic predictive boundary-management strategy is proposed. The smallest upper bound of the future desired torque for a given confidence level is obtained from the predicted desired torque and the prediction error variance, and it is used to determine whether the engine needs to be proactively turned on. That is, the engine can be ready to provide power for the "future" when the actual power (torque) demand exceeds the battery output limits. Correspondingly, the battery over-discharging duration can be greatly reduced, leading to extended battery life and improved HEV performance.

To optimize powertrain fuel economy, a model predictive control (MPC) strategy is developed based on the linear quadratic tracking (LQT) approach. The finite-horizon LQT control is based on a discrete-time system model obtained by linearizing the nonlinear HEV model, and only the first step of the solution is applied at each control step, with the process repeated at the next step. The effectiveness of the supervisory control strategy is studied and validated in simulations under typical driving cycles using a forward power-split HEV model. The developed MPC-LQT scheme tracks the predicted desired-torque trajectory over the prediction horizon, minimizes powertrain fuel consumption, maintains the battery state of charge at the desired level, and operates the battery within its designed boundary.
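Adaptive recursive predictors of the kind described above are commonly built as recursive least squares with a forgetting factor: one coefficient discounts old samples, another regularizes the inverse correlation matrix against singularity. The sketch below is a generic stand-in for the dissertation's algorithm, not its actual formulation; the AR(2) torque model and all parameters are hypothetical.

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with a forgetting factor.

    lam < 1 weights newly sampled data more heavily; delta sets the
    initial inverse correlation matrix, regularizing early updates
    so the recursion never inverts a singular matrix."""

    def __init__(self, n, lam=0.98, delta=100.0):
        self.w = np.zeros(n)          # model coefficients
        self.P = delta * np.eye(n)    # inverse correlation matrix
        self.lam = lam

    def update(self, x, y):
        """One step: predict y from features x, then correct the weights."""
        x = np.asarray(x, float)
        y_hat = self.w @ x
        k = self.P @ x / (self.lam + x @ self.P @ x)   # gain vector
        self.w += k * (y - y_hat)
        self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
        return y_hat

# Predict the next torque demand from the last two samples (AR(2) model)
# on a ramp-like driver demand signal.
torque = [50.0 + 0.5 * t for t in range(200)]
rls = ForgettingRLS(n=2)
for t in range(2, 200):
    rls.update([torque[t - 1], torque[t - 2]], torque[t])
pred = rls.w @ np.array([torque[-1], torque[-2]])  # one-step-ahead forecast
# For a linear ramp the exact AR(2) relation is y_t = 2*y_{t-1} - y_{t-2},
# so the forecast should land near 150.
```

The rank-one update costs O(n²) per sample instead of refitting a batch regression, which is what makes this family of predictors viable in a real-time supervisory controller.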
- Title
- Three-dimensional dynamic motion of the shoulder complex
- Creator
- Reid, Tamara Ann
- Date
- 1994
- Collection
- Electronic Theses & Dissertations
- Title
- Using top down multiport modeling for automotive applications
- Creator
- Minor, Mark Andrew
- Date
- 1996
- Collection
- Electronic Theses & Dissertations
- Title
- Ring pack behavior and oil consumption modeling in IC engines
- Creator
- Ejakov, Mikhail Aleksandrovich
- Date
- 1998
- Collection
- Electronic Theses & Dissertations
- Title
- Modeling of accelerator systems and experimental verification of Quarter-Wave Resonator steering
- Creator
- Benatti, Carla
- Date
- 2014
- Collection
- Electronic Theses & Dissertations
- Description
-
Increasingly complicated accelerator systems depend more and more on computing power and computer simulations for their operation, as progress in the field has led to cutting-edge advances that require finer control and better understanding to achieve optimal performance. Greater ambitions, coupled with the technical complexity of today's state-of-the-art accelerators, necessitate corresponding advances in available accelerator modeling resources. Modeling is a critical component of any field of physics, and accelerator physics is no exception: it is extremely important not only to understand the basic underlying physics principles but to implement this understanding in relevant modeling tools that provide the ability to investigate and study various complex effects. Moreover, these tools can lead to new insights and applications that facilitate control-room operations and enable advances that would not otherwise be possible. The ability to accurately model accelerator systems aids the successful operation of machines designed to deliver beams to experiments across a wide variety of fields, ranging from materials-science research to nuclear astrophysics. One such accelerator discussed throughout this work is the ReA facility at the National Superconducting Cyclotron Laboratory (NSCL), which re-accelerates rare-isotope beams for nuclear astrophysics experiments. A major component of the ReA facility, as well as of the future Facility for Rare Isotope Beams (FRIB) among other accelerators, is the Quarter-Wave Resonator (QWR), a coaxial accelerating cavity convenient for efficient acceleration of low-velocity particles. It is very important to model this device accurately, as it operates in the critical low-velocity region where the beam's acceleration gains are proportionally larger than in the later stages of acceleration. Compounding the matter, QWRs defocus the beam and are asymmetric with respect to the beam pipe, which can induce steering on the beam. These additional complications make the QWR a significant device to study in order to optimize the accelerator's overall performance.

The NSCL and ReA, along with FRIB, are first introduced to provide background and motivate the central modeling objectives of this work. The next chapter discusses the underlying beam-physics principles, which form the basis from which the modeling methods are derived; the methods presented include multi-particle tracking and beam-envelope matrix transport. The following chapter investigates modeling elements in more detail, including quadrupoles, solenoids, and coaxial accelerating cavities. Assemblies of accelerator elements, or lattices, have been modeled as well, and a method for modeling multiple-charge-state transport using linear matrix methods is also given. Finally, an experiment studying beam steering induced by QWRs is presented: the first systematic experimental investigation of this effect. Characterization of this steering is important for accurate modeling of beam transport through the linac. The measurement technique devised at ReA investigates the effect's dependence on the beam's vertical offset within the cavity, the cavity amplitude, and the beam energy at entrance into the cavity. The results agree well with analytical predictions based on geometrical parameters calculated from on-axis field profiles. Incorporating this effect into modeling codes has the potential to speed up complex accelerator operation and tuning procedures in systems using QWRs.
- Title
- Two-dimensional drafting template and three-dimensional computer model representing the average adult male in automotive seated postures
- Creator
- Bush, Neil James
- Date
- 1992
- Collection
- Electronic Theses & Dissertations
- Title
- Evaluation of calibration for optical see-through augmented reality systems
- Creator
- McGarrity, Erin Scott
- Date
- 2001
- Collection
- Electronic Theses & Dissertations
- Title
- Shelf life estimation of USP 10mg Prednisone calibrator tablets in relation to dissolution & new windows-based shelf life computer program
- Creator
- Yoon, Seungyil
- Date
- 2000
- Collection
- Electronic Theses & Dissertations
- Title
- The evolution of digital communities under limited resources
- Creator
- Walker, Bess Linden
- Date
- 2012
- Collection
- Electronic Theses & Dissertations
- Description
-
Schluter (1996) describes adaptive radiation as "the diversification of a lineage into species that exploit a variety of different resource types and that differ in the morphological or physiological traits used to exploit those resources". My research focuses on adaptive radiation in the context of limited resources, where frequency-dependence is an important driver of selection (Futuyma & Moreno, 1988; Dieckmann & Doebeli, 1999; Friesen et al., 2004). Adaptive radiation yields a community composed of distinct organism types adapted to specific niches. I study simple communities of digital organisms, the result of adaptive radiation in environments with limited resources. I ask (and address) the questions: How does diversity, driven by resource limitation, affect the frequency with which complex traits arise? What other aspects of the evolutionary pressures in this limited-resource environment might account for the increase in frequency with which complex traits arise? Can we predict community stability when it encounters another community, and is our prediction different for communities resulting from adaptive radiation versus those that are artificially assembled?

Community diversity is higher in environments with limited resources than in those with unlimited resources. The evolution of an example complex feature (in this case, Boolean EQU) is also more common in limited-resource environments, and shows a strong correlation with diversity over a range of resource inflow rates. I show that populations evolving at intermediate inflow rates explore areas of the fitness landscape in which EQU is common, and that those in unlimited-resource environments do not. Another feature of the limited-resource environments is the reduced cost of trading off the execution of building-block tasks for higher-complexity tasks. I find strong causal evidence that this reduced cost is a factor in the more common evolution of EQU in limited-resource environments. When two communities meet in competition, the fraction of each community's descendants making up the final post-competition community is strongly consistent across replicates. I find that three community-level factors (ecotypic diversity, community composition, and resource-use efficiency) can be used to predict this fractional community success, explaining up to 35% of the variation. In summary, I demonstrate the value of digital communities as a tractable experimental system for studying general community properties. They sit at the bridge between ecology and evolutionary biology and evolutionary computation, and offer comprehensible ways to translate ideas across these fields.
- Title
- A global modeling framework for plasma kinetics : development and applications
- Creator
- Parsey, Guy Morland
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
The modern study of plasmas, and applications thereof, has developed synchronously with com-puter capabilities since the mid-1950s. Complexities inherent to these charged-particle, many-body, systems have resulted in the development of multiple simulation methods (particle-in-cell,fluid, global modeling, etc.) in order to both explain observed phenomena and predict outcomesof plasma applications. Recognizing that different algorithms are chosen to best address specifictopics of interest, this...
Show moreThe modern study of plasmas, and applications thereof, has developed synchronously with com-puter capabilities since the mid-1950s. Complexities inherent to these charged-particle, many-body, systems have resulted in the development of multiple simulation methods (particle-in-cell,fluid, global modeling, etc.) in order to both explain observed phenomena and predict outcomesof plasma applications. Recognizing that different algorithms are chosen to best address specifictopics of interest, this thesis centers around the development of an open-source global model frame-work for the focused study of non-equilibrium plasma kinetics. After verification and validationof the framework, it was used to study two physical phenomena: plasma-assisted combustion andthe recently proposed optically-pumped rare gas metastable laser.Global models permeate chemistry and plasma science, relying on spatial averaging to focusattention on the dynamics of reaction networks. Defined by a set of species continuity and energyconservation equations, the required data and constructed systems are conceptually similar acrossmost applications, providing a light platform for exploratory and result-search parameter scan-ning. Unfortunately, it is common practice for custom code to be developed for each application-an enormous duplication of effort which negatively affects the quality of the software produced.Presented herein, the Python-based Kinetic Global Modeling framework (KGMf) was designed tosupport all modeling phases: collection and analysis of reaction data, construction of an exportablesystem of model ODEs, and a platform for interactive evaluation and post-processing analysis. Asymbolic ODE system is constructed for interactive manipulation and generation of a Jacobian,both of which are compiled as operation-optimized C-code.Plasma-assisted combustion and ignition (PAC/PAI) embody the modernization of burning fuelby opening up new avenues of control and optimization. 
With applications ranging from engineefficiency and pollution control to stabilized operation of scramjet technology in hypersonic flows,developing an understanding of the underlying plasma chemistry is of the utmost importance.While the use of equilibrium (thermal) plasmas in the combustion process extends back to the ad-vent of the spark-ignition engine, works from the last few decades have demonstrated fundamentaldifferences between PAC and classical combustion theory. The KGMf is applied to nanosecond-discharge systems in order to analyze the effects of electron energy distribution assumptions onreaction kinetics and highlight the usefulness of 0D modeling in systems defined by coupled andcomplex physics.With fundamentally different principles involved, the concept of optically-pumped rare gasmetastable lasing (RGL) presents a novel opportunity for scalable high-powered lasers by takingadvantage of similarities in the electronic structure of elements while traversing the periodic ta-ble. Building from the proven concept of diode-pumped alkali vapor lasers (DPAL), RGL systemsdemonstrate remarkably similar spectral characteristics without problems associated with heatedcaustic vapors. First introduced in 2012, numerical studies on the latent kinetics remain immature.This work couples an analytic model developed for DPAL with KGMf plasma chemistry to bet-ter understand the interaction of a non-equilibrium plasma with the induced laser processes anddetermine if optical pumping could be avoided through careful discharge selection.
- Title
- Data-driven and task-specific scoring functions for predicting ligand binding poses and affinity and for screening enrichment
- Creator
- Ashtawy, Hossam Mohamed Farg
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
Molecular modeling has become an essential tool to assist in early stages of drug discovery and development. Molecular docking, scoring, and virtual screening are three such modeling tasks of particular importance in computer-aided drug discovery. They are used to computationally simulate the interaction between small drug-like molecules, known as ligands, and a target protein whose activity is to be altered. Scoring functions (SF) are typically employed to predict the binding conformation (docking task), binary activity label (screening task), and binding affinity (scoring task) of ligands against a critical protein in the disease's pathway. In most molecular docking software packages available today, a generic binding affinity-based (BA-based) SF is invoked for the three tasks to solve three different, but related, prediction problems. The vast majority of these predictive models are knowledge-based, empirical, or force-field scoring functions. The fourth family of SFs that has gained popularity recently and showed potential of improved accuracy is based on machine-learning (ML) approaches. Despite intense efforts in developing conventional and current ML SFs, their limited predictive accuracies in these three tasks have been a major roadblock toward cost-effective drug discovery. Therefore, in this work we present (i) novel task-specific and multi-task SFs employing large ensembles of deep neural networks (NN) and other state-of-the-art ML algorithms in conjunction with (ii) data-driven multi-perspective descriptors (features) for accurate characterization of protein-ligand complexes (PLCs) extracted using our Descriptor Data Bank (DDB) platform.

We assess the docking, screening, scoring, and ranking accuracies of the proposed task-specific SFs with DDB descriptors as well as several conventional approaches in the context of the 2007 and 2014 PDBbind benchmarks that encompass a diverse set of high-quality PLCs.
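Two of the headline metrics used in these assessments are the Pearson correlation between predicted and measured binding affinities (scoring task) and the screening enrichment factor (screening task). A stdlib-only sketch of both, with illustrative data rather than anything from PDBbind or DDB:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between predicted and measured affinities."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def enrichment_factor(scores, labels, top_frac=0.01):
    """EF = fraction of actives in the top-scored slice, relative to
    the fraction expected from random selection."""
    n_top = max(1, int(len(scores) * top_frac))
    ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
    actives_top = sum(label for _, label in ranked[:n_top])
    actives_total = sum(labels)
    return (actives_top / n_top) / (actives_total / len(scores))
```

An EF of 33.90, as reported for BsN-Screen below, means the top-ranked slice of the screening library contains roughly 34 times more active molecules than a random pick of the same size would.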
Our approaches substantially outperform conventional SFs based on BA and single-perspective descriptors in all tests. In terms of scoring accuracy, we find that the ensemble NN SFs, BsN-Score and BgN-Score, achieve more than 34% better correlation (0.844 and 0.840 vs. 0.627) between predicted and measured BAs than X-Score, a top-performing conventional SF. We further find that ensemble NN models surpass SFs based on other state-of-the-art ML algorithms. Similar results have been obtained for the ranking task: within clusters of PLCs with different ligands bound to the same target protein, the best ensemble NN SF ranks the ligands correctly 64.6% of the time, compared to 57.8% for X-Score. A substantial improvement in the docking task has also been achieved by our proposed docking-specific SFs. The docking NN SF, BsN-Dock, has a success rate of 95% in identifying poses that are within 2 Å RMSD of the native poses of 65 different protein families, compared to a success rate of only 82% for the best conventional SF, ChemPLP, employed in the commercial docking software GOLD. As for the ability to distinguish active molecules from inactive ones, our screening-specific SFs show excellent improvements over conventional approaches: the proposed SF BsN-Screen achieves a screening enrichment factor of 33.90, as opposed to 19.54 from the best conventional SF, GlideScore, employed in the docking software Glide. For all tasks, we observe that the proposed task-specific SFs benefit more than their conventional counterparts from increases in the number of descriptors and training PLCs; they also perform better on novel proteins that they were never trained on. In addition to the three task-specific SFs, we propose a novel multi-task deep neural network (MT-Net) that is trained on data from all three tasks to simultaneously predict binding poses, affinities, and activity labels.
MT-Net is composed of shared hidden layers for the three tasks to learn common features, task-specific hidden layers for higher feature representation, and three outputs for the three tasks. We show that the performance of MT-Net is superior to conventional SFs and competitive with other ML approaches. Based on current results and potential improvements, we believe our proposed ideas will have a transformative impact on the accuracy and outcomes of molecular docking and virtual screening.
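The shared-trunk-plus-task-heads architecture described for MT-Net can be sketched as a single forward pass. This is an illustrative NumPy toy, not MT-Net itself: the layer sizes, the 128-dimensional descriptor vector, and the choice of `tanh` activations are all made up for the sketch; only the topology (shared hidden layers, task-specific hidden layers, three outputs) follows the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Randomly initialized weight matrix and bias (untrained, for illustration)."""
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

def forward(x, shared, heads):
    h = x
    for W, b in shared:                        # shared trunk: common features
        h = np.tanh(h @ W + b)
    outs = {}
    for name, (layers, act) in heads.items():  # task-specific branches
        t = h
        for W, b in layers[:-1]:               # task-specific hidden layers
            t = np.tanh(t @ W + b)
        W, b = layers[-1]
        outs[name] = act(t @ W + b)            # task output with its own activation
    return outs

identity = lambda z: z
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

shared = [layer(128, 64), layer(64, 32)]
heads = {
    "pose":     ([layer(32, 16), layer(16, 1)], identity),  # pose quality (regression)
    "affinity": ([layer(32, 16), layer(16, 1)], identity),  # binding affinity (regression)
    "activity": ([layer(32, 16), layer(16, 1)], sigmoid),   # active/inactive probability
}

x = rng.standard_normal(128)   # one descriptor vector (e.g., DDB-style features)
preds = forward(x, shared, heads)
```

Because the trunk parameters are updated by the gradients of all three losses during training, representations useful to one task can transfer to the others, which is the usual motivation for this kind of multi-task design.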