Search results (1 - 10 of 10)
- Title
- CHANGE OF SOIL MICRO-ENVIRONMENTS DURING PLANT DECOMPOSITION AND ITS EFFECT ON CARBON AND NITROGEN DYNAMICS
- Creator
- Kim, Kyungmin
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- The detritusphere is one of the most important soil hotspots; it consists of the soil surrounding dead organic materials and is affected by their decomposition and recycling. When assessing C and N dynamics associated with decomposition, the micro-environments created near decomposing residues should be considered, because microbial processes are governed by the conditions of the detritusphere micro-environments, not those of the bulk soil. Accounting for these micro-environments could help overcome the current inaccuracy of global greenhouse gas emission and biogeochemical cycle models. The goal of my Ph.D. research was to investigate how soil micro-environmental conditions within the detritusphere change during plant residue decomposition and to understand their role in and interactions with C and N transformations and dynamics. In Chapter 1, I evaluated water absorption by decomposing plant roots, building on the earlier finding of water absorption by leaves (the 'sponge effect'). In addition to demonstrating this sponge effect in root residues, I used micro-computed tomography to assess the soil moisture gradient it creates. The study found that moisture redistribution near decomposing roots depends on the initial soil moisture content and the pore characteristics near the roots. It also suggested that the anaerobic micro-environment formed near the roots might influence N2O emission in the early stage of decomposition. In Chapter 2, I hypothesized that the influence of moisture redistribution on N2O emission found in the previous chapter is mediated by reduced O2 availability near plant residues. I measured O2 and N2O concentrations in the pores adjacent to leaf and root residues using electrochemical microsensors. Leaf residues had lower O2 availability near them due to greater water absorption and microbial O2 consumption. Both N2O production and emission were negatively correlated with O2 availability, supporting the initial hypothesis. In Chapter 3, I investigated the fate of C and N during the decomposition of switchgrass roots grown in contrasting soil pores, to test whether the micro-environmental characteristics described in the earlier chapters significantly influence decomposition dynamics. Comprehensive assessments of CO2 and N2O emissions, priming effects, and C and N remaining in soil were performed using dual-isotope labeling (13C and 15N) techniques. The influence of soil pore size on plant-driven N2O emission, N2O priming, and enzyme activity was enhanced in in-situ grown root systems. The study also confirmed that detritusphere micro-environments formed in large-pore soils are more favorable for microbial activity and denitrification. My dissertation contributed to the characterization of micro-environmental conditions in the detritusphere and their relevance to C and N cycling. It stresses the importance of hotspot micro-environments in predicting greenhouse gas emissions and related microbial processes, and it urges further research to understand the full mechanism and to incorporate it into greenhouse gas prediction models.
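A note on the dual-isotope bookkeeping behind Chapter 3: partitioning an emitted gas between residue-derived and native-soil-derived carbon typically uses a two-pool isotope mixing model. A minimal sketch in Python, with every number hypothetical rather than taken from the dissertation:

```python
def residue_derived_fraction(delta_sample, delta_control, delta_residue):
    """Two-pool isotope mixing model: fraction of an emitted gas (e.g. CO2)
    derived from the 13C-labeled residue rather than from native soil C."""
    return (delta_sample - delta_control) / (delta_residue - delta_control)

# Hypothetical delta-13C values (permil): gas from the amended soil,
# gas from an unamended control, and the enriched residue itself.
f = residue_derived_fraction(delta_sample=120.0, delta_control=-20.0,
                             delta_residue=1500.0)
flux_total = 85.0  # hypothetical measured CO2 flux (ug C per unit time)
print(f"residue-derived fraction: {f:.3f}")           # ~0.092
print(f"residue-derived flux: {f * flux_total:.2f}")  # ug C
```

The native-soil-derived remainder, (1 - f) times the total flux, compared against the control flux is what a priming effect would be read from.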
- Title
- Constraining nuclear weak interactions in astrophysics and new many-core algorithms for neuroevolution
- Creator
- Sullivan, Christopher James
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
- "Weak interactions involving atomic nuclei are critical components in a broad range of astrophysical phenomena. As allowed Gamow-Teller transitions are the primary path through which weak interactions in nuclei operate in astrophysical contexts, constraining these nuclear transitions is an important goal of nuclear astrophysics. In this work, the charged-current nuclear weak interaction known as electron capture is studied in the context of stellar core-collapse supernovae (CCSNe). Specifically, the sensitivity of the core-collapse and early post-bounce phases of CCSNe to nuclear electron capture rates is examined. Electron capture rates are adjusted by factors consistent with uncertainties indicated by comparing theoretical rates to those deduced from charge-exchange and beta-decay measurements. With the aid of such sensitivity studies, the diverse role of electron capture on thousands of nuclear species is constrained to a few tens of nuclei near N ≈ 50 and A ≈ 80, which dictate the primary response of CCSNe to nuclear electron capture. As electron capture is shown to be a leading-order uncertainty during the core-collapse phase of CCSNe, future experimental and theoretical efforts should seek to constrain the rates of nuclei in this region. Furthermore, neutral-current neutrino-nucleus interactions in the tens-of-MeV energy range are important in a variety of astrophysical environments, including core-collapse supernovae, as well as in the synthesis of some of the solar system's rarest elements. Estimates for inelastic neutrino scattering on nuclei are also important for the construction of neutrino detectors aimed at the detection of astrophysical neutrinos. Due to the small cross sections involved, direct measurements are rare and have only been performed on a few nuclei. For this reason, indirect measurements provide a unique opportunity to constrain the nuclear transition strength needed to infer inelastic neutrino-nucleus cross sections. Herein, the (6Li, 6Li′) inelastic scattering reaction at 100 MeV/u is shown to indirectly select the relevant transitions for inelastic neutrino-nucleus scattering. Specifically, the probe's unique selectivity for isovector spin-transfer excitations (ΔS = 1, ΔT = 1, ΔTz = 0) is demonstrated, thereby allowing the extraction of Gamow-Teller transition strength in the inelastic channel. Finally, the development and performance of a newly established technique for the subfield of artificial intelligence known as neuroevolution is described. While separate from the physics that is discussed, these algorithmic advancements seek to improve the adoption of machine learning in the scientific domain by enabling neuroevolution to take advantage of modern heterogeneous compute architectures. Because the evolution of neural network populations offloads the choice of specific details about the neural networks to an evolutionary search algorithm, neuroevolution can increase the accessibility of machine learning. However, the evolution of neural networks through parameter and structural space presents a novel divergence problem when mapping the evaluation of these networks to many-core architectures. The principal focus of the algorithm optimizations described herein is on improving the feed-forward evaluation time when tens-to-hundreds of thousands of heterogeneous neural networks are evaluated concurrently."--Pages ii-iii.
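The divergence problem mentioned in the final paragraph arises because networks with different topologies want different control flow. One generic way to regularize the work (not necessarily the dissertation's algorithm) is to pad every network's weight matrices to a population-wide maximum width, so the whole population evaluates as one batched matrix multiply per layer. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "population": each genome has its own hidden width -- the kind of
# heterogeneity that causes divergence on SIMD/many-core hardware.
pop = []
for _ in range(1000):
    h = rng.integers(4, 17)                       # per-network hidden width
    pop.append([rng.normal(size=(8, h)), rng.normal(size=(h, 2))])

# Pad every layer to the population-wide maximum width.
hmax = max(w0.shape[1] for w0, _ in pop)
W0 = np.zeros((len(pop), 8, hmax))
W1 = np.zeros((len(pop), hmax, 2))
for i, (w0, w1) in enumerate(pop):
    W0[i, :, :w0.shape[1]] = w0
    W1[i, :w1.shape[0], :] = w1

x = rng.normal(size=(len(pop), 1, 8))             # one input per network
out = np.tanh(np.tanh(x @ W0) @ W1)               # batched feed-forward
print(out.shape)                                  # (1000, 1, 2)
```

Zero-padded hidden units stay at tanh(0) = 0, and the corresponding zero rows of the next layer's weights keep them from contributing, so the padded evaluation matches the per-network result.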
- Title
- The integration of computational methods and nonlinear multiphoton multimodal microscopy imaging for the analysis of unstained human and animal tissues
- Creator
- Murashova, Gabrielle Alyse
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
- Nonlinear multiphoton multimodal microscopy (NMMM) used in biological imaging is a technique that explores the combinatorial use of different multiphoton signals, or modalities, to achieve contrast in stained and unstained biological tissues. NMMM relies on nonlinear laser-matter interactions (LMIs), which utilize multiple photons at once (multiphoton processes, MPs). The statistical probability of multiple photons arriving at a focal point at the same time depends on the two-photon absorption (TPA) cross-section of the molecule being studied and is incredibly difficult to satisfy using typical incoherent light, say from a light bulb. Therefore, coherent photons from pulsed lasers are used for NMMM applications in biomedical imaging and diagnostics. In this dissertation, I hypothesized that, due to the near-IR wavelength of the ytterbium (Yb) fiber laser (1070 nm), the four MPs generated by focusing this ultrafast laser, namely two-photon excited fluorescence (2PEF), second harmonic generation (SHG), three-photon excited fluorescence (3PEF), and third harmonic generation (THG), will provide contrast in unstained tissues sufficient for augmenting current histological staining methods used in disease diagnostics. Additionally, I hypothesized that these NMMM images (NMMMIs) can benefit from computational methods to accurately separate their overlapping endogenous MP signals, as well as to train a neural network image classifier to detect neoplastic, inflammatory, and healthy regions in the human oral mucosa. Chapter II of this dissertation explores the use of NMMM to study the effects of storage on donated red blood cells (RBCs) using non-invasive 2PEF and THG without breaching the blood storage bag. In contrast to the lack of RBC fluorescence previously reported, we show that with two-photon (2P) excitation from an 800 nm source and three-photon (3P) excitation from a 1060 nm source, there was sufficient fluorescent signal from hemoglobin as well as from other endogenous fluorophores. Chapter III employs NMMM to establish the endogenous MP signals present in healthy excised and unstained mouse and Cynomolgus monkey retinas using 2PEF, 3PEF, SHG, and THG. We show the first epi-direction-detected cross-section and depth-resolved images of unstained isolated retinas obtained using NMMM with an ultrafast fiber laser centered at 1070 nm with an approximately 38 fs pulse. Two spectrally and temporally distinct regions were shown: one from the nerve fiber layer (NFL) to the inner receptor layer (IRL), and one from the retinal pigmented epithelium (RPE) and choroid. Chapter IV focuses on the use of minimal NMMM signals from a 1070 nm Yb-fiber laser to match and augment H&E-like contrast in human oral squamous cell carcinoma (OSCC) biopsies. In addition to performing depth-resolved (DR) imaging directly from the paraffin block and matching H&E-like contrast, we showed how the combination of characteristic inflammatory 2PEF signals undetectable in H&E-stained tissues and SHG signals from stromal collagen can be used to analytically distinguish healthy, mildly and severely inflammatory, and neoplastic regions and to determine neoplastic margins in a three-dimensional (3D) manner. Chapter V focuses on the use of computational methods to solve an inverse problem: unmixing the overlapping endogenous fluorescent and harmonic signals within mouse retinas. A least-squares fitting algorithm was most effective at accurately assigning photons from the NMMMIs to their sources. Unlike commercial software, this work permits the use of custom reference spectra for signal sources from endogenous molecules rather than from fluorescent tags and stains. Finally, Chapter VI explores the use of the OSCC images to train a neural network image classifier to achieve the overall goal of classifying the NMMMIs into three categories: healthy, inflammatory, and neoplastic. This work determined that even with a small dataset (< 215 images), the features present in NMMMIs, in combination with tiling and transfer learning, can train an image classifier to classify healthy, inflammatory, and neoplastic OSCC regions with 70% accuracy. My research successfully shows the potential of using NMMM in tandem with computational methods to augment current diagnostic protocols used by the health care system, with the potential to improve patient outcomes and decrease pathology department costs. These results should facilitate the continued study and development of NMMM so that, in the future, NMMM can be used for clinical applications.
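To illustrate the least-squares assignment in Chapter V: given reference emission spectra for each endogenous source, the mixture coefficients that best explain a measured pixel spectrum can be solved per pixel. The sketch below uses non-negative least squares as one plausible variant of such a fit, and the three Gaussian "reference spectra" are synthetic stand-ins, not real fluorophore data:

```python
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(400, 700, 61)                    # wavelength grid (nm)

def gauss(mu, sig):
    """Synthetic stand-in for a reference emission spectrum."""
    return np.exp(-0.5 * ((wl - mu) / sig) ** 2)

# Columns = hypothetical reference spectra of three endogenous sources.
A = np.column_stack([gauss(450, 25), gauss(520, 30), gauss(600, 20)])

# Synthetic "measured" pixel spectrum: a known mixture plus noise.
true_coef = np.array([3.0, 1.5, 0.5])
y = A @ true_coef + np.random.default_rng(1).normal(0, 0.02, wl.size)

# Non-negative least squares assigns the pixel's photons to each source.
coef, rnorm = nnls(A, y)
print(np.round(coef, 2))   # close to [3.0, 1.5, 0.5]
```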
- Title
- Quantitative methods for calibrated spatial measurements of laryngeal phonatory mechanisms
- Creator
- Ghasemzadeh, Hamzeh
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- The ability to perform measurements is an important cornerstone of, and a prerequisite for, any quantitative research. Measurements allow us to quantify the inputs and outputs of a system and then to express their relationships using concise mathematical expressions and models. Those models in turn enable us to understand how a target system works and to predict its output for changes in the system parameters. Conversely, models enable us to determine the proper parameters of a system for achieving a certain output. Putting these in the context of voice science research, variations in the parameters of the phonatory system can be attributed to individual differences. Thus, accurate models would enable us to account for individual differences during diagnosis and to make reliable predictions about the likely outcome of different treatment options. Analysis of vibration of the vocal folds using high-speed videoendoscopy (HSV) is an ideal candidate for constructing such computational models. However, conventional images are not spatially calibrated and cannot be used for absolute spatial measurements. This dissertation is focused on developing the methodologies required for calibrated spatial measurements from in-vivo HSV recordings. Specifically, two different approaches for calibrated horizontal measurements of HSV images are presented. The first, the indirect approach, is based on the registration of a specific attribute of a common object (e.g. the size of a lesion) from a calibrated intraoperative still image to its corresponding non-calibrated in-vivo HSV recording. This approach does not require specialized instruments and can be implemented in many clinical settings; however, its validity depends on a couple of assumptions, and violation of those assumptions could lead to significant measurement errors. The second, the direct approach, is based on a laser-projection flexible fiberoptic endoscope and enables accurate calibrated spatial measurements. This dissertation evaluates the accuracy of the first approach indirectly, by studying its underlying fundamental assumptions; the accuracy of the second approach is evaluated directly, using benchtop experiments with different surfaces, different working distances, and different imaging angles. The main contributions of this dissertation are the following: (1) a formal treatment of indirect horizontal calibration is presented, and the assumptions governing its validity and reliability are discussed; a battery of tests is presented that can indirectly assess the validity of those assumptions in laryngeal imaging applications; (2) pre- and post-surgery recordings from patients with vocal fold mass lesions are used as a test bench for the developed indirect calibration approach; in that regard, a full solution is developed for measuring the calibrated velocity of the vocal folds and is used to investigate post-surgery changes in the closing velocity of the vocal folds in patients with vocal fold mass lesions; (3) a method for calibrated vertical measurement from a laser-projection fiberoptic flexible endoscope is developed and evaluated at different working distances, at different imaging angles, and on a 3D surface; (4) a detailed analysis and investigation of the non-linear image distortion of a fiberoptic flexible endoscope is presented, and the effect of imaging angle and of the spatial location of an object on the magnitude of that distortion is studied and quantified; (5) a method for calibrated horizontal measurement from a laser-projection fiberoptic flexible endoscope is developed and evaluated at different working distances, at different imaging angles, and on a 3D surface.
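The indirect approach reduces to transferring a known length into the uncalibrated recording: measure an attribute (e.g. a lesion) in millimeters on the calibrated intraoperative still image, measure the same attribute in pixels on the HSV frame, and the ratio calibrates every other horizontal measurement. A minimal sketch with hypothetical numbers, including the kind of closing-velocity calculation described in contribution (2):

```python
# Known attribute from the calibrated intraoperative still image (mm),
# and the same attribute measured in the in-vivo HSV frame (pixels).
lesion_mm = 4.2          # hypothetical lesion size
lesion_px = 57.0         # hypothetical extent in the HSV image
mm_per_px = lesion_mm / lesion_px

# Calibrated closing velocity from vocal fold edge positions in two
# consecutive frames (frame rate assumed, not from the dissertation).
fps = 4000               # HSV frames per second
dx_px = 3.5              # edge displacement between frames (pixels)
closing_velocity = dx_px * mm_per_px * fps   # mm/s
print(f"{closing_velocity:.0f} mm/s")
```

As the abstract notes, this only holds under assumptions that the dissertation tests, such as the registered attribute itself not changing between the two recordings.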
- Title
- THREE-DIMENSIONAL MULTI-PHYSICS MODELING METHODOLOGY TO STUDY ENGINE CYLINDER-KIT ASSEMBLY TRIBOLOGY AND DESIGN CONSIDERATIONS
- Creator
- Chowdhury, Sadiyah Sabah
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- Engine cylinder-kit tribology is pivotal to the durability, emission management, friction, oil consumption, and efficiency of the internal combustion engine. The piston ring pack dynamics and the flow dynamics are critical to engine cylinder-kit tribology and design considerations. A three-dimensional (3D), multi-physics methodology is developed to investigate the liquid oil and combustion gas transport and the oil evaporation mechanisms inside the whole domain of the cylinder-kit assembly during the four-stroke cycle, using multiple simulation tools and high-performance computing. First, a CASE (Cylinder-kit Analysis System for Engines) 1D model is developed to provide the necessary boundary conditions for the subsequent steps in the chain of simulations. Next, the ring-bore and ring-groove conformability, along with the twist angle variation across the circumference, are investigated by modeling a twisted ring via a 3D ring FEA contact model. The ring twist induces a change in ring location, which in turn changes the cylinder-kit geometry dynamically across the cycle. The dynamically varying geometries are generated using the LINCC (Linking CASE to CFD) program. Finally, a three-dimensional multiphase flow model is developed for the dynamic geometries across the cycle using CONVERGE. The methodology is first applied to a small-bore (50 mm) engine running at 2000 rpm. Next, a CASE 1D model is developed and calibrated via HEEDS across a range of load-speed operating conditions of a Cummins 6-cylinder, 137.02 mm bore, Acadia engine. The 1800 rpm, full-load condition with a positively twisted second ring is selected for the experimental validation of the 3D methodology. A study of the second ring dynamics in the small-bore engine showed the effect of negative ring twist on the three-dimensional fluid flow physics. The oil (liquid oil and oil vapor) transport and combustion gas flow processes through the piston ring pack are compared for the twisted and untwisted geometry configurations. A comparison with the untwisted geometry for this cylinder-kit shows that the negatively twisted second ring resulted in higher blowby but lower reverse blowby and oil consumption. Comparison of the model-predicted oil consumption with the existing literature shows that it is within the reasonable range for typical engines. Comparisons of blowby and of second- and third-land pressures with experimental results from the Cummins Acadia engine showed considerable agreement. The reverse blowby and oil consumption, along with the liquid oil and oil vapor mass fraction distribution patterns across the cycle, are also analyzed. In the later section of this work, surface texture characterization of a novel abradable powder coating (APC) and of the stock piston skirt coating of a Cummins 2.8 L turbo engine is conducted. The surface texture and characteristic properties varying across the piston skirt are obtained and analyzed via a 3D optical profiler and OmniSurf3D software. The engine operating conditions are found through a combination of measurements, testing, and a calibrated GT-Power model. The variable surface properties, along with other geometric, thermodynamic, and material properties, are utilized to build a model in CASE for both APC and stock-coated pistons. The surface texture analysis shows that the APC coating has a unique mushroom-cap-like surface feature and deeper valleys that could potentially be beneficial for lubrication and oil retention. Comparison of different performance parameters from the CASE simulation results shows that APC has the potential to be a suitable candidate for piston skirt coating.
- Title
- OPTIMIZATION OF LARGE SCALE ITERATIVE EIGENSOLVERS
- Creator
- Afibuzzaman, Md
- Date
- 2021
- Collection
- Electronic Theses & Dissertations
- Description
- Sparse matrix computations, in the form of solvers for systems of linear equations, eigenvalue problems, or matrix factorizations, constitute the main kernel in problems from fields as diverse as computational fluid dynamics, quantum many-body problems, machine learning, and graph analytics. Iterative eigensolvers are preferred over direct methods because direct methods are not feasible for industrial-sized matrices. Although dense linear algebra libraries like BLAS, LAPACK, and ScaLAPACK are well established, and vendor-optimized implementations such as Intel MKL or Cray LibSci exist, the same cannot be said for sparse linear algebra, which lags far behind. The main reason for the slow progress in the standardization of sparse linear algebra and library development is that sparse matrices take different forms and have different properties depending on the application area. The situation is worsened on the deep memory hierarchies of modern architectures by low arithmetic intensities and memory-bound computations; minimizing data movement and providing fast access to the matrix are critical in this case. Current technology is driven by deep memory architectures in which capacity increases at the expense of higher latency and lower bandwidth the further one moves from the processor. The key to achieving high performance in sparse matrix computations on deep memory hierarchies is to minimize data movement across the layers of the memory and to overlap data movement with computation. My thesis work contributes toward addressing the algorithmic challenges and developing a computational infrastructure to achieve high performance in scientific applications on both shared-memory and distributed-memory architectures. For this purpose, I started by optimizing a blocked eigensolver and specific computational kernels that use a new storage format. Using this optimization as a building block, we introduce a shared-memory task-parallel framework focused on optimizing entire solvers rather than a specific kernel. Before extending this shared-memory implementation to distributed-memory architectures, I simulated the communication pattern and overheads of a large-scale distributed-memory application, and I then introduced communication tasks into the framework to overlap communication and computation. Additionally, I explored a custom scheduler for the tasks using a graph partitioner. To get acquainted with high-performance computing and parallel libraries, I started my Ph.D. journey by optimizing a DFT code named Sky3D using dense matrix libraries; although there may be no single solution to that problem, I sought an optimized one. While the large distributed-memory application MFDn is the driver project of this thesis, the framework we developed is not confined to MFDn; it can be used for other scientific applications as well. The output of this thesis is the task-parallel HPC infrastructure that we envisioned for both shared- and distributed-memory architectures.
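The communication/computation overlap mentioned above is conventionally built from non-blocking message passing: post sends and receives, do the work that needs no remote data, then wait. A minimal mpi4py sketch of the pattern (a toy ring exchange, not the dissertation's task framework):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

send = np.full(4, rank, dtype='d')        # "halo" data for the neighbor
recv = np.empty(4, dtype='d')

# Post non-blocking communication along a ring of ranks.
reqs = [comm.Isend(send, dest=(rank + 1) % size),
        comm.Irecv(recv, source=(rank - 1) % size)]

# Overlap: compute on purely local data while messages are in flight.
local = np.ones(100_000).sum()

MPI.Request.Waitall(reqs)                 # halo data now available
boundary = local + recv.sum()             # finish work needing remote data
print(f"rank {rank}: {boundary}")
```

In a task-parallel setting, the same idea appears as communication tasks scheduled alongside compute tasks so that the wait is hidden behind useful work.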
- Title
- Validation and application of experimental framework for the study of vocal fatigue
- Creator
- Berardi, Mark Leslie
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
- In recent years, vocal fatigue has been increasingly studied, particularly with application to reducing its impact on schoolteachers and other occupational voice users. However, the concept of vocal fatigue is complex and neither well defined nor well understood. Vocal fatigue seems to be highly individualized and dependent on several underlying factors or concepts. The purpose of this dissertation is to propose, and to support through experimentation, a framework that can identify the factors contributing to vocal fatigue. The main hypothesis is that the change in vocal effort, vocal performance, and/or their interaction under a vocal demand (load) will implicate vocal fatigue. To test this hypothesis, three primary research questions and experiments were developed. For all three experiments, vocal effort was rated using the Borg CR-100 scale, and vocal performance was evaluated with five speech acoustic parameters (fundamental frequency mean and standard deviation, speech level mean and standard deviation, and smoothed cepstral peak prominence). The first research question tests whether perceived vocal effort can be measured reliably and, if so, how vocal performance in terms of vocal intensity changes with a vocal effort goal. Participants performed various speech tasks at cued effort levels from the Borg CR-100 scale. Speech acoustic parameters were calculated and compared across the specific vocal effort levels. Additionally, the test-retest reliability of speech level across the effort levels was measured. Building on that experiment, the second research question asked to what degree vocal performance and vocal effort are related given talker exposure to three equivalent vocal load levels. In this experiment, participants performed speech tasks under three different equivalent vocal load scenarios (communication distance, loudness goal, and background noise); for a given load scenario, participants rated the vocal effort associated with these tasks. Vocal effort ratings and measures of vocal performance were compared across the vocal load levels. The last research question built on the previous two and asked to what degree vocal performance, vocal effort, and/or their interaction change given a vocal load of excess background noise (noise load) over a prolonged speaking task (temporal load). To test this, participants described routes on maps for thirty minutes in the presence of loud (75 dBA) background noise. Vocal effort ratings and measures of vocal performance were compared throughout the vocal loading task. The results indicate that elicited vocal effort levels from the Borg CR-100 scale are distinct in vocal performance and reliable across participants. Additionally, a relationship between changes in vocal effort and vocal performance across the various vocal load levels was quantified. Finally, these findings support the individual nature of the complex relationship between vocal fatigue, vocal effort, and vocal performance due to vocal loads (via cluster and subgroup analysis); the theoretical framework captures this complexity and provides insights into these relationships. Future vocal fatigue research should benefit from using the framework as an underlying model of these relationships.
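Two of the five performance measures above can be sketched directly from a recording. The sketch below (librosa, hypothetical file name, uncalibrated dB rather than the calibrated SPL a sound level meter would give) computes speech level mean/SD and fundamental frequency mean/SD, leaving out smoothed cepstral peak prominence:

```python
import numpy as np
import librosa

y, sr = librosa.load("map_task.wav", sr=None)    # hypothetical recording

# Speech level: frame RMS in dB relative to full scale (not calibrated).
rms = librosa.feature.rms(y=y)[0]
level_db = 20 * np.log10(rms + 1e-12)

# Fundamental frequency via pYIN; unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=65, fmax=400, sr=sr)

print(f"level: {level_db.mean():.1f} +/- {level_db.std():.1f} dB")
print(f"f0: {np.nanmean(f0):.1f} +/- {np.nanstd(f0):.1f} Hz")
```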
- Title
- The role of hemodynamics on intraluminal thrombus accumulation and abdominal aortic aneurysm expansion : a longitudinal patient specific study
- Creator
- Zambrano, Byron A.
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
- Abdominal aortic aneurysm (AAA), the ongoing dilation of the aorta at the abdominal level, is a cardiovascular disease that affects a large part of the elderly population. Among the factors affecting AAA disease, hemodynamic forces and intraluminal thrombus (ILT) are suggested to play important roles. Despite the effort made to understand these roles, much remains to be learned. This suggests a need to better understand the relationships among three factors, namely hemodynamics, ILT accumulation, and AAA expansion, especially using patient-specific information from AAA patients at different times throughout the progression of the disease. Hence, this study used 59 computed tomography (CT) scans from longitudinal studies of 14 different AAA patients to analyze the relationships among them. Various hemodynamic variables were obtained by performing computational fluid dynamics (CFD) and a Lagrangian particle method on patient-specific lumen volumes of each AAA at each scan; ILT accumulation was estimated by mapping changes in ILT thickness between two consecutive AAA scans, and ILT accumulation and AAA expansion rates were estimated from changes in ILT and AAA volume, respectively. Ultimately, the relationship between local values of the hemodynamic parameters and ILT thickness change was tested on each scan of each patient using Pearson correlation coefficients. Results showed that, while low wall shear stress (WSS) was observed at regions where ILT accumulated, the ILT accumulation rate matched the aneurysm expansion rate (R² = 0.738). Comparison between AAAs with and without thrombus showed that aneurysms with ILT recorded lower values of WSS and higher AAA expansion rates than those without thrombus. In fact, correlation analysis showed that, among all the local hemodynamic parameters tested, WSS was inversely correlated with ILT thickness change in approximately half of the scans from all AAAs tested (52.5% of n = 40 scans). Vortical structures were also studied in all AAAs (with and without ILT). Results from this analysis showed that in aneurysms that developed thick ILTs, vortices consistently dissipated near zones of ILT growth during the diastolic phase. In these AAAs, the activation levels of platelets exposed to these vortices were estimated, and none of the highest activation levels recorded in any AAA tested reached or exceeded the proposed activation threshold. These findings suggest that while vortical structures might be important in convecting and concentrating platelets and other main coagulation species (e.g. thrombin) in regions where ILT growth is observed, these vortices might not be responsible for activating platelets. The findings also suggest that, regardless of the platelet activation pathway, low values of WSS might promote the formation of thrombus, and they support the idea that increasing WSS levels may prevent ILT accumulation.
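The per-scan statistical test is a pointwise Pearson correlation between a local hemodynamic quantity and the local ILT thickness change mapped between consecutive scans. A minimal sketch on synthetic surface-point data (values hypothetical, constructed to show the inverse WSS relationship the study reports):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)

# Hypothetical per-surface-point values for one scan pair.
wss = rng.uniform(0.1, 2.0, 500)                   # wall shear stress (Pa)
d_ilt = 1.0 - 0.4 * wss + rng.normal(0, 0.2, 500)  # ILT thickness change (mm)

r, p = pearsonr(wss, d_ilt)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")        # negative r, as reported
```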
- Title
- CONTRIBUTION OF SOIL PORES TO THE PROCESSING AND PROTECTION OF SOIL CARBON AT MICRO-SCALE
- Creator
- Quigley, Michelle Yvonne
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
- Soil carbon has the potential to increase crop yield and mitigate climate change. As the largest terrestrial carbon stock, gains and losses of soil carbon can have a great impact on atmospheric CO2 concentrations. Additionally, many soil properties beneficial for agricultural sustainability are tied to soil carbon. This makes understanding the mechanics of soil carbon vital to accurate climate change modeling and management recommendations. However, current soil carbon models, which rely on bulk characteristics, can vary widely in their results, and current recommendations for improving soil carbon do not work in all circumstances. Micro-scale processes, the scale at which carbon protection occurs, are currently not well understood. Improving the understanding of micro-scale processes would improve both climate models and management recommendations. Carbon processes at the micro-scale are believed to occur in diverse microenvironments. However, it is soil pores that, through the transport of gases, water, nutrients, and microorganisms, may ultimately control the formation of these microenvironments. Therefore, understanding the relationship between soil pores and carbon is potentially vital to understanding micro-scale carbon processes. To understand this relationship, I employed computed microtomography (μCT) to obtain pore information and stable carbon isotopes to track carbon. I investigated the spatial variability of soil carbon within the soil matrix under different soil managements, and how pores of different origin contributed to this variability, to explore the effect of management and pore origin on the creation of microenvironments. Then I investigated the effect of pore size distribution on carbon addition during the growth of cereal rye (Secale cereale L.) and on carbon usage during a subsequent incubation, using natural-abundance stable carbon isotopes. I also investigated the role of management history in the effect of pore size distribution on new carbon addition and usage, using enriched stable carbon isotopes. I found that managements that build carbon have higher spatial variability of grayscale values in μCT images than managements that lose carbon. This variability is related to the abundance of biological pores, whose larger range of influence compared to mechanical pores (123 μm vs. 30 μm) has a greater impact on variability. The influence of biological and mechanical pores on adjacent carbon concentrations was found to be independent of management. Pores in the 15-40 μm range were associated with carbon protection after incubation, matching previously reported results and indicating a universal mechanism for carbon protection, possibly related to fungi, in these pores. From both the natural-abundance and enriched stable carbon isotope studies, I found that 40-90 μm pores are associated with large gains of new carbon during rye growth, but with large losses of new carbon in the subsequent incubations. I found important relationships between pore origin, pore size, and carbon; specifically, biological pores exert more influence on the carbon concentrations adjacent to them than mechanical pores do. A technique to measure this influence, using osmium staining of organic matter and grayscale gradients of images, was developed. I found that 40-90 μm pores are important avenues of carbon addition but are also associated with carbon losses. However, the reasons for these easy gains and losses are still unclear and require further research; they are believed to be associated with small plant roots.
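The "range of influence" of a pore can be quantified by binning image grayscale by distance to the nearest pore voxel, which is one way to frame the grayscale-gradient technique described above. A minimal 2D sketch on a synthetic image (the real analysis ran on 3D μCT volumes with osmium-stained organic matter):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
img = rng.normal(120.0, 10.0, (200, 200))   # synthetic grayscale image
pores = np.zeros(img.shape, dtype=bool)
pores[90:110, 90:110] = True                # one synthetic pore

# Distance from every voxel to the nearest pore voxel.
dist = ndimage.distance_transform_edt(~pores)

# Mean grayscale in concentric distance bins (voxels; multiply by the
# voxel size to express the profile in micrometers).
edges = np.arange(0, 65, 5)
idx = np.digitize(dist.ravel(), edges)
profile = [img.ravel()[idx == i].mean() for i in range(1, len(edges))]
print(np.round(profile, 1))
```

On a real stained image the profile would decay away from the pore, and the distance at which it flattens is the pore's range of influence (the 123 μm vs. 30 μm contrast reported above).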
- Title
- Harnessing the power of graphics processing units to accelerate computational chemistry
- Creator
- Miao, Yipu
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
- Evaluation of electron repulsion integrals (ERIs) and their derivatives is the limiting factor in self-consistent-field (SCF) and density functional theory (DFT) calculations. Therefore, calculating these quantities on graphics processing units (GPUs) can significantly accelerate quantum chemical calculations. Recurrence relations, one of the fastest ERI evaluation algorithms currently available, are used to compute the ERIs. A direct-SCF scheme to assemble the Fock matrix and gradient efficiently is presented, wherein ERIs are evaluated on the fly to avoid CPU-GPU data transfer, a well-known architectural bottleneck in GPU-specific computation. Machine-generated code is utilized to calculate the different ERI types efficiently. However, only s-, p-, and d-type ERIs and s- and p-type derivatives can be executed on GPUs using the current version of CUDA and NVIDIA GPUs; hence, we developed an algorithm to compute f-type ERIs and d-type ERI derivatives on GPUs. Our benchmarks show that GPU-enabled ERI and ERI derivative computation yields speedups of 10-100 times relative to traditional CPU execution. An accuracy analysis using double-precision calculations demonstrates that the accuracy is satisfactory for most applications. Besides ab initio quantum chemistry methods, GPU programming can be applied to a number of computational chemistry applications, for example the weighted histogram analysis method (WHAM), a technique to compute potentials of mean force. We present an implementation of multidimensional WHAM on GPUs, which significantly accelerates its computational performance. Our test cases, which simulate two-dimensional free energy surfaces, yielded speedups of up to 1000 times in double precision. Moreover, speedups of 2100 times can be achieved when single precision is used, which introduces errors of less than 0.2 kcal/mol. These applications of GPU computing in computational chemistry can significantly benefit the whole computational chemistry community.
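For reference, the WHAM calculation being accelerated is a small self-consistent iteration, and the dense, uniform array operations in the loop below are exactly the kind of arithmetic that maps well onto GPUs. This is a minimal 1D sketch on synthetic harmonic-bias data, not the dissertation's multidimensional GPU implementation:

```python
import numpy as np

kT = 0.593                                  # kcal/mol at ~298 K
beta = 1.0 / kT
centers = np.linspace(-2, 2, 9)             # umbrella window centers
k_bias = 10.0                               # spring constant (kcal/mol/A^2)
edges = np.linspace(-3, 3, 61)
x = 0.5 * (edges[:-1] + edges[1:])          # bin midpoints

rng = np.random.default_rng(5)
hists = np.array([np.histogram(rng.normal(c, 0.25, 5000), bins=edges)[0]
                  for c in centers])        # n_i(x): biased samples
N = hists.sum(axis=1)                       # samples per window
U = 0.5 * k_bias * (x[None, :] - centers[:, None]) ** 2  # bias potentials

f = np.zeros(len(centers))                  # window free energies
for _ in range(500):                        # self-consistent WHAM equations
    denom = (N[:, None] * np.exp(beta * (f[:, None] - U))).sum(axis=0)
    P = hists.sum(axis=0) / denom           # unbiased distribution
    f = -kT * np.log((P[None, :] * np.exp(-beta * U)).sum(axis=1))
    f -= f[0]                               # fix the arbitrary offset

pmf = -kT * np.log(np.clip(P, 1e-15, None))  # potential of mean force
print(np.round(pmf - pmf.min(), 2))
```

On a GPU, the histogram reductions and the two dense sums inside the loop become large element-wise kernels and reductions, which is where the reported speedups come from.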