Search results (1 - 7 of 7)
- Title
- Reprogramming to the nervous system : a computational and candidate gene approach
- Creator
- Alicea, Bradly John
- Date
- 2013
- Collection
- Electronic Theses & Dissertations
- Description
The creation of stem-like cells, neuronal cells, and skeletal muscle fibers from a generic somatic precursor phenotype has many potential applications, ranging from cell therapy to disease modeling. The enabling methodology for these applications is known as direct cellular reprogramming. While the biological underpinnings of cellular reprogramming go back to the work of Gurdon and other developmental biologists, the direct approach is a rather recent development. Therefore, our understanding of the reprogramming process is largely based on isolated findings and interesting results; a true synthesis, particularly from a systems perspective, is lacking. In this dissertation, I will attempt to build toward an intellectual synthesis of direct reprogramming by critically examining four types of phenotypic conversion that result in the production of nervous system components: induced pluripotency (iPS), induced neuronal (iN), induced skeletal muscle (iSM), and induced cardiomyocyte (iCM). Since potential applications range from tools for basic science to disease modeling and bionic technologies, a common context is essential.

This intellectual synthesis will be defined through several research endeavors. The first investigation introduces a set of experiments in which multiple fibroblast cell lines are converted to two terminal phenotypes: iN and iSM. The efficiency and infectability of cells subjected to each reprogramming regimen are then compared statistically and quantitatively. This set of experiments also resulted in the development of novel analytical methods for measuring reprogramming efficiency and infectability. The second investigation features a critical review and statistical analysis of iPS reprogramming, specifically as compared to indirect reprogramming (SCNT-ES) and related stem-like cells. The third investigation is a review and theoretical synthesis that stakes out new directions in our understanding of the direct reprogramming process, including recent computational modeling endeavors and results from the iPS, iN, and iCM experiments. To further unify the outcomes of these studies, additional results related to Chapter 2 and directions for future research are presented. The additional results allow for further interpretation of, and insight into, the role of diversity in direct reprogramming. These future directions include both experimental approaches (a technique called mechanism disruption) and computational approaches (preliminary results for an agent-based, population-level approximation of direct reprogramming; see the sketch below). The insights presented here will hopefully provide a framework for theoretical development and a guide for traditional biologists and systems biologists alike.
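As a toy illustration of what an agent-based, population-level approximation of direct reprogramming might look like, the sketch below simulates a heterogeneous cell population in which each cell converts to the target phenotype with its own per-step probability. All parameters are hypothetical and are not drawn from the dissertation.

```python
import random

def simulate_reprogramming(n_cells=1000, steps=30, seed=42):
    """Toy agent-based model: each cell has its own per-step conversion
    probability (population heterogeneity); converted cells stay converted."""
    rng = random.Random(seed)
    # Heterogeneous per-cell conversion probabilities (hypothetical range).
    rates = [rng.uniform(0.001, 0.05) for _ in range(n_cells)]
    converted = [False] * n_cells
    trajectory = []
    for _ in range(steps):
        for i in range(n_cells):
            if not converted[i] and rng.random() < rates[i]:
                converted[i] = True
        # Record the fraction of the population converted so far.
        trajectory.append(sum(converted) / n_cells)
    return trajectory

if __name__ == "__main__":
    curve = simulate_reprogramming()
    print(f"fraction converted after 30 steps: {curve[-1]:.3f}")
```

In a model like this, cell-line diversity enters through the distribution of per-cell rates, which is one way to study how population heterogeneity shapes bulk reprogramming efficiency.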
- Title
- Computational identification and analysis of non-coding RNAs in large-scale biological data
- Creator
- Lei, Jikai
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
Non-protein-coding RNAs (ncRNAs) are RNA molecules that function directly at the level of RNA without being translated into protein. They play important biological roles in all three domains of life: Eukarya, Bacteria, and Archaea. To understand the working mechanisms and functions of ncRNAs in various species, a fundamental step is to identify both known and novel ncRNAs from large-scale biological data.

Large-scale genomic data includes both genomic sequence data and NGS sequencing data, and both types provide great opportunities for identifying ncRNAs. For genomic sequence data, many ncRNA identification tools based on comparative sequence analysis have been developed. These methods work well for ncRNAs with strong sequence similarity, but they are not well suited for detecting remotely homologous ncRNAs. Next-generation sequencing (NGS), while it opens a new horizon for annotating and understanding known and novel ncRNAs, also introduces many challenges. First, existing genomic sequence search tools cannot be readily applied to NGS data because NGS technology produces short, fragmentary reads. Second, most NGS data sets are large-scale, and existing algorithms are infeasible on them because of high resource requirements. Third, metagenomic sequencing, which uses NGS technology to sequence uncultured, complex microbial communities directly from their natural habitats, further aggravates these difficulties. The massive amounts of genomic sequence data and NGS data thus call for efficient algorithms and tools for ncRNA annotation.

In this dissertation, I present three computational methods and tools to efficiently identify ncRNAs from large-scale biological data. Chain-RNA is a tool that combines sequence similarity and structure similarity to locate cross-species conserved RNA elements with low sequence similarity in genomic sequence data. It achieves significantly higher sensitivity in identifying remotely conserved ncRNA elements than sequence-based methods such as BLAST, and is much faster than existing structural alignment tools. miR-PREFeR (miRNA PREdiction From small RNA-Seq data) uses the expression patterns of miRNAs and follows the criteria for plant microRNA annotation to accurately predict plant miRNAs from one or more small RNA-Seq data samples. It is sensitive, accurate, fast, and has a low memory footprint. metaCRISPR focuses on identifying Clustered Regularly Interspaced Short Palindromic Repeats (CRISPRs) in large-scale metagenomic sequencing data. It uses a k-mer hash table to efficiently detect reads that belong to CRISPRs in the raw metagenomic data set; overlap-graph-based clustering is then conducted on the reduced data set to separate different CRISPRs, and a set of graph-based algorithms is used to assemble and recover CRISPRs from the clusters.
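As a rough picture of the k-mer hash-table filtering step, the sketch below counts k-mers across all reads and keeps reads containing k-mers above a repeat-like frequency threshold; k and the threshold are hypothetical, and metaCRISPR's actual filter is more involved than this.

```python
from collections import Counter

def kmers(read, k):
    """Yield all k-mers of a read."""
    return (read[i:i + k] for i in range(len(read) - k + 1))

def filter_repeat_reads(reads, k=23, min_count=5):
    """Keep reads carrying high-frequency k-mers, a signal of CRISPR-like
    repeats in metagenomic data (simplified stand-in for metaCRISPR's filter)."""
    counts = Counter()
    for read in reads:
        counts.update(kmers(read, k))
    return [r for r in reads
            if any(counts[km] >= min_count for km in kmers(r, k))]
```

Because CRISPR repeats recur across many reads, their k-mers accumulate high counts, so this kind of filter shrinks the data set before the more expensive overlap-graph clustering.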
- Title
- Novel computational approaches to investigate microbial diversity
- Creator
- Zhang, Qingpeng
- Date
- 2015
- Collection
- Electronic Theses & Dissertations
- Description
Species diversity is an important measurement of ecological communities. Scientists believe that there is a strong relationship between species diversity and ecosystem processes. However, efforts to investigate microbial diversity using whole-genome shotgun read data are still scarce. With novel applications of data structures and the development of novel algorithms, we first developed an efficient k-mer counting approach, along with approaches that enable scalable streaming analysis of large, error-prone short-read shotgun data sets. Building on these efforts, we then developed a statistical framework that allows scalable diversity analysis of large, complex metagenomes without the need for assembly or reference sequences. This method is evaluated on multiple large metagenomes from different environments, such as seawater, the human microbiome, and soil. Given the rapid growth of sequencing data, this method is promising for analyzing highly diverse samples with relatively low computational requirements. Further, as the method does not depend on reference genomes, it also provides opportunities to tackle the large amounts of unknowns found in metagenomic datasets.
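The abstract does not name the data structure behind the efficient k-mer counting approach; a count-min sketch is one common way to realize memory-efficient, streaming k-mer counting of this kind, illustrated below (table width, depth, and hashing scheme are hypothetical).

```python
import hashlib

class CountMinSketch:
    """Approximate k-mer counter in fixed memory: counts are never
    undercounted, and the overcount shrinks as the tables grow."""

    def __init__(self, width=1 << 20, depth=4):
        self.width, self.depth = width, depth
        self.tables = [[0] * width for _ in range(depth)]

    def _slots(self, kmer):
        # One independent hash slot per table, derived via salted BLAKE2b.
        for i in range(self.depth):
            h = hashlib.blake2b(kmer.encode(), salt=bytes([i]) * 8).digest()
            yield i, int.from_bytes(h[:8], "big") % self.width

    def add(self, kmer):
        for i, j in self._slots(kmer):
            self.tables[i][j] += 1

    def count(self, kmer):
        # The minimum over tables bounds the true count from above.
        return min(self.tables[i][j] for i, j in self._slots(kmer))
```

Streaming each read's k-mers through add() keeps memory fixed regardless of how many distinct k-mers appear, which is what makes analyses of large, error-prone shotgun data sets tractable.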
- Title
- Studying the effects of sampling on the efficiency and accuracy of k-mer indexes
- Creator
- Almutairy, Meznah
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
Show more"Searching for local alignments is a critical step in many bioinformatics applications and pipelines. This search process is often sped up by finding shared exact matches of a minimum length. Depending on the application, the shared exact matches are extended to maximal exact matches, and these are often extended further to local alignments by allowing mismatches and/or gaps. In this dissertation, we focus on searching for all maximal exact matches (MEMs) and all highly similar local alignments (HSLAs) between a query sequence and a database of sequences. We focus on finding MEMs and HSLAs over nucleotide sequences. One of the most common ways to search for all MEMs and HSLAs is to use a k-mer index such as BLAST. A major problem with k-mer indexes is the space required to store the lists of all occurrences of all k-mers in the database. One method for reducing the space needed, and also query time, is sampling where only some k-mer occurrences are stored. We classify sampling strategies used to create k-mer indexes in two ways: how they choose k-mers and how many k-mers they choose. The k-mers can be chosen in two ways: fixed sampling and minimizer sampling. A sampling method might select enough k-mers such that the k-mer index reaches full accuracy. We refer to this sampling as hard sampling. Alternatively, a sampling method might select fewer k-mers to reduce the index size even further but the index does not guarantee full accuracy. We refer to this sampling as soft sampling. In the current literature, no systematic study has been done to compare the different sampling methods and their relative benefits/weakness. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. Also, most previous work uses hard sampling, in which all similar sequences are guaranteed to be found. In contrast, we study soft sampling, which further reduces the k-mer index at a cost of decreasing query accuracy. We systematically compare fixed and minimizer sampling to find all MEMs between large genomes such as the human genome and the mouse genome. We also study soft sampling to find all HSLAs using the NCBI BLAST tool with the human genome and human ESTs. We use BLAST, since it is the most widely used tool to search for HSLAs. We compared the sampling methods with respect to index size, query time, and query accuracy. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. When identifying HSLAs, we find that soft sampling significantly reduces both index size and query time with relatively small losses in query accuracy. The results demonstrate that soft sampling is a simple but effective strategy for performing efficient searches for HSLAs. 
We also provide a new model for sampling with BLAST that predicts empirical retention rates with reasonable accuracy."--Pages ii-iii.
Show less
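To make the two selection schemes concrete, here is a minimal sketch of fixed and minimizer sampling as described above (function names and the demo sequence are illustrative, not from the dissertation).

```python
def fixed_sample(seq, k, w):
    """Fixed sampling: keep the k-mers starting at every w-th position."""
    return [(i, seq[i:i + k]) for i in range(0, len(seq) - k + 1, w)]

def minimizer_sample(seq, k, w):
    """Minimizer sampling: in each window of w consecutive k-mers, keep the
    lexicographically smallest one (ties broken by leftmost position)."""
    n = len(seq) - k + 1  # number of k-mer start positions
    picked = set()
    for start in range(n - w + 1):
        kmer, i = min((seq[i:i + k], i) for i in range(start, start + w))
        picked.add((i, kmer))
    return sorted(picked)

if __name__ == "__main__":
    seq = "ACGTACGTTGCAACGT"
    print("fixed:    ", fixed_sample(seq, k=5, w=4))
    print("minimizer:", minimizer_sample(seq, k=5, w=4))
```

Because nearby windows usually share the same minimum, minimizer sampling stores far fewer than one k-mer per window, and the same selection rule can be applied to the query, which is the usual argument for its faster query times.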
- Title
- Algebraic topology and machine learning for biomolecular modeling
- Creator
- Cang, Zixuan
- Date
- 2018
- Collection
- Electronic Theses & Dissertations
- Description
Data is expanding at an unprecedented speed in both quantity and size. Topological data analysis provides excellent tools for analyzing high-dimensional and highly complex data. Inspired by topological data analysis's capacity for robust, multiscale characterization of data, and motivated by the demand for practical predictive tools in computational biology and biomedical research, this dissertation extends the capability of persistent homology toward quantitative and predictive data analysis tools, with an emphasis on biomolecular systems. Although persistent homology is almost parameter-free, careful treatment is still needed to arrive at practically useful prediction models for realistic systems. This dissertation carefully assesses the representability of persistent homology for biomolecular systems and introduces a collection of characterization tools for both macromolecules and small molecules, focusing on intra- and inter-molecular interactions, chemical complexities, electrostatics, and geometry. The representations are then coupled with deep learning and machine learning methods for several problems in drug design and biophysical research.

In real-world applications, data often come with heterogeneous dimensions and components. For example, in addition to location, the atoms of a biomolecule can also be labeled with chemical types, partial charges, and atomic radii. While persistent homology is powerful for analyzing the geometry of data, it lacks the ability to handle such non-geometric information. Based on cohomology, we introduce a method that attaches the non-geometric information to the topological invariants in persistent homology analysis. This method is useful not only for biomolecules but also for general situations where the data carries both geometric and non-geometric information.

In addition to describing biomolecular systems as a static frame, we are often interested in the dynamics of the systems. An efficient approach is to assign an oscillator to each atom and study the coupled dynamical system induced by atomic interactions. To this end, we propose a persistent-homology-based method for analyzing the resulting trajectories of the coupled dynamical system.

The methods developed in this dissertation have been applied to several problems, namely prediction of protein stability change upon mutation, protein-ligand binding affinity prediction, virtual screening, and protein flexibility analysis. The tools have shown top performance in both commonly used validation benchmarks and community-wide blind prediction challenges in drug design.
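As an illustration of the underlying machinery, the sketch below computes persistence diagrams for a synthetic point cloud standing in for atomic coordinates, using the open-source ripser package; this is a generic example, not the dissertation's element-specific pipeline.

```python
import numpy as np
from ripser import ripser  # pip install ripser

# Stand-in for heavy-atom coordinates of a biomolecule (synthetic, in angstroms).
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 20.0, size=(120, 3))

# Persistence diagrams of the Vietoris-Rips filtration up to dimension 2:
# H0 tracks connected components, H1 loops, H2 voids.
dgms = ripser(coords, maxdim=2)["dgms"]

for dim, dgm in enumerate(dgms):
    # Long bars (death - birth) correspond to robust topological features.
    lifetimes = dgm[:, 1] - dgm[:, 0]
    finite = lifetimes[np.isfinite(lifetimes)]  # H0 has one infinite bar
    longest = finite.max() if finite.size else float("nan")
    print(f"H{dim}: {len(dgm)} features, longest finite bar {longest:.2f}")
```

Machine learning features are then typically built by binning or vectorizing these bars into fixed-length descriptors, which is the general pattern the dissertation's prediction models follow.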
- Title
- The integration of computational methods and nonlinear multiphoton multimodal microscopy imaging for the analysis of unstained human and animal tissues
- Creator
- Murashova, Gabrielle Alyse
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
Nonlinear multiphoton multimodal microscopy (NMMM), as used in biological imaging, is a technique that explores the combined use of different multiphoton signals, or modalities, to achieve contrast in stained and unstained biological tissues. NMMM relies on nonlinear laser-matter interactions (LMIs), which involve multiple photons at once (multiphoton processes, MPs). The statistical probability of multiple photons arriving at a focal point at the same time depends on the two-photon absorption (TPA) cross-section of the molecule being studied and is incredibly difficult to satisfy using typical incoherent light, say from a light bulb. Therefore, the stimulated emission of coherent photons by pulsed lasers is used for NMMM applications in biomedical imaging and diagnostics.

In this dissertation, I hypothesized that, owing to the near-IR wavelength (1070 nm) of the Ytterbium (Yb)-fiber laser, the four MPs generated by focusing this ultrafast laser (two-photon excited fluorescence (2PEF), second harmonic generation (SHG), three-photon excited fluorescence (3PEF), and third harmonic generation (THG)) would provide contrast to unstained tissues sufficient to augment current histological staining methods used in disease diagnostics. Additionally, I hypothesized that these NMMM images (NMMMIs) could benefit from computational methods to accurately separate their overlapping endogenous MP signals, as well as to train a neural network image classifier to detect neoplastic, inflammatory, and healthy regions in the human oral mucosa.

Chapter II of this dissertation explores the use of NMMM to study the effects of storage on donated red blood cells (RBCs) using non-invasive 2PEF and THG without breaching the blood storage bag. In contrast to the lack of RBC fluorescence previously reported, we show that with two-photon (2P) excitation from an 800 nm source and three-photon (3P) excitation from a 1060 nm source, there was sufficient fluorescence signal from hemoglobin as well as from other endogenous fluorophores.

Chapter III employs NMMM to establish the endogenous MP signals present in healthy excised, unstained mouse and Cynomolgus monkey retinas using 2PEF, 3PEF, SHG, and THG. We show the first epi-direction-detected cross-section and depth-resolved images of unstained isolated retinas obtained using NMMM with an ultrafast fiber laser centered at 1070 nm and a 303038 fs pulse. Two spectrally and temporally distinct regions were shown: one from the nerve fiber layer (NFL) to the inner receptor layer (IRL), and one from the retinal pigmented epithelium (RPE) and choroid.

Chapter IV focuses on the use of minimal NMMM signals from a 1070 nm Yb-fiber laser to match and augment H&E-like contrast in human oral squamous cell carcinoma (OSCC) biopsies. In addition to performing depth-resolved (DR) imaging directly from the paraffin block and matching H&E-like contrast, we showed how the combination of characteristic inflammatory 2PEF signals undetectable in H&E-stained tissues and SHG signals from stromal collagen can be used to analytically distinguish healthy, mildly and severely inflamed, and neoplastic regions and to determine neoplastic margins in a three-dimensional (3D) manner.

Chapter V focuses on the use of computational methods to solve the inverse problem posed by the overlapping endogenous fluorescence and harmonic signals within mouse retinas. A least-squares fitting algorithm was most effective at accurately assigning photons in the NMMMIs to their sources. Unlike commercial software, this approach permits the use of custom reference spectra from endogenous signal sources rather than from fluorescent tags and stains.

Finally, Chapter VI explores the use of the OSCC images to train a neural network image classifier, with the overall goal of classifying NMMMIs into three categories: healthy, inflammatory, and neoplastic. This work determined that even with a small dataset (< 215 images), the features present in NMMMIs, combined with tiling and transfer learning, can train an image classifier to recognize healthy, inflammatory, and neoplastic OSCC regions with 70% accuracy.

My research shows the potential of using NMMM in tandem with computational methods to augment current diagnostic protocols, potentially improving patient outcomes and decreasing pathology department costs. These results should facilitate the continued study and development of NMMM toward eventual clinical application.
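Chapter V's least-squares assignment can be pictured as non-negative spectral unmixing: given reference emission spectra for each endogenous source, solve for the non-negative weights that best reproduce a measured pixel spectrum. Everything below (spectral shapes, wavelength grid, weights) is a synthetic illustration, not the dissertation's measured data.

```python
import numpy as np
from scipy.optimize import nnls

# Wavelength grid for a 32-bin emission spectrum (synthetic).
wavelengths = np.linspace(400, 700, 32)

def gaussian(center, width):
    """Idealized emission band used as a stand-in reference spectrum."""
    return np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

# One column per assumed endogenous source, e.g. 2PEF, SHG, THG (placeholders).
A = np.column_stack([gaussian(450, 30), gaussian(535, 15), gaussian(620, 40)])

# A "measured" pixel spectrum: a noisy non-negative mix of the references.
rng = np.random.default_rng(1)
true_weights = np.array([0.7, 0.2, 0.1])
b = A @ true_weights + rng.normal(0, 0.01, size=wavelengths.size)

# Non-negative least squares keeps every source weight physical (>= 0).
weights, residual = nnls(A, b)
print("recovered weights:", np.round(weights, 3), "residual:", round(residual, 4))
```

Running the same solve per pixel assigns detected photons to their most plausible endogenous sources, which is the unmixing role the abstract describes.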
- Title
- Postmortem microbiome computational methods and applications
- Creator
- Kaszubinski, Sierra Frances
- Date
- 2020
- Collection
- Electronic Theses & Dissertations
- Description
Microbial communities have potential evidential utility for forensic applications. However, bioinformatic analysis of high-throughput sequencing data varies widely among laboratories and can potentially affect downstream forensic analyses and data interpretations. To illustrate the importance of standardizing methodology, we compared analyses of postmortem microbiome samples using several bioinformatic pipelines while varying minimum library size (the minimum number of sequences per sample) and sample size. Using the same input sequence data, we found that pipeline choice significantly affected the resulting microbial communities, and that increasing minimum library size and sample size increased the number of low-abundance and infrequent taxa detected. These results show that bioinformatic pipeline and parameter choices significantly affect the resulting microbial communities, which is important for forensic applications.

One such application is the potential for the postmortem microbiome to reflect manner of death (MOD) and cause of death (COD). Microbial community metrics have linked the postmortem microbiome with antemortem health status. To further explore this association, we demonstrated that postmortem microbiomes could differentiate beta-dispersion among manners and causes of death (M/COD), especially for cardiovascular disease and drug-related deaths. Beta-dispersion associated with M/COD has potential forensic utility for aiding certifiers of death by providing additional evidence for death determination. Supplemental files, including tables of raw data and additional statistical tests, are available online and are denoted in the text as table 'S'.
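Beta-dispersion measures how widely the samples in a group scatter around the group centroid in community-composition space (as in PERMDISP, or vegan's betadisper). The sketch below illustrates the idea in simplified form, computing dispersion directly on abundance profiles rather than on PCoA axes as the canonical method does; the example data and group labels are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import braycurtis

def group_dispersion(counts, labels):
    """Per-group beta-dispersion, simplified: mean Bray-Curtis distance of
    each sample to its group's mean abundance profile. (The canonical
    PERMDISP/betadisper computes distances to centroids on PCoA axes.)"""
    dispersion = {}
    for group in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == group]
        members = counts[idx]
        centroid = members.mean(axis=0)
        dispersion[group] = float(
            np.mean([braycurtis(row, centroid) for row in members]))
    return dispersion

if __name__ == "__main__":
    # Rows are samples, columns are taxa (toy abundance table).
    counts = np.array([[10, 0, 5], [8, 1, 6], [0, 9, 2], [1, 11, 1]], float)
    labels = ["cardio", "cardio", "drug", "drug"]
    print(group_dispersion(counts, labels))
```

Differences in these per-group spreads, rather than in the group centroids themselves, are the signal the abstract reports as distinguishing certain manners and causes of death.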