Introduction
The microbiome is formed from the ecological communities of microorganisms that dominate the living world. Bacteria can now be identified through the use of next generation sequencing applied at several levels. Shotgun sequencing of all bacteria in a sample delivers knowledge of all the genes present. Here we will only be interested in the identification and quantification of individual taxa (or species) through a ‘fingerprint gene’ called 16S rRNA, which is present in all bacteria. This gene presents several variable regions which can be used to identify the different taxa.
Previous standard workflows depended on clustering all 16S rRNA sequences (generated by next generation amplicon sequencing) that occur within a 97% radius of similarity and then assigning these to ‘Operational Taxonomic Units’ (OTUs) from reference trees 1, 2. These approaches do not make use of all the data; in particular, sequence quality scores and the statistical information available in the reads were not incorporated into the assignments. In contrast, the de novo read counts used here are constructed by incorporating both the quality scores and sequence frequencies in a probabilistic noise model for nucleotide transitions. For more details on the algorithmic implementation of this step see 3.
After filtering the sequences and removing the chimeras, the data are compared to a standard database of bacteria and labeled. In this workflow, we have used the labeled sequences to build a de novo phylogenetic tree with the phangorn package.
The key step in the sequence analysis is the manner in which reads are denoised and assembled into groups we have chosen to call RSVs (Ribosomal Sequence Variants) instead of the traditional OTUs.
This article describes a computational workflow for performing denoising, filtering, data transformations, visualization, supervised learning analyses, community network tests, hierarchical testing and linear models. We provide all the code and give several examples of different types of analyses and use cases. There are often many different objectives in experiments involving microbiome data and we will only give a flavor for what could be possible once the data has been imported into R. In addition, the code can be easily adapted to accommodate batch effects, covariates and multiple experimental factors.
The workflow is based on software packages from the open-source Bioconductor project 4. We describe a complete project pipeline, from the denoising and identification of reads input as raw fastq files through to statistical testing and multivariate analysis of the resulting community data.
Methods
Amplicon bioinformatics: from raw reads to tables
This section demonstrates the “full stack” of amplicon bioinformatics: construction of the sample-by-sequence feature table from the raw reads, assignment of taxonomy and creation of the phylogenetic tree relating the sample sequences.
First we load the necessary packages.
The data we will process here are highly-overlapping Illumina MiSeq 2×250 amplicon sequences from the V4 region of the 16S gene 5. These 360 fecal samples were collected from 12 mice longitudinally over the first year of life, to investigate the development and stabilization of the murine microbiome 6. These data can be downloaded from the following location: http://www.mothur.org/MiSeqDevelopmentData/StabilityNoMetaG.tar.
Trim and filter
We begin by filtering out low-quality sequencing reads and trimming the reads to a consistent length. While generally recommended filtering and trimming parameters serve as a starting point, no two datasets are identical and therefore it is always worth inspecting the quality of the data before proceeding.
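As a concrete starting point, quality profiles like those in Figure 1 can be generated with dada2's plotQualityProfile function. This is a minimal sketch; the directory name below is a hypothetical location for the unpacked fastq files from the archive linked above.

```r
library(dada2)

miseq_path <- "MiSeqData"   # hypothetical directory holding the unpacked fastq files
fnFs <- sort(list.files(miseq_path, pattern = "_R1.fastq", full.names = TRUE))
fnRs <- sort(list.files(miseq_path, pattern = "_R2.fastq", full.names = TRUE))

# Inspect quality profiles for a couple of forward and reverse read files (cf. Figure 1)
plotQualityProfile(fnFs[1:2])
plotQualityProfile(fnRs[1:2])
```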
Most Illumina sequencing data show a trend of decreasing average quality towards the end of sequencing reads. Figure 1 demonstrates that the forward reads maintain high quality throughout, while the quality of the reverse reads drops significantly at about position 160. Therefore, we choose to truncate the forward reads at position 245, and the reverse reads at position 160. We also choose to trim the first 10 nucleotides of each read based on empirical observations across many Illumina datasets that these base positions are particularly likely to contain pathological errors.
Figure 1.
Forward and reverse quality profiles.
We combine these trimming parameters with standard filtering parameters, the most important being the enforcement of a maximum of two expected errors per read 7. Trimming and filtering is performed on paired reads jointly – both reads must pass the filter for the pair to pass.
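One way to carry out this joint trimming and filtering is dada2's filterAndTrim function. The sketch below uses the truncation lengths, left-trim and maximum expected errors discussed above, together with the file vectors from the previous block.

```r
filtFs <- file.path(miseq_path, "filtered", basename(fnFs))
filtRs <- file.path(miseq_path, "filtered", basename(fnRs))

out <- filterAndTrim(fnFs, filtFs, fnRs, filtRs,
                     trimLeft = 10, truncLen = c(245, 160),
                     maxEE = 2, truncQ = 2, rm.phix = TRUE,
                     compress = TRUE, multithread = TRUE)
head(out)   # reads in / reads retained for each sample
```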
Infer sequence variants
After filtering, typical amplicon bioinformatics workflows cluster sequencing reads into OTUs: groups of sequencing reads that differ by less than a fixed dissimilarity threshold. Here we instead use the high-resolution DADA2 method to infer sequence variants without any fixed threshold, thereby resolving variants that differ by as little as one nucleotide 3.
The sequence data are imported into R from demultiplexed fastq files (i.e. one fastq for each sample) and simultaneously dereplicated to remove redundancy. We name the resulting ‘derep-class’ objects by their sample.
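A sketch of the dereplication step with dada2's derepFastq; naming the resulting objects from the file names is an assumption about this dataset's naming convention.

```r
derepFs <- derepFastq(filtFs, verbose = TRUE)
derepRs <- derepFastq(filtRs, verbose = TRUE)

# Name the derep-class objects by their sample (naming convention assumed)
sam_names <- sapply(strsplit(basename(filtFs), "_"), `[`, 1)
names(derepFs) <- sam_names
names(derepRs) <- sam_names
```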
Figure 2.
Forward and reverse error profile estimates, showing the frequencies of each type of nucleotide transition as a function of quality.
The DADA2 method relies on a parameterized model of substitution errors in order to distinguish sequencing errors from real biological variation. Because error rates can – and often do – vary substantially between sequencing runs and PCR protocols, the model parameters can be discovered from the data itself using a form of unsupervised learning in which sample inference is alternated with parameter estimation until both are jointly consistent.
Parameter learning is computationally intensive, as it requires multiple iterations of the sequence inference algorithm, and therefore it is often useful to estimate the error rates from a (sufficiently large) subset of the data.
In order to verify that the error rates have been reasonably well-estimated, we inspect the fit between the observed error rates (black points) and the fitted error rates (black lines).
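One way to perform this unsupervised estimation with dada2 is the learnErrors function, which alternates error-rate estimation with sample inference until convergence and, by default, uses only as many reads as it needs rather than the full dataset. plotErrors then overlays the observed transition frequencies (points) on the fitted rates (lines), as in Figure 2.

```r
errF <- learnErrors(filtFs, multithread = TRUE)
errR <- learnErrors(filtRs, multithread = TRUE)

# Compare observed error frequencies with the fitted error model
plotErrors(errF, nominalQ = TRUE)
plotErrors(errR, nominalQ = TRUE)
```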
The DADA2 sequence inference method can run in two different modes: independent inference by sample (pool = FALSE), or pooled inference from the reads of all samples combined (pool = TRUE). Independent inference scales to very large datasets because computation time is linear in the number of samples, while pooled inference improves the detection of rare variants shared across samples at an additional computational cost.
Sequence inference removed nearly all substitution and indel errors from the data. We now merge together the inferred forward and reverse sequences, while removing paired sequences that do not perfectly overlap as a final control against residual errors.
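A sketch of the inference and merging steps, using independent per-sample inference (pool = FALSE) and dada2's mergePairs, which discards read pairs whose forward and reverse sequences do not overlap exactly.

```r
dadaFs <- dada(derepFs, err = errF, pool = FALSE, multithread = TRUE)
dadaRs <- dada(derepRs, err = errR, pool = FALSE, multithread = TRUE)

# Merge denoised forward and reverse reads, rejecting pairs that do not overlap exactly
mergers <- mergePairs(dadaFs, derepFs, dadaRs, derepRs)
```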
Construct the sequence table and remove chimeras
The DADA2 method produces a sequence table that is a higher-resolution analogue to the common OTU table. This is a sample by sequence feature table whose entries are the number of times each sequence was observed in each sample.
Notably, chimeras have not yet been removed. The error model in the sequence inference algorithm does not include a chimera component, and therefore we expect this sequence table to include many chimeric sequences. We now remove chimeric sequences by comparing each inferred sequence to the other sequences in the table, and removing those that can be reproduced by stitching together two more abundant sequences.
Typically a substantial fraction of inferred sequence variants, but only a small fraction of all reads, are found to be chimeric. That is what is observed here: 1502 of 1892 sequence variants were chimeric, but these only represented 10% of all reads.
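The table construction and de novo chimera removal can be sketched with dada2's makeSequenceTable and removeBimeraDenovo.

```r
seqtab_all <- makeSequenceTable(mergers)
seqtab <- removeBimeraDenovo(seqtab_all)

# Fraction of reads retained after chimera removal (typically close to 1)
sum(seqtab) / sum(seqtab_all)
```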
Assign taxonomy
One of the benefits of using well-classified marker loci like the 16S rRNA gene is the ability to taxonomically classify the sequence variants. The dada2 package implements the naive Bayesian classifier method for this purpose 8. This classifier compares sequence variants to a training set of classified sequences; here we use the RDP v14 training set 9. Appropriately formatted training sets for this and other reference databases are available for download from the dada2 website.
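A sketch of the classification call with dada2's assignTaxonomy; the file name below is a placeholder for a locally downloaded, dada2-formatted training set.

```r
ref_fasta <- "rdp_train_set_14.fa.gz"   # placeholder path to the formatted RDP v14 training set
taxtab <- assignTaxonomy(seqtab, refFasta = ref_fasta, multithread = TRUE)
head(unname(taxtab))
```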
Construct the phylogenetic tree
Phylogenetic relatedness is commonly used to inform downstream analyses, especially the calculation of phylogeny-aware distances between microbial communities. The DADA2 sequence inference method is reference-free, so we must construct the phylogenetic tree relating the inferred sequence variants de novo. We begin by performing a multiple-alignment of the inferred sequences.
The phangorn package is then used to construct a phylogenetic tree. Here we first construct a neighbor-joining tree, and then fit a GTR+G+I maximum likelihood tree using the neighbor-joining tree as a starting point.
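A sketch of this step, aligning the variants with the DECIPHER package and then fitting the tree in phangorn; the optimization settings shown are reasonable choices rather than the only possible ones.

```r
library(DECIPHER)
library(phangorn)

seqs <- getSequences(seqtab)
names(seqs) <- seqs                                   # use the sequences themselves as labels
alignment <- AlignSeqs(DNAStringSet(seqs), anchor = NA)

phang_align <- phyDat(as(alignment, "matrix"), type = "DNA")
dm <- dist.ml(phang_align)
treeNJ <- NJ(dm)                                      # neighbor-joining starting tree
fit <- pml(treeNJ, data = phang_align)
fitGTR <- update(fit, k = 4, inv = 0.2)               # add gamma rate categories and invariant sites
fitGTR <- optim.pml(fitGTR, model = "GTR", optInv = TRUE, optGamma = TRUE,
                    rearrangement = "stochastic", control = pml.control(trace = 0))
```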
This completes the amplicon bioinformatics portion of the workflow.
Combine data into a phyloseq object
The phyloseq package organizes and synthesizes the different data types from a typical amplicon sequencing experiment into a single data object that can be easily manipulated. The last bit of information needed is the per-sample metadata, provided here as a .csv file.
The full suite of data for this study – the sample-by-sequence feature table, the sample metadata, sequence taxonomies, and the phylogenetic tree – can now be combined into a single object.
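A sketch of the assembly of the phyloseq object; samdf stands for the per-sample metadata table, read here from a hypothetical .csv file whose row names match the row names of the sequence table.

```r
library(phyloseq)

samdf <- read.csv("sample_metadata.csv", row.names = 1)   # hypothetical metadata file

ps <- phyloseq(otu_table(seqtab, taxa_are_rows = FALSE),
               sample_data(samdf),
               tax_table(taxtab),
               phy_tree(fitGTR$tree))
ps
```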
Manipulating the data with phyloseq
phyloseq 10 is an R package that allows users to import, store, analyze, and graphically display complex phylogenetic sequencing data that has already been clustered into Operational Taxonomic Units (OTUs) or appropriately denoised and collected into Ribosomal Sequence Variants (RSVs). The package is most useful when there is also associated multitype sample data, a phylogeny, and/or taxonomic assignment of each taxon. phyloseq leverages and builds upon many of the tools available in R for ecology and phylogenetic analysis (vegan 11, ade4 12, ape 13), while also using advanced and flexible layered graphics systems (ggplot2 14) to easily produce publication-quality graphics of complex phylogenetic data. The phyloseq package uses a specialized system of S4 data classes to store all related phylogenetic sequencing data as a single, self-consistent, self-describing experiment-level object, making it easier to share data and reproduce analyses. In general, phyloseq seeks to facilitate the use of R for efficient interactive and reproducible analysis of amplicon count data jointly with important sample covariates.
Further documentation and use cases
This article shows a useful workflow, but many more analyses are available in phyloseq, and R in general, than can fit in a single example. The phyloseq home page is a good place to begin browsing additional phyloseq documentation, as are the three vignettes included within the package, and linked directly at the phyloseq release page on Bioconductor.
Loading the data
Many use cases require importing and combining different data into a phyloseq class object; this can be done using phyloseq's importer functions, for example import_biom for biom-format files or import_qiime for QIIME output.
In the previous section the results of dada2 sequence processing were organized into a phyloseq object. This object was also saved in R-native serialized RDS format. We will reload this here for completeness.
Shiny-phyloseq
It can be beneficial to start the data exploration process interactively; this often saves time in detecting outliers and specific features of the data. Shiny-phyloseq 15 is an interactive web application that provides a graphical user interface to the phyloseq package. The object just loaded into the R session in this workflow is suitable for this graphical interaction with Shiny-phyloseq.
Prevalence filtering
phyloseq provides useful tools for filtering, subsetting, and agglomerating taxa – a task that is often appropriate or even necessary for effective analysis of microbiome count data. In this subsection, we graphically explore the prevalence of taxa in the example dataset, and demonstrate how this can be used as a filtering criterion. One of the reasons to filter in this way is to avoid spending much time analyzing taxa that were only rarely seen. This also turns out to be a useful filter of noise (taxa that are actually just artifacts of the data collection process), a step that should probably be considered essential for datasets constructed via heuristic OTU-clustering methods, which are notoriously prone to generating spurious taxa.
Figure 3.
Taxa prevalence v. total counts.
Each point is a different taxon. Exploration of the data in this way is often useful for selecting filtering parameters, such as the minimum prevalence criterion we use to filter the data.
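A sketch of how prevalence can be computed and then used as a filter; the 5% prevalence cutoff below is illustrative rather than prescriptive.

```r
# Prevalence: the number of samples in which each taxon is observed at least once
prevalence <- apply(otu_table(ps), ifelse(taxa_are_rows(ps), 1, 2),
                    function(x) sum(x > 0))
prev_frac <- prevalence / nsamples(ps)

# Keep taxa seen in at least 5% of samples (illustrative threshold)
ps_filtered <- prune_taxa(names(prev_frac)[prev_frac >= 0.05], ps)
ps_filtered
```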
Agglomerate closely related taxa
For some experimental questions, it is useful to agglomerate closely-related taxa. In this subsection we explore two separate ways in which closely-related taxa can be grouped together as a single feature in the phyloseq data object. Note that this is only helpful if a biological phenomenon of interest actually occurs at the chosen level of agglomeration, in which case the grouping-together of these features can increase statistical power.
We use phyloseq's tax_glom function for taxonomic agglomeration, which combines all features that share the same taxonomy at a chosen rank (here, Genus), and its tip_glom function for phylogenetic agglomeration, which combines tips of the tree separated by less than a fixed cophenetic distance.
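A sketch of both forms of agglomeration, matching the trees shown in Figure 4.

```r
ps_genus <- tax_glom(ps_filtered, taxrank = "Genus", NArm = TRUE)   # taxonomic agglomeration
ps_tip   <- tip_glom(ps_filtered, h = 0.4)                          # phylogenetic agglomeration at height 0.4
```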
Figure 4.
The original tree (left), taxonomic agglomeration at Genus rank (middle), phylogenetic agglomeration at a fixed distance of 0.4 (right).
Abundance value transformation
It is usually necessary to transform microbiome count data to account for differences in library size, variance, scale, etc. The phyloseq package provides a flexible interface for defining new functions to accomplish these transformations of the abundance values via its transform_sample_counts function.
This example begins by defining a custom plot function, built on phyloseq's psmelt function, that arranges the abundance data for plotting with ggplot2.
The transformation in this case converts the counts from each sample into their frequencies, often referred to as proportions or relative abundances. This function is so simple that it is easiest to define it within the function call to transform_sample_counts.
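A sketch of the relative-abundance transformation:

```r
# Convert counts to within-sample proportions (relative abundances)
ps_relabund <- transform_sample_counts(ps_filtered, function(x) x / sum(x))
```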
Now we plot the abundance values before and after transformation. The results are in Figure 5.
Figure 5.
Comparison of original abundances (top panel) and relative abundances (lower panel); both are shown on a log scale.
Subset by taxonomy
Notice on the previous plot that Lactobacillales appears to be a taxonomic Order with a bimodal abundance profile in the data. We can check for a taxonomic explanation of this pattern by plotting just that taxonomic subset of the data. For this, we subset with phyloseq's subset_taxa function.
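For example (the rank name Order is assumed to be present in the taxonomy table):

```r
# Restrict the relative-abundance object to the Lactobacillales Order (cf. Figure 6)
ps_lactobacillales <- subset_taxa(ps_relabund, Order == "Lactobacillales")
```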
Figure 6.
Violin plot of the relative abundances of Lactobacillales taxonomic Order, grouped by host sex and genera.
Here it is clear that the apparent bimodal distribution of Lactobacillales on the previous plot was the result of a mixture of two different genera, with the typical Lactobacillus relative abundance much larger than that of Streptococcus.
At this stage in the workflow, after converting raw reads to interpretable species abundances, and after filtering and transforming these abundances to focus attention on scientifically meaningful quantities, we are in a position to consider more careful statistical analysis. R is an ideal environment for performing these analyses, as it has an active community of package developers building simple interfaces to sophisticated techniques. As a variety of methods are available, there is no need to commit to any rigid analysis strategy a priori. Further, the ability to easily call packages without reimplementing methods frees researchers to iterate rapidly through alternative analysis ideas. The advantage of performing this full workflow in R is that this transition from bioinformatics to statistics is effortless.
We back these claims by illustrating several analysis on the mouse data prepared above. We experiment with several flavors of exploratory ordination before shifting to more formal testing and modeling, explaining the settings in which the different points of view are most appropriate. Finally, we provide example analysis of multitable data, using a study in which both metabolomic and microbial abundance measurements were collected on the same samples, to demonstrate that the general workflow presented here can be adapted to the multitable setting.
Preprocessing
Before doing the multivariate projections, we will add a few columns to our sample data, which can then be used to annotate plots. From Figure 7, we see that the ages of the mice come in a couple of groups, and so we make a categorical variable corresponding to young, middle-aged, and old mice. We also record the total number of counts seen in each sample and log-transform the data as an approximate variance stabilizing transformation.
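A sketch of these preprocessing steps; the column name age and the bin boundaries are assumptions about this dataset's sample metadata.

```r
# Bin age into the three groups suggested by Figure 7 (boundaries assumed)
sample_data(ps)$age_binned <- cut(sample_data(ps)$age,
                                  breaks = c(0, 100, 200, 400),
                                  labels = c("young", "middle", "old"),
                                  include.lowest = TRUE)
sample_data(ps)$total_reads <- sample_sums(ps)

# Approximate variance stabilization via log(1 + x)
ps_log <- transform_sample_counts(ps, function(x) log(1 + x))
```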
Figure 7.
Preliminary plots suggest certain preprocessing steps.
The histogram on the left motivates the creation of a new categorical variable, binning age into one of the three peaks. The histogram on the right suggests that a log(1 + x) transformation is sufficient for normalizing the abundance data for the exploratory analyses.
For a first pass, we look at principal coordinates analysis (PCoA) with either the Bray-Curtis dissimilarity or the weighted Unifrac distance. We see immediately that there are six outliers. These turn out to be the samples from females 5 and 6 on day 165 and the samples from males 3, 4, 5, and 6 on day 175. We will take them out, since we are mainly interested in the relationships between the non-outlier points.
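A sketch of these exploratory ordinations with phyloseq's ordinate and plot_ordination functions:

```r
library(ggplot2)

out_bray <- ordinate(ps_log, method = "MDS", distance = "bray")
plot_ordination(ps_log, out_bray, color = "age_binned") +
  labs(color = "Binned age")

out_wuf <- ordinate(ps_log, method = "MDS", distance = "wunifrac")
plot_ordination(ps_log, out_wuf, color = "age_binned") +
  labs(color = "Binned age")
```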
Figure 8.
An ordination on the logged abundance data reveals a few outliers.
Notice that the variability explained by the second axis is roughly one fifth of that explained by the horizontal axis.
Before we continue, we should check the two female outliers – they have been taken over by the same RSV, which has a relative abundance of over 90% in each of them. This is the only time in the entire data set that this RSV has such a high relative abundance – the rest of the time it is below 20%. In particular, the diversity of these two samples is by far the lowest of all the samples.
Figure 9.
The outlier samples are dominated by a single RSV.
Different ordination projections
As we have seen, an important first step in analyzing microbiome data is to do unsupervised, exploratory analysis. This is simple to do in phyloseq, which provides many distances and ordination methods.
After documenting the outliers, we are going to compute ordinations with these outliers removed and more carefully study the output. We see that there is a fairly substantial age effect that is consistent between all the mice, male and female, and from different litters. We’ll first perform a PCoA using Bray-Curtis dissimilarity.
The first plot shows the ordination of the samples, and we see that the second axis corresponds to an age effect, with the samples from the younger and older mice separating fairly well. The first axis correlates fairly well with library size (this is not shown). The first axis explains roughly twice the variability of the second; this translates into the elongated shape of the ordination plot.
Figure 10.
A PCoA plot using Bray-Curtis distance between samples.
Next we look at double principal coordinates analysis (DPCoA) 16, 17, which is a phylogenetic ordination method and which gives a biplot representation of both samples and taxonomic categories. We see again that the second axis corresponds to young vs. old mice, and the biplot suggests an interpretation of the second axis: samples that have larger scores on the second axis have more taxa from Bacteroidetes and one subset of Firmicutes.
Figure 11.
A DPCoA plot incorporates phylogenetic information, but is dominated by the first axis.
Figure 12.
The DPCoA sample positions can be interpreted with respect to the species coordinates in this display.
Finally, we can look at the results of PCoA with weighted Unifrac. As before, we find that the second axis is associated with an age effect, which is fairly similar to DPCoA. This is not surprising, because both are phylogenetic ordination methods taking abundance into account. However, when we compare biplots, we see that the DPCoA gave a much cleaner interpretation of the second axis, compared to weighted Unifrac.
PCA on ranks
Microbial abundance data are often heavy-tailed, and sometimes it can be hard to identify a transformation that brings the data to normality. In these cases, it can be safer to ignore the raw abundances altogether, and work instead with ranks. We demonstrate this idea using a rank-transformed version of the data to perform PCA. First, we create a new matrix, representing the abundances by their ranks, where the microbe with the smallest abundance in a sample gets mapped to rank 1, the second smallest to rank 2, etc.
Naively using these ranks would make differences between pairs of low and high abundance microbes comparable. In the case where many bacteria are absent or present at trace amounts, an artificially large difference in rank could occur 18 for minimally abundant taxa. To avoid this, all those microbes with rank below some threshold are set to be tied at 1. The ranks for the other microbes are shifted down, so there is no large gap between ranks. This transformation is illustrated in Figure 15.
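A sketch of the truncated-rank transformation; the cutoff of 329 retained ranks is an assumption used here for illustration.

```r
abund <- as(otu_table(ps), "matrix")
if (taxa_are_rows(ps)) abund <- t(abund)      # samples in rows, taxa in columns

abund_ranks <- t(apply(abund, 1, rank))       # rank abundances within each sample
abund_ranks <- abund_ranks - 329              # keep only the top ranks distinct (cutoff assumed)
abund_ranks[abund_ranks < 1] <- 1             # tie all remaining taxa at rank 1
```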
Figure 13.
The sample positions produced by a PCoA using weighted Unifrac.
Figure 14.
Species coordinates that can be used to interpret the sample positions from PCoA with weighted Unifrac.
Compared to the representation in Figure 12, this display is harder to interpret.
Figure 15.
The association between abundance and rank, for a few randomly selected samples.
The numbers on the y-axis are those supplied to the PCA.
We can now perform PCA and study the resulting biplot, given in Figure 16. To produce annotation for this figure, we used the following block.
The results are similar to the PCoA analyses computed without applying a truncated-ranking transformation, reinforcing our confidence in the analysis on the original data.
Figure 16.
The biplot resulting from the PCA after the truncated-ranking transformation.
Canonical correspondence
Canonical Correspondence Analysis (CCpnA) is an approach to ordination of a species by sample table that incorporates supplemental information about the samples. As before, the purpose of creating biplots is to determine which types of bacterial communities are most prominent in different mouse sample types. It can be easier to interpret these biplots when the ordering between samples reflects sample characteristics – variations in age or litter status in the mouse data, for example – and this is central to the design of CCpnA.
CCpnA creates biplots where the positions of samples are determined by similarity in both species signatures and environmental characteristics; in contrast, principal components analysis or correspondence analysis look only at species signatures. More formally, it ensures that the resulting CCpnA directions lie in the span of the environmental variables; thorough treatments are available in 19, 20.
Like PCoA and DPCoA, this method can be run through phyloseq's ordinate function. To access the positions for the biplot, we can use the scores function from the vegan package; to facilitate annotation, we also join these scores with the relevant sample covariates.
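A sketch of the CCpnA step; the covariate names (age_binned, family_relationship) are assumptions about the sample data.

```r
library(vegan)

# Constrained ordination on the age and litter covariates (variable names assumed)
ps_ccpna <- ordinate(ps_log, method = "CCA",
                     formula = ps_log ~ age_binned + family_relationship)

# Extract site and species scores for the biplot, and attach sample covariates
ccpna_scores <- scores(ps_ccpna, display = c("sites", "species"))
sites <- cbind(data.frame(ccpna_scores$sites), data.frame(sample_data(ps_log)))
species <- data.frame(ccpna_scores$species)
```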
Figure 17 and Figure 18 plot these annotated scores, splitting sites by their age bin and litter membership, respectively. We have labeled individual microbes that are outliers along the second CCpnA direction.
Evidently, the first CCpnA direction distinguishes between mice in the two main age bins. Circles on the left and right of the biplot represent microbes that are characteristic of younger and older mice, respectively. The second CCpnA direction splits off the few mice in the oldest age group; it also partially distinguishes between the two litters. The samples that score low on the second CCpnA direction contain more of the outlier microbes than the others.
This CCpnA analysis supports our conclusions from the earlier ordinations – the main difference between the microbiome communities of the different mice lies along the age axis. However, in situations where the influence of environmental variables is not so strong, CCpnA can have more power in detecting such associations. In general, it can be applied whenever it is desirable to incorporate supplemental data, but in a way that (1) is less aggressive than supervised methods, and (2) can use several environmental variables at once.
Figure 17.
The mouse and microbe scores generated by CCpnA.
The sites and species are triangles and circles, respectively. The separate panels indicate different age groups.
Figure 18.
The analogue to Figure 17, faceting by litter membership rather than age bin.
Supervised learning
Here we illustrate some supervised learning methods that can be easily run in R. The caret package wraps many prediction algorithms available in R and performs parameter tuning automatically. Since we saw that microbiome signatures change with age, we’ll apply supervised techniques to try to predict age from microbiome composition.
We’ll first look at Partial Least Squares (PLS) 21. The first step is to divide the data into training and test sets, with assignments done by mouse, rather than by sample, to ensure that the test set realistically simulates the collection of new data. Once we split the data, we can use the caret package's train function to fit the PLS model, and then predict class labels on the test set with its predict method and compare the predictions to the truth.
As another example, we can try out random forests. This is run in exactly the same way as PLS, by switching the method argument from pls to rf.
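A sketch of both models with caret; the covariate names (age_binned, host_subject_id) and the size of the training split are assumptions about this dataset.

```r
library(caret)

dataMatrix <- data.frame(age = sample_data(ps_log)$age_binned,
                         as(otu_table(ps_log), "matrix"))

# Hold out whole mice, not individual samples, for the test set
set.seed(100)
train_mice <- sample(unique(sample_data(ps_log)$host_subject_id), size = 8)
inTrain <- sample_data(ps_log)$host_subject_id %in% train_mice
training <- dataMatrix[inTrain, ]
testing  <- dataMatrix[!inTrain, ]

# Partial least squares
plsFit <- train(age ~ ., data = training, method = "pls", preProc = "center")
table(predict(plsFit, newdata = testing), testing$age)

# Random forest: same call, different method
rfFit <- train(age ~ ., data = training, method = "rf", preProc = "center")
table(predict(rfFit, newdata = testing), testing$age)
```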
To interpret these PLS and random forest results, it is standard to produce biplots and proximity plots, respectively. The code below extracts coordinates and supplies annotation for points to include on the PLS biplot.
The resulting biplot is displayed in Figure 19; it can be interpreted similarly to earlier ordination diagrams, with the exception that the projection is chosen with an explicit reference to the binned age variable. Specifically, PLS identifies a subspace to maximize discrimination between classes, and the biplot displays sample projections and RSV coefficients with respect to this subspace.
Figure 19.
PLS produces a biplot representation designed to separate samples by a response variable.
A random forest proximity plot is displayed in Figure 20. To generate this representation, a distance is calculated between samples based on how frequently samples occur in the same tree partition in the random forest's bootstrapping procedure. If a pair of samples frequently occurs in the same partition, the pair is assigned a low distance. The resulting distances are then input to PCoA, giving a glimpse into the random forests' otherwise complex classification mechanism. The separation between classes is clear, and manually inspecting points would reveal what types of samples are easier or harder to classify.
Figure 20.
The random forest model determines a distance between samples, which can be input into PCoA to produce a proximity plot.
To further understand the fitted random forest model, we identify the microbe with the most influence in the random forest prediction. This turns out to be a microbe in family Lachnospiraceae and genus Roseburia. Figure 21 plots its abundance across samples; we see that it is uniformly very low from age 0 to 100 days and much higher from age 100 to 400 days.
Figure 21.
A microbe in genus Roseburia becomes much more abundant in the 100 to 400 day bin.
Graph-based visualization and testing: Creating and plotting graphs
Phyloseq has functionality for creating graphs based on thresholding a distance matrix, and the resulting networks can be plotted using the ggnetwork package. This package overloads the ggplot syntax, so you can call ggplot on an igraph object and add geom_edges and geom_nodes layers to plot the network.
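A sketch of the network construction and plotting; the 0.35 distance threshold and the covariate names are assumptions.

```r
library(igraph)
library(ggnetwork)
library(ggplot2)

net <- make_network(ps, type = "samples", distance = "jaccard", max.dist = 0.35)

# Attach sample covariates as vertex attributes so they can be mapped to aesthetics
sampledata <- data.frame(sample_data(ps))
V(net)$mouse  <- as.character(sampledata[V(net)$name, "host_subject_id"])
V(net)$litter <- as.character(sampledata[V(net)$name, "family_relationship"])

ggplot(ggnetwork(net), aes(x = x, y = y, xend = xend, yend = yend)) +
  geom_edges(color = "darkgray") +
  geom_nodes(aes(color = mouse, shape = litter)) +
  theme_blank()
```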
Figure 22.
A network created by thresholding the Jaccard dissimilarity matrix.
The colors represent which mouse the sample came from and the shape represents which litter the mouse was in.
Graph-based two-sample tests
Graph-based two-sample tests were introduced by Friedman and Rafsky 22 as a generalization of the Wald-Wolfowitz runs test. They proposed the use of a minimum spanning tree (MST) based on the distances between the samples, and then counting the number of edges on the tree that were between samples in different groups. It is not necessary to use an MST; graphs made by linking nearest neighbors 23 or by distance thresholding can also be used as input. No matter which graph we build between the samples, we can approximate a null distribution by permuting the labels of the nodes of the graph.
MST and Jaccard
We first perform a test using an MST with Jaccard dissimilarity. We want to know whether the two litters come from the same distribution.
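A minimal, manual sketch of this test (the article's own helper functions may differ): build the MST on the Jaccard dissimilarities, count the edges joining samples from the same litter, and compare against label permutations. The covariate name family_relationship is assumed.

```r
library(igraph)
library(vegan)

d <- vegdist(as(otu_table(ps), "matrix"), method = "jaccard")   # samples assumed in rows
gr <- graph_from_adjacency_matrix(as.matrix(d), weighted = TRUE, mode = "undirected")
mst_gr <- mst(gr)

litter <- setNames(as.character(sample_data(ps)$family_relationship), sample_names(ps))

# Count edges whose two endpoints carry the same label
pure_edges <- function(g, labels) {
  e <- ends(g, E(g))
  sum(labels[e[, 1]] == labels[e[, 2]])
}

observed <- pure_edges(mst_gr, litter)
null <- replicate(999, pure_edges(mst_gr, setNames(sample(litter), names(litter))))
p_value <- (1 + sum(null >= observed)) / (1 + length(null))
p_value
```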
This test has a small p-value, and we reject the null hypothesis that the two samples come from the same distribution. From the plot of the minimum spanning tree in Figure 23, we see by eye that the samples group by litter more than we would expect by chance.
Figure 23.
The graph and permutation histogram obtained from the minimal spanning tree with Jaccard similarity.
Nearest neighbors and Jaccard graph
The k-nearest neighbors graph is obtained by putting an edge between two samples whenever one of them is in the set of k-nearest neighbors of the other. We see from Figure 24 that if a pair of samples has an edge between them in the nearest neighbor graph, they are overwhelmingly likely to be in the same litter.
Figure 24.
The graph and permutation histogram obtained from a nearest-neighbor graph with Jaccard similarity.
Two-nearest neighbors and Bray-Curtis
We can compute the analogous test with two-nearest neighbors and the Bray-Curtis dissimilarity. The results are in Figure 25.
Figure 25.
The graph and permutation histogram obtained from a two nearest-neighbor graph with Bray-Curtis dissimilarity.
Distance threshold and Bray-Curtis
Another way of making a graph between samples is to threshold the distance matrix; this is called a geometric graph 24. The testing function lets the user supply an absolute distance threshold; alternatively, it can find a distance threshold such that there is a prespecified number of edges in the graph. Below we use a distance threshold so that there are 720 edges in the graph, or twice as many edges as there are samples. Heuristically, the graph we obtain isn't as good, because there are many singletons. This reduces power, so if the thresholded graph has this many singletons it is better to either modify the threshold or consider an MST or k-nearest neighbors graph.
Figure 26.
Testing using a Bray-Curtis distance thresholded graph.
Then we can try a similar procedure with an increased number of edges to see what happens.
Figure 27.
The analogue to Figure 26, but with a less stringent criterion for declaring edges.
Linear modeling
It is often of interest to evaluate the degree to which microbial community diversity reflects characteristics of the environment from which it was sampled. Unlike ordination, the purpose of this analysis is not to develop a representation of many microbes with respect to sample characteristics; rather, it is to describe how a single measure of overall community structure (not necessarily diversity – univariate measures of community stability are also common, for example) is associated with sample characteristics. This is a somewhat simpler statistical goal, and can be addressed through linear modeling, for which there are a range of approaches in R. As an example, we will use a mixed-effects model to study the relationship between mouse microbial community diversity and the age and litter variables that have been our focus so far. This choice was motivated by the observation that younger mice have noticeably lower Shannon diversities, but that different mice have different baseline diversities. The mixed-effects model is a starting point for formalizing this observation.
We first compute the Shannon diversity associated with each sample and join it with sample annotation.
We use the nlme package to estimate coefficients for this mixed-effects model.
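A sketch of both steps, using phyloseq's estimate_richness for the Shannon index and nlme's lme with a random intercept per mouse (covariate names assumed).

```r
library(nlme)

alpha_div <- estimate_richness(ps, measures = "Shannon")
div_df <- cbind(alpha_div, data.frame(sample_data(ps)))

# Random intercept for each mouse, fixed effect for the age bin
fit_lme <- lme(Shannon ~ age_binned, random = ~ 1 | host_subject_id, data = div_df)
summary(fit_lme)
```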
To interpret the results, we compute the prediction intervals for each mouse by age bin combination. These are displayed in Figure 28. The intervals reflect the slight shift in average diversity across ages, but the wide intervals emphasize that more samples would be needed before this observation can be confirmed.
Figure 28.
Each point represents the Shannon diversity at one timepoint for a mouse; each panel is a different mouse.
The timepoints have been split into three bins, according to the mice's age. The prediction intervals obtained from mixed-effects modeling are overlaid.
Hierarchical multiple testing
Hypothesis testing can be used to identify individual microbes whose abundance relates to sample variables of interest. A standard approach is to compute a test statistic for each microbe individually, measuring its association with sample characteristics, and then jointly adjust p-values to ensure a False Discovery Rate upper bound. This can be accomplished through the Benjamini-Hochberg procedure, for example 25. However, this procedure does not exploit any structure among the tested hypotheses – for example, it is likely that if one Ruminococcus species is strongly associated with age, then others are as well. To integrate this information, 26, 27 proposed a hierarchical testing procedure, in which taxonomic groups are only tested if higher levels are found to be associated. In the case where many related species have a slight signal, this pooling of information can increase power.
We apply this method to test the association between microbial abundance and age. This provides a complementary view of the earlier analyses, identifying individual microbes that are responsible for the differences between young and old mice.
We digress briefly from hierarchical testing to describe an alternative form of count normalization. Rather than working with the logged data as in our earlier analysis, we consider a variance stabilizing transformation introduced by 28 for RNA-seq data and adapted in 29 for 16S rRNA count data, available in the DESeq2 package. The two transformations yield similar sets of significant microbes. One difference is that, after accounting for size factors, the histogram of row sums for DESeq2 is more spread out in the lower values; refer to Figure 29. This is the motivation for using such a transformation: for high abundance counts it is equivalent to the log, but for low and mid-range abundances it does not compress the data and yields more powerful results. The code below illustrates the mechanics of computing DESeq2's variance stabilizing transformation on a phyloseq object.
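A sketch of that computation; the design formula is an assumption, and the "poscounts" size-factor estimator is used to cope with the many zeros typical of 16S count tables.

```r
library(DESeq2)

dds <- phyloseq_to_deseq2(ps, design = ~ age_binned)
dds <- estimateSizeFactors(dds, type = "poscounts")   # robust to the many zero counts
dds <- estimateDispersions(dds)
abund_vst <- getVarianceStabilizedData(dds)           # taxa x samples matrix of transformed values
```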
Figure 29.
The histogram on the top gives the total DESeq2 transformed abundance within each sample.
The bottom histogram is the same as that in Figure 7, and is included to facilitate comparison.
We use structSSI to perform the hierarchical testing 30. For more convenient printing, we first shorten the names of each microbe.
Unlike standard multiple hypothesis testing, the hierarchical testing procedure needs univariate tests for each higher-level taxonomic group, not just for every microbe. A helper function, treePValues, is available for this; it expects an edgelist encoding parent-child relationships, with the first row specifying the root node.
We can now correct the p-values using the hierarchical testing procedure. The test results are guaranteed to control several variants of FDR, but at different levels; we defer details to 26, 27, 30.
The plot opens in a new browser – a static screenshot of a subtree is displayed in Figure 30. Nodes are shaded according to p-values, from blue to orange, representing the strongest to weakest associations. Grey nodes were never tested, to focus power on more promising subtrees. Scanning the full tree, it becomes clear that the association between age group and microbe abundance is present in only a few isolated taxonomic groups, but that it is quite strong in those groups. To give context to these results, we can retrieve the taxonomic identity of the rejected hypotheses.
Figure 30.
A screenshot of a subtree with many differentially abundant microbes, as determined by the hierarchical testing procedure.
Currently the user is hovering over the node associated with microbe GCGAG.33; this causes the adjusted p-value (0.0295) to appear.
It seems that the most strongly associated microbes all belong to family Lachnospiraceae, which is consistent with the random forest results above.
Multitable techniques
Many microbiome studies attempt to quantify variation in the microbial, genomic, and metabolic measurements across different experimental conditions. As a result, it is common to perform multiple assays on the same biological samples and ask what features – microbes, genes, or metabolites, for example – are associated with different sample conditions. There are many ways to approach these questions; which to apply depends on the focus of the study.
Here, we will focus on one specific workflow that uses sparse Canonical Correlation Analysis (sparse CCA), a method well-suited to both exploratory comparisons between samples and the identification of features with interesting variation. We will use an implementation from package PMA 31.
Since the mouse data used above included only a single table, we use a new data set, collected by the study 32. There are two tables here, one for microbes and another with metabolites. Twelve samples were obtained, each with measurements at 637 m/z values and 20,609 OTUs; however, about 96% of the entries of the microbial abundance table are exactly zero. The code below retrieves this data.
Our preprocessing mirrors that done for the mouse data. We first filter down to microbes and metabolites of interest, removing those that are zero across many samples. Then, we transform them to weaken the heavy tails.
We can now apply sparse CCA. This method compares sets of features across high-dimensional data tables, where there may be more measured features than samples. In the process, it chooses a subset of available features that capture the most covariance – these are the features that reflect signals present across multiple tables. We then apply PCA to this selected subset of features. In this sense, we use sparse CCA as a screening procedure, rather than as an ordination method.
Our implementation is below. The parameters penaltyx and penaltyz are sparsity penalties: smaller values of penaltyx result in fewer selected microbes, and penaltyz similarly modulates the number of selected metabolites.
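A sketch of the sparse CCA call with the PMA package; the matrices below are small random stand-ins for the preprocessed microbe (X) and metabolite (Z) tables described above, with samples in rows, and the penalty values are illustrative.

```r
library(PMA)

set.seed(1)
X <- matrix(rpois(12 * 200, lambda = 2), nrow = 12)   # stand-in for the microbe table
Z <- matrix(rnorm(12 * 100), nrow = 12)               # stand-in for the metabolite table

cca_res <- CCA(X, Z, penaltyx = 0.15, penaltyz = 0.15,
               typex = "standard", typez = "standard")
which(cca_res$u != 0)   # indices of the selected microbes
which(cca_res$v != 0)   # indices of the selected metabolites
```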
With these parameters, five microbes and 15 metabolites have been selected, based on their ability to explain covariation between tables. Further, these 20 features result in a correlation of 0.974 between the two tables. We interpret this to mean that the microbial and metabolomic data reflect similar underlying signals, and that these signals can be approximated well by the 20 selected features. Be wary of the correlation value, however, since the scores are far from the usual bivariate normal cloud. Further, note that it is possible that other subsets of features could explain the data just as well – sparse CCA has minimized redundancy across features, but makes no guarantee that these are the "true" features in any sense.
Nonetheless, we can still use these 20 features to compress information from the two tables without much loss. To relate the recovered metabolites and OTUs to characteristics of the samples on which they were measured, we use them as input to an ordinary PCA.
Figure 31 displays a PCA triplot, where we show the different types of samples and the multidomain features (metabolites and OTUs). This allows comparison across the measured samples – triangles for Knockout and circles for wild type – and characterizes the influence of the different features – diamonds with text labels. For example, we see that the main variation in the data is across PD and ST samples, which correspond to the different diets. Further, large values of 15 of the features are associated with ST status, while small values for 5 of them indicate PD status. The advantage of the sparse CCA screening is now clear – we can display most of the variation across samples using a relatively simple plot, and can avoid plotting the hundreds of additional points that would be needed to display all of the features.
Figure 31.
A PCA triplot produced from the CCA-selected features from multiple data types (metabolites and OTUs).
Operation
The programs and source for this article can be run using version 3.3 of R and version 3.3 of Bioconductor.
Conclusions
We have shown how a complete workflow in R is now available to denoise, identify and normalize next generation amplicon sequencing reads using probabilistic models with parameters fit using the data at hand.
We have provided a brief overview of all the analyses that become possible once the data has been imported into the R environment. Multivariate projections using the phylogenetic tree as the relevant distance between OTUs/RSVs can be done using weighted Unifrac or double principal coordinate analyses using the phyloseq package. Biplots provide the user with an interpretation key. These biplots have been extended to triplots in the case of multidomain data incorporating genetic, metabolic and taxa abundances. We illustrate the use of network based analyses, whether the community graph is provided from other sources or from a taxa co-occurrence computation using a Jaccard distance.
We have briefly covered a small example of using two supervised learning functions (random forests and partial least squares) to predict a response variable.
The main challenges in tackling microbiome data come from the many different levels of heterogeneity both at the input and output levels. These are easily accommodated through R's capacity to combine data into S4 classes. We are able to include layers of information, trees, sample data description matrices and contingency tables in the phyloseq data structures. The plotting facilities of ggplot2 and ggnetwork allow for the layering of information in the output into plots that combine graphs, multivariate information and maps of the relationships between covariates and taxa abundances. The layering concept allows the user to provide reproducible publication level figures with multiple heterogeneous sources of information. Our main goal in providing these tools has been to enhance the statistical power of the analyses by enabling the user to combine frequencies, quality scores and covariate information into complete and testable projections.
Summary
This illustration of possible workflows for microbiome data combining trees, networks, normalized read counts and sample information showcases the capabilities and reproducibility of an R based system for analyzing bacterial communities. We have implemented the key components as R/Bioconductor packages, so that the entire analysis, from raw reads to publication-quality figures, can be carried out within a single environment.
Once the sequences have been filtered and tagged they can be assembled into a phylogenetic tree directly in R using the maximum likelihood tree estimation available in phangorn. The sequences are then assembled into a phyloseq object containing all the sample covariates, the phylogenetic tree and the sample-taxa contingency table.
These data can then be visualized interactively with Shiny-phyloseq, plotted with one line wrappers in phyloseq and filtered or transformed very easily.
The third component of the paper shows more complex analyses that require direct use of ggplot2 and advanced statistical analyses. This will be of interest to power users with a good working knowledge of R, ggplot2 and statistical learning techniques. We use ggnetwork to plot community networks and perform a permutation test on a categorical response. We show that partial least squares and random forests give very similar quality predictions on this data and show how to plot the resulting proximities. Multivariate ordination methods allow useful lower dimensional projections in the presence of phylogenetic information or multi-domain data as shown in an example combining metabolites and OTU abundances.
Supervised learning methods provide lists of the most relevant taxa in discriminating between groups. To improve the power of the testing techniques designed to identify taxa that are the most changed between two groups of subjects, we provide an optimized variance stabilizing transformation and multiple hypothesis correction using the DESeq2 package. We have also incorporated a more original way of controlling for multiple hypothesis testing at the different levels of the phylogenetic tree through the use of structSSI, a package that implements FDR control for hierarchical structures 26, 27, 30. This package is interactive so we have supplied a snapshot of the output tree.
The last example in the paper shows how to combine data from multiple domains 32: metabolites, taxa counts, genetic data and diet. We illustrate the combination of sparse canonical correlation analysis with PCA to provide a useful triplot projection of the data.
Data availability
Intermediary data for the analyses are made available at the Stanford digital repository permanent url for this paper: http://purl.stanford.edu/wh250nn9648. All other data have been previously published and the links are included in the paper.
Software availability
Bioconductor packages at https://www.bioconductor.org/. CRAN packages at https://cran.r-project.org/.
Permanent repository for the data and program source of this paper: https://purl.stanford.edu/wh250nn9648
Latest source code as at the time of publication: https://github.com/spholmes/F1000_workflow
Archived source as at the time of publication: Zenodo: F1000_workflow: MicrobiomeWorkflowv0.9, doi: 10.5281/zenodo.54544 33
Abstract
High-throughput sequencing of PCR-amplified taxonomic markers (like the 16S rRNA gene) has enabled a new level of analysis of complex bacterial communities known as microbiomes. Many tools exist to quantify and compare abundance levels or microbial composition of communities in different conditions. The sequencing reads have to be denoised and assigned to the closest taxa from a reference database. Common approaches use a notion of 97% similarity and normalize the data by subsampling to equalize library sizes. In this paper, we show that statistical models allow more accurate abundance estimates. By providing a complete workflow in R, we enable the user to do sophisticated downstream statistical analyses, including both parametric and nonparametric methods. We provide examples of using the R packages dada2, phyloseq, DESeq2, ggplot2 and vegan to filter, visualize and test microbiome data. We also provide examples of supervised analyses using random forests, partial least squares and linear models as well as nonparametric testing using community networks and the ggnetwork package.