Review for "D3M: Detection of differential distributions of methylation patterns"

Completed on 25 Aug 2015 by Peter Hickey.

Comments to author

### General comments

Matsui and colleagues propose D^3M, a method for testing differential methylation at a cytosine in a two-group experiment, such as a case/control study.

The majority of existing methods for testing differential methylation are based on a test of a summary statistic of the methylation distributions, such as the mean, median, or variance. Of the currently available methods, the most similar to D^3M is the similarly named M^3D, which uses the maximum mean discrepancy to test whether the distribution of methylation levels across a _region_ is identical in the case and control groups. Like M^3D, D^3M is based on a statistical test of whether the distribution of methylation levels is different between the case and control groups. D^3M, however, focuses on methylation differences at individual cytosines rather than across a region.

The D^3M method is well described, and the claims of the method's performance are well supported by the presented results. I believe that D^3M is a valuable contribution to the analysis of differential methylation, particularly in studies where the difference between the cases and controls is in the higher-order moments of the methylation distributions.

I thank the authors for making the code and example data available. In order to promote the use of D^3M by the wider community, I strongly encourage the authors to make the method available as an R package or to contribute it to an existing R/Bioconductor package for analysing DNA methylation data.

I did find, however, several points in the paper where I would appreciate clarification. These are described below in the major and minor points for revision. Most of these are to improve the clarity of the paper.

One particular question I have is whether the authors are proposing the use of D^3M for the analysis of both methylation microarrays (e.g., Illumina 450k, as used in their TCGA data analysis) and sequencing-based assays (e.g., whole-genome bisulfite-sequencing and reduced representation bisulfite-sequencing). If the authors believe it is equally applicable to sequencing-based assays then I think it would be appropriate to include such an analysis in the paper (at least the supplementary material, if not in the main text) and to clarify this issue in the text.

### Specific comments

#### Major

- p1: "For example, limma, minfi, edgeR, DESeq, DiffVar detect the differential methylation sites by testing for significant differences in mean and variance". This is the initial source of my confusion as to whether D^3M is designed for microarray-based and/or sequencing-based assays of methylation. Specifically, limma was initially designed for gene expression microarray data and, more recently, can handle sequencing data (designed for RNA-seq) via the voom() method; minfi is designed for methylation microarrays; edgeR and DESeq are both designed for sequencing-based assays (especially RNA-seq) and are not appropriate for microarray-based assays; DiffVar has methods available for both microarray-based and sequencing-based data. Furthermore, to the best of my knowledge, neither edgeR nor DESeq has been used for bisulfite-sequencing assays (where one obtains a beta-value), although they may be used, suitably modified, to analyse enrichment-based sequencing assays of DNA methylation (such as methyl-binding ChIP-type assays). I think it is necessary to (1) clarify whether D^3M is designed for microarray-based and/or sequencing-based assays; (2) cite examples where these other methods have been used to analyse comparable data and, if no such examples exist, to otherwise clarify this in the Introduction; and (3) if D^3M is applicable to sequencing-based assays, explain how sequencing coverage affects the method (I ask this because the authors of M^3D note the need to explicitly account for this in their method).
- p2: I find the description of the permutation procedure used to derive the null distribution to be unclear. Are the vectors x = (x_{1}(s_{i}), ..., x_{n}(s_{i})) and y = (y_{1}(s_{i}), ..., y_{m}(s_{i})) jointly permuted (this is what it looks like to me from the supplied code in d3m.R)? Otherwise, if each of x and y is permuted within itself, I would expect the permuted distributions, \hat{F_{i}^{*}}(x) and \hat{G_{i}^{*}}(y), to be identical across permutations.
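To make the distinction concrete, here is a minimal sketch (in Python, with simulated beta-distributed data and a generic quantile-distance statistic standing in for the authors' actual distance; all names are hypothetical) of why only a joint permutation of the pooled samples yields a useful null:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=100)  # methylation levels, group 1 (hypothetical data)
y = rng.beta(5, 2, size=120)  # methylation levels, group 2 (hypothetical data)

def stat(a, b):
    """A generic distribution-distance statistic (sum of absolute differences
    between quantile vectors on a common grid); a stand-in for D3M's distance."""
    q = np.linspace(0, 1, 50)
    return np.abs(np.quantile(a, q) - np.quantile(b, q)).sum()

# Joint permutation: pool both groups, shuffle, and re-split at n.
pooled = np.concatenate([x, y])
null = []
for _ in range(200):
    perm = rng.permutation(pooled)
    null.append(stat(perm[:len(x)], perm[len(x):]))

# Within-group permutation leaves each empirical distribution unchanged,
# so the statistic is constant across permutations -- an uninformative null.
within = stat(rng.permutation(x), rng.permutation(y))
assert np.isclose(within, stat(x, y))
```

Under the joint scheme the 200 null statistics vary and the observed statistic can be compared against them; under the within-group scheme every "permuted" statistic equals the observed one, which is the concern raised above.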
- p3: When introducing the methods against which D^3M is compared (DiffVar, KS, Welch, WMM and MMD), I think it would be fairer to explicitly list which of the hypotheses (cases 2-8) each method is designed to detect. For example, DiffVar is definitely not designed to detect case 4 (difference in mean only), but is explicitly designed to detect case 3 (difference in variance only), as the simulation results bear out. While this point is somewhat addressed in the discussion, the simulation description notes that in each of cases 2-8 the hypothesis being tested is whether the two distributions are identical, and many of these methods are designed to test a more restricted hypothesis about distributional moments. It would also aid the interpretation of the results, since it makes it easier to check that the various methods are working 'when they should'.
- p3: How is MMD implemented in the simulation analysis, e.g., using the M3D Bioconductor package or via kernlab? If the latter, then I think this should be emphasised.
- p3: The original paper describing M^3D (Mayo et al. 2014) states that M^3D is designed for testing differential methylation at pre-defined _regions_ rather than individual cytosines. Does this put M^3D at an unfair disadvantage in the simulation study where only individual cytosines are examined? Relatedly, is D^3M applicable to testing for differentially methylated regions?
- p3: How much does sample size affect the power of these methods, particularly D^3M? The authors note in the discussion that a sample size > 100 is desirable, and the simulation and TCGA data analyses use n = O(100). Would it be possible to explore this further in the supplementary material, e.g., n = O(10) (representative of sample sizes being used in whole-genome bisulfite-sequencing and reduced representation bisulfite-sequencing experiments) and n = O(1000) (representative of a large epigenome-wide association study)?
- p4: The authors note that there are few sites identified by D^3M, Welch, and DiffVar. They rightfully note that this "[indicates] that the differential methylation sites based on the shapes include distinct information not relevant to Welch and DiffVar" but I wonder what sites are being uniquely detected by DiffVar and/or Welch and whether this indicates any limitations of D^3M.
- p4: In the analysis of the TCGA data, were probes with SNPs removed, e.g., using minfi::dropLociWithSnps()? These probes can otherwise introduce well-known biases and, in particular, may give rise to distributions of beta-values that look very much like case 8 but that are driven by SNPs rather than methylation levels. The authors rightfully note in the Discussion that these types of "measurement error" outliers should be removed prior to analysis. More generally, it would be ideal if an R script was provided that performed the analysis of the TCGA data.
- p4: The significance level of P < 0.01 is rather generous, especially given that there are ~400,000 hypothesis tests. Might these results be better presented with reference to a false discovery rate (e.g. 5% FDR), as is typical when assessing significance in genome-wide studies of methylation and gene expression?
- p5: I have considerable difficulty interpreting Figure 3. This figure could be greatly improved by an expanded and more detailed caption. My understanding is that the upper panel shows the data on the 145 GBM samples and the lower panel shows the data on the 530 LGG samples. In each panel I see a clustering by sampleID (the horizontal separation of "completely green" on the bottom third from the "green and red" on the top two-thirds). To me this means that in each of GBM and LGG there are two distinct clusters based on these top 1000 sites. However, this doesn't seem to be the result being conveyed by the authors. I am fairly certain that I have misinterpreted the authors' intentions, but I cannot understand this result without further description of the figure in the caption. Could this figure also be redone to avoid the red-green colour scheme, to assist those with colour blindness?
- p5: I was curious about the running time of D^3M, and so I played around with the scripts and data provided by the authors. On my 2015 MacBook, it took approximately 1.1 seconds to run D^3M on a single site using the data provided in sample.txt. That means it would take approximately 150 hours to run on a dataset with 500,000 sites when using the existing code. I believe that this type of calculation should be included in the paper because readers will want to know how long it would take to run D^3M on a typically sized dataset. There are obvious speed-ups obtainable by a parallel version of D^3M, since it is an embarrassingly parallel computation across loci, and this might be mentioned by the authors.
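The extrapolation above is simple arithmetic (my own rough timing, so the constants should be treated as indicative only):

```python
per_site_seconds = 1.1   # measured on a single site (sample.txt), 2015 MacBook
n_sites = 500_000        # a typically sized methylation microarray dataset
total_hours = per_site_seconds * n_sites / 3600
# Roughly 153 hours serially; with an embarrassingly parallel split over
# k cores, wall-clock time would scale roughly as total_hours / k.
```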

#### Minor

- General: The authors consistently refer to methylation "patterns" (e.g., in the paper's title). I think that to many people studying DNA methylation a methylation "pattern" refers to the string of methylation states along a single DNA fragment, e.g., like those shown in a methylation 'lollipop' plot. What is being analysed by D^3M, and the competing methods, are methylation "levels", such as beta-values, rather than methylation "patterns" - if the authors agree with my assessment then I would suggest a change in terminology to avoid confusion.
- p1: " which a methyl group is attached to a carbon cytosine (C) base". I think this wording is slightly imprecise; the methyl group is typically added to the 5-carbon __of a__ cytosine (C) base.
- p1: "The methylation of promoter region, in particular, silences cancer suppressor genes". This would benefit from an appropriate reference(s).
- p2: When beta-values are first mentioned it would be helpful to define these.
- p2: Equation (5) has a typesetting glitch at the end of the line.
- p2/p3: "Since the method for detection of methylation, which is based on distance, cannot distinguish the 'direction' of the hyper- or hypo-methlation." This sentence is incomplete and does not make sense.
- p3: The definition of a simulated "dataset" is unclear: is a dataset the data simulated for a single cytosine in the cases and controls?
- p3: The description of MMD as "[it] cannot control type I error at both of the levels of 5% and 1%, i.e. the significance level actually fails" I find to be confusing. The results presented in Table 2 show that MMD is identically 0.00 for 'case 1'. While I agree that this means that MMD does not achieve the nominal Type I error rate, I would describe MMD as being conservative rather than it not controlling the type I error rate (which is how I would describe the situation if the results for MMD were >> 0.05). Perhaps this is just a difference in terminology, but I was initially surprised/confused when trying to reconcile the main text with the results presented in Table 2.
- p4: In Table 2 the nominal size (alpha) is given as a decimal value in [0, 1], but the reported Type I error rates (case 1) and power (cases 2-8) are given as percentages in [0, 100]. This seems unnecessarily confusing.
- p4: The pcaMethods package provides several functions for imputation of missing data; it would be good to provide further details of what was used in the analysis of the TCGA data.
- Supplementary material: A minor LaTeX error; the references to equations (1) and (2) are rendered as "(??)".
- General: "MissMethyl" should be "missMethyl"