Preprint reviews by Krzysztof Jacek Gorgolewski

Beyond Consensus: Embracing Heterogeneity in Neuroimaging Meta-Analysis

Gia H. Ngo, Simon B. Eickhoff, Peter T. Fox, R. Nathan Spreng, B. T. Thomas Yeo

Review posted on 4th July 2017

In the manuscript “Beyond Consensus: Embracing Heterogeneity in Neuroimaging Meta-Analysis”, Ngo et al. apply a previously published variant of the author-topic model to two new sets of labeled data: peak coordinates aggregated from three previously published meta-analyses somehow related to “self-generated thoughts”, and a subset of peak coordinates from studies overlapping with the inferior frontal gyrus (IFG).


Even though I found the manuscript interesting and the presented application intriguing, the overall feeling it left me with was one of an “identity crisis”.
On the one hand, the reader might think the paper is proposing a new method. This would be suggested by the general nature of the title and the fact that the two example applications have very little to do with each other (cognitively or neuroanatomically). However, the authors clearly state that all of the methods used in the paper (the vanilla author-topic model, the coordinate-based author-topic adaptation, and finally the variational Bayes estimation method) were already presented in previously published papers. What is more, the paper lacks the parts usually present in a methods paper: null simulations/permutations, out-of-sample prediction, comparison with existing methods, etc.

On the other hand, the paper might appear to be reporting new cognitive findings. This perspective is also murky. There is no clear statement of hypotheses, and the combination of studying “self-generated thoughts” and the IFG is not justified in the manuscript. Furthermore, details such as the inclusion criteria for the “self-generated thought” analysis are not included.

To add to the confusion, the manuscript includes 11 pages of mathematical derivations that the authors themselves suggest should have been supplementary materials for their PRNI paper.

I propose two directions to improve the manuscript:

- Route 1: Turn the paper into a full-fledged methods paper. This would require investigating how the model performs when presented with realistic noise (null simulations or permutations), looking at out-of-sample predictions, and evaluating the amount of variance explained. Other ideas include a comparison with factor decomposition methods (PCA, ICA) that do not take labels into account, as well as comparing the maps obtained from a “meta-analysis” subset of coordinates to maps obtained from a model using the full BrainMap database, as in the previous paper. For this approach, it might be beneficial to pick a brain region that has been previously evaluated using similar methods (for example the insula, comparing with this paper: https://academic.oup.com/cercor/article/23/3/739/317372/Decoding-the-Role-of-the-Insula-in-Human-Cognition). This would allow one to contrast and compare the different approaches and highlight the advantages of author-topic mapping.

- Route 2: Focus on the cognitive findings. This would require splitting the two analyses into two manuscripts and focusing more on the cognitive implications of the findings. If hypotheses about the resulting maps exist, they should be clearly stated; if not, the exploratory nature of the work should be noted. Interpretation (reverse inference) of the output maps could be improved by using the Neurosynth cognitive decoder. The inclusion criteria for the meta-analysis need to be elucidated in more detail.

Other comments:

- I have performed a very simple reanalysis of the data used for the “self-generated thought” meta-analysis. Taking the average activation maps from the 7 categories (navigation, autobiographical memory, ToM story, ToM non-story, narrative comprehension, and task deactivation) and running ICA on them gave me two components that were spatially very similar to the ones presented in the paper (one for navigation and one for everything else – see https://gist.github.com/chrisfilo/0722b520bc56da8c55aa6bba22eb85aa, and the sketch after this list). This raises the question of whether the more complex author-topic model was necessary. What advantages does it provide? Is it more interpretable? More “accurate”? These issues should be discussed in the paper. This insight into the manuscript was only possible because the authors decided to share the data (at least for half of their analyses), for which they should be applauded.

- When describing the author-topic model, I would recommend putting “authors” in quotation marks when referring to an entity in the original model rather than to researchers authoring a paper. This should minimize confusion.

- Selection criteria for the meta-analyses and the individual studies have not been clearly defined for the “self-generated thought” section. For example, why were studies labelled as “navigation” included? This needs to be justified, since the selection of studies going into the model can greatly influence the end result.

- Not all studies used in the two example meta-analyses were cited in the paper. Citations are an important way of assigning academic credit – all of the studies used in the paper should be appropriately credited. It is unusual for a paper to cite that many studies, but the work you are doing is cutting edge and requires unusual means to ensure appropriate credit.

- Only the left inferior frontal gyrus is investigated – this should be a) justified and b) made explicit each time the IFG is mentioned in the abstract, methods, and discussion.

- Please add L/R labels to all brain figures.

- “Reading” is listed twice in Table S1.

- The fact that performing the meta-analytic connectivity analysis requires a collaborative agreement with the BrainMap team (and thus the inclusion of a member of the BrainMap project as a collaborator) should be explicitly mentioned in the discussion. Unfortunately, the limited accessibility of this dataset is a limitation of the presented method. Alternatively, the authors might explore using other, more open labelled coordinate datasets such as the Neurosynth dataset.

- Please add a more thorough description of what code and data are available on GitHub.

- Sharing of the estimated spatial component maps: to improve the transparency and reusability of the results presented in your paper, please share the unthresholded spatial maps of the estimated components on ANIMA, BALSA, or NeuroVault (the last will make comparing them to other spatial maps, such as Smith 2009, very easy).
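
A minimal sketch of the ICA reanalysis referenced above (my own illustration rather than the exact code from the gist; it assumes the per-category average activation maps are available as NIfTI files, and the file names are hypothetical placeholders):

    import numpy as np
    import nibabel as nib
    from sklearn.decomposition import FastICA

    # Hypothetical file names: one average activation map per category
    categories = ["navigation", "autobiographical_memory", "tom_story",
                  "tom_nonstory", "narrative_comprehension", "task_deactivation"]
    imgs = [nib.load("avg_%s.nii.gz" % c) for c in categories]

    # Stack as voxels x categories so the independent sources are spatial maps
    data = np.column_stack([img.get_fdata().ravel() for img in imgs])

    ica = FastICA(n_components=2, random_state=0)
    maps = ica.fit_transform(data)  # voxels x components: the spatial maps
    loadings = ica.mixing_          # categories x components

    # Save the spatial components back to NIfTI for visual comparison
    for i in range(maps.shape[1]):
        comp = maps[:, i].reshape(imgs[0].shape)
        nib.save(nib.Nifti1Image(comp, imgs[0].affine), "ica_component_%d.nii.gz" % i)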

Minor comments (aka pet peeves):

- Visualizations use a cluster size and a cluster-forming threshold. This might (or might not) be obscuring the true pattern. Presenting unthresholded patterns would be more accurate.

- The use of the jet colormap is unfortunate. It imposes unnatural contrast between some ranges of values, thus introducing another level of perceptual thresholding. Using a luminance-calibrated colormap such as parula would improve the interpretability of your figures.

I applaud the authors for sharing code and data. The only gripe I have is that I wish it had been done before the manuscript was submitted for review. Just as one would never submit a manuscript with a missing figure, we should try not to submit papers with placeholder links to code and data.

Finally, I was not able to fully evaluate the mathematical derivations in the appendix. I hope another volunteer reviewer will be able to verify their accuracy.

I am looking forward to reviewing a revised version of the manuscript.

Chris Gorgolewski



Advances in studying brain morphology: The benefits of open-access data

Christopher R Madan

Review posted on 11th June 2017

This short commentary introduces the reader to recent advancements in publicly available neuroimaging datasets with a special focus on data useful for evaluating anatomical features. It includes a brief historical perspective and provides an overview of the advantages of using publicly shared data.


The paper provides some unique and important points – for example, the fact that pooling data from multiple sources gives researchers an opportunity to access previously inaccessible populations, thus extending conclusions beyond the typically studied groups of participants. Even though all of the statements in the manuscript are, to my knowledge, factually accurate, I do find it a bit one-sided. The advantages of using shared data are presented extensively, but the disadvantages are almost completely omitted. It would be worth discussing aspects such as: 1) scientific questions are limited by what data and metadata are available; 2) when combining data from multiple sites, special care needs to be taken to account for scanner/sequence effects, etc. I feel that such an addition would make this commentary more balanced.



The GridCAT: A toolbox for automated analysis of human grid cell codes in fMRI

Matthias Stangl, Jonathan Shine and Thomas Wolbers

Review posted on 7th February 2017

GridCAT is a much-appreciated attempt to provide computational tools for modeling grid-like patterns in fMRI data. I am by no means an expert in grid cells, but I can provide advice and recommendations with regard to brain imaging software:


- Please mention the license the software is distributed under.

- Please mention the license the data is distributed under. To maximize the impact of this example dataset (fostering future comparisons and benchmarks), I would recommend distributing it under a public domain license (CC0 or PDDL) and putting it on openfmri.org.

- I was, unfortunately, unable to run your software because I do not possess a valid MATLAB license. This costly dependency will most likely be the biggest limitation of your tool. There are two ways to deal with this problem: make it compatible with Octave (a free MATLAB alternative) or provide a standalone MATLAB Runtime executable (see https://www.mathworks.com/products/compiler/mcr.html).

- I would encourage the authors to add support for input event text files formatted according to the Brain Imaging Data Structure (BIDS) standard (see http://bids.neuroimaging.io/bids_spec1.0.0.pdf, section 8.5, and Gorgolewski et al. 2016); a minimal example events file is shown after this list.

- Please describe in the paper how other developers can contribute to your toolbox. I recommend putting it on GitHub and using the excellent Pull Request functionality.

- Please describe in the paper how users can report errors and feature requests. I again would recommend using GitHub or neurostars.org.

- Is there a programmatic API built into your toolbox? In other words, a set of functions that would allow advanced users to script their analyses. If so, please describe it and provide an example.

- Please describe how you approached testing when writing the code. Are there any automated tests (unit, smoke, or integration tests)? Are you using a continuous integration service to monitor the integrity of your code?

- For the GLM1 modeling step: is it possible to provide nuisance regressors (for example, motion)? If so, are you reporting information about the collinearity of the fitted model? (See the variance inflation factor sketch after this list.)

- For the ROI feature – it would be useful to show users the location of their ROI on top of the BOLD data. This would provide a sanity check that can help avoid using masks that are not properly coregistered.

- It would be beneficial for the paper to include some figures of the GUI from the manual and perhaps list the plethora of analysis options available at the different steps in a table.

- Please add error bars to figure 5.
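
Regarding the BIDS events suggestion above: a BIDS events file is a plain tab-separated text file with mandatory onset and duration columns (in seconds from the start of the run) plus optional columns such as trial_type. A minimal, hypothetical example (the trial_type values are made up for illustration):

    onset   duration   trial_type
    10.0    2.5        grid_trial
    16.0    2.5        control_trial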
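
Regarding the collinearity question above: one simple diagnostic that could be reported is the variance inflation factor (VIF) of each regressor. A minimal sketch, assuming the design matrix is available as a numpy array with one column per regressor (excluding the constant term):

    import numpy as np

    def vif(X):
        # Variance inflation factor for each column of design matrix X;
        # VIF_i is the i-th diagonal element of the inverse correlation matrix.
        X = (X - X.mean(axis=0)) / X.std(axis=0)
        corr = np.corrcoef(X, rowvar=False)
        return np.diag(np.linalg.inv(corr))

Values much larger than about 5-10 are conventionally taken to indicate problematic collinearity.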

Chris Gorgolewski



FAST Adaptive Smoothing and Thresholding for Improved Activation Detection in Low-Signal fMRI

Israel Almodóvar-Rivera and Ranjan Maitra

Review posted on 6th February 2017

The authors present an appealing methodological improvement on the Adaptive Segmentation (AS) method. The main improvement is alleviating the need to set input parameters (the bandwidth sequence); those parameters are instead estimated from the data in an optimal way.


Even though the paper has the potential to be a meaningful contribution to the field, it lacks a thorough comparison with the state of the art. The following steps to improve the situation should be considered:

- The selection of the patterns used in the simulation seems to be motivated by the nature of fMRI data, which is good, but at the same time it does not highlight the specific issues that FAST is solving. Have a look at the simulations included in Polzehl et al. 2010 showing how smoothing across neighboring positive and negative activation areas can cancel the effect out. It would be beneficial to construct simulations that highlight the specific situations in which FAST overcomes the limitations of AS.

- Neuroimaging is strongly leaning towards permutation-based testing methods due to their reduced number of assumptions. I would recommend adding cluster- and voxel-based permutation inference to your analysis (see the sign-flipping sketch after this list). Please mind that permutation-based testing is not the same as finding cluster cut-offs via simulations.

- I would also recommend adding threshold-free cluster enhancement (TFCE; Smith and Nichols 2009) to the set of compared methods. It is also a multiscale method that has been successfully used in many studies, and it works best in combination with permutation tests (see the TFCE sketch after this list).

- It would be good to assess the rate of false positive findings in your comparison. This could be done by applying a random boxcar model to resting-state data and evaluating how many spurious activations you find (see Eklund et al. 2012, and the sketch after this list).

- Speaking of false positive and false negative voxels: it seems that the evaluation of your method against the state of the art presented in Figure 4 is very sensitive to the threshold (alpha level) chosen for each method. I would suspect that AS and CT would perform better if a different alpha level were chosen. To measure the ability to detect signal more accurately, I would recommend varying the alpha level to create a receiver operating characteristic (ROC) curve (based on false positive and false negative voxels rather than Jaccard overlap) and calculating the area under it (see the ROC sketch after this list).
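
Regarding the permutation-based inference suggestion above, here is a minimal sketch of voxel-wise, max-statistic permutation inference via sign flipping for a one-sample design (random numbers stand in for real subject-level contrast data):

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.standard_normal((20, 1000))  # stand-in: subjects x voxels contrast values
    n_subj = data.shape[0]

    def tstat(d):
        # One-sample t-statistic per voxel
        return d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n_subj))

    observed = tstat(data)
    n_perm = 5000
    max_t = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))  # flip each subject's sign
        max_t[i] = tstat(data * signs).max()               # max statistic controls FWE

    # FWE-corrected p-value for each voxel
    p_fwe = (1 + (max_t[:, None] >= observed[None, :]).sum(axis=0)) / (n_perm + 1)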
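
Regarding TFCE: it replaces each voxel's value with the sum over thresholds h of e(h)^E * h^H * dh, where e(h) is the extent of the cluster containing the voxel at threshold h, with defaults E = 0.5 and H = 2 (Smith and Nichols 2009). A simplified single-volume sketch of the discrete form (for illustration only, not FSL's implementation):

    import numpy as np
    from scipy import ndimage

    def tfce(stat_map, dh=0.1, E=0.5, H=2.0):
        # Discrete TFCE for a 3D statistic map (positive values only)
        out = np.zeros(stat_map.shape)
        for h in np.arange(dh, stat_map.max(), dh):
            supra = stat_map >= h                      # supra-threshold voxels at height h
            labels, n_clusters = ndimage.label(supra)  # connected clusters
            sizes = ndimage.sum(supra, labels, range(1, n_clusters + 1))
            extent = np.zeros(stat_map.shape)
            extent[supra] = sizes[labels[supra] - 1]   # cluster extent e(h) per voxel
            out += (extent ** E) * (h ** H) * dh
        return out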
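
Regarding the false-positive evaluation: a minimal sketch of the random-boxcar approach (white noise stands in below for real resting-state data; on real data, temporal autocorrelation is exactly what tends to inflate the rate above the nominal level, and for brevity there is no HRF convolution):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_tr, n_vox = 200, 5000
    resting = rng.standard_normal((n_tr, n_vox))  # stand-in for resting-state data

    # Random boxcar: blocks of 10 TRs on/off with a random phase
    block = 10
    phase = rng.integers(0, 2 * block)
    boxcar = (((np.arange(n_tr) + phase) // block) % 2).astype(float)

    # Per-voxel GLM with the boxcar (mean-centered) and an intercept
    X = np.column_stack([boxcar - boxcar.mean(), np.ones(n_tr)])
    beta = np.linalg.lstsq(X, resting, rcond=None)[0]
    resid = resting - X @ beta
    dof = n_tr - X.shape[1]
    sigma2 = (resid ** 2).sum(axis=0) / dof
    se = np.sqrt(sigma2 / (X[:, 0] ** 2).sum())
    t = beta[0] / se
    p = 2 * stats.t.sf(np.abs(t), dof)
    print("Voxel-wise false positive rate at p < 0.001:", (p < 0.001).mean())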
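
Finally, regarding the ROC suggestion: a minimal voxel-level sketch, assuming a binary ground-truth activation mask from the simulations and a continuous statistic map for each compared method (random stand-ins below):

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    rng = np.random.default_rng(0)
    truth = rng.random(10000) < 0.1                  # stand-in ground-truth mask
    stat = truth * 1.5 + rng.standard_normal(10000)  # stand-in statistic map

    fpr, tpr, thresholds = roc_curve(truth, stat)    # sweeps the alpha level
    print("Area under the ROC curve:", auc(fpr, tpr))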

Minor:
- In the figures you use the TP11 acronym to denote the adaptive segmentation algorithm, but in the rest of the paper you use AS. It would be good to make this consistent.

Chris Gorgolewski
