Review for "What the Success of Brain Imaging Implies about the Neural Code"

Completed on 25 Aug 2016 by Russell Poldrack. Sourced from http://biorxiv.org/content/early/2016/08/23/071076.



Comments to author

This is a really interesting and thoughtful piece that will be important reading for anyone in the field of cognitive neuroscience. I have a few comments that I hope will help make it clearer and more accurate.

- "BOLD response may spillover 3 to 5 millimetres away from neural activity because the brain supplies blood to adjacent areas — it “water[s] the entire garden for the sake of one thirsty flower”" - This is mixing together a couple of issues. It's true that the hemodynamic response is broader than the neuronal activation, but not by 3-5 mm. The “flower-watering” effect is probably on the order of hundreds of microns. The substantial spread in standard (i.e. 3T gradient-echo BOLD) fMRI is due primarily to the fact that this imaging technique has substantial contributions from venous signals that can spread fairly far from the neuronal activation.

- "Extraneous to the actual imaging itself, most statical models require some spatial smoothing in addition to the smoothing that is intrinsic to fMRI data acquisition.” - misspelling of statistical. also, I would disagree with this claim - it is increasingly common to analyze data without any smoothing, especially when one is not relying upon Gaussian random field theory.

- "Neural similarity is not recoverable by fMRI under a burstiness coding scheme.” - this seems to rely on the strong assumption that burstiness is just like regular firing, only with a different temporal organization. this is far outside my knowledge base, but I can imagine that differences in the synaptic physiology of bursting vs. constant firing might be evident from BOLD. Also, see this regarding synchrony: http://www.mitpressjournals.or...

- The general conclusions seem to rest heavily on the specific deep networks used in this analysis, which are trained on the categorization problem. Thus, it's not surprising that the high-level representations show less overlap between categories: the training has worked! However, it's not clear to me how well categorization training approximates what mammals learn as they come to perceive the world. It would be useful to have additional discussion of how this particular issue affects the generality of the conclusions.
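For concreteness, here is one way the layer-wise between-category overlap could be quantified (a hypothetical sketch; `acts_early`, `acts_late`, and `labels` are placeholder names, not objects from the paper):

```python
import numpy as np

def between_category_overlap(acts, labels):
    """Mean similarity of stimulus pairs drawn from different categories.

    acts:   (n_stimuli, n_features) activations from one network layer.
    labels: (n_stimuli,) category label for each stimulus.
    """
    sim = np.corrcoef(acts)                      # stimulus-by-stimulus similarity
    cross = labels[:, None] != labels[None, :]   # mask of cross-category pairs
    return sim[cross].mean()

# Hypothetical usage, where acts_early / acts_late would hold activations
# from an early and a late layer of the trained network:
#   overlap_early = between_category_overlap(acts_early, labels)
#   overlap_late  = between_category_overlap(acts_late, labels)
# Categorization training should push overlap_late below overlap_early,
# which is exactly why the reported drop in overlap is unsurprising.
```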