Preprint reviews by Thomas Wallis

Voluntary control of illusory contour formation

William Harrison, Reuben Rideaux

Review posted on 15th December 2017

This paper presents an investigation of the relationship between voluntary attention and illusory contour perception using a novel stimulus and the classification image technique. The paper is interesting and well-written; I have some minor comments.

- I’m not convinced by the claim at line 176. The effect itself is small, so showing that a weak effect disappears is perhaps unsurprising. Do the Bayes factors at least provide positive evidence for no difference, rather than merely failing to show one?
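To make the suggestion concrete, here is a minimal sketch of the kind of analysis I mean, using pingouin's paired t-test, which reports a JZS Bayes factor (BF10). The data and variable names are placeholders, not the authors':

```python
# Minimal sketch: quantify evidence for the null with a JZS Bayes factor.
# Requires: pip install pingouin. All data below are placeholders.
import numpy as np
import pingouin as pg

rng = np.random.default_rng(0)
# Hypothetical per-observer contour-strength estimates in two conditions.
attended = rng.normal(0.5, 0.1, size=8)
unattended = rng.normal(0.5, 0.1, size=8)

res = pg.ttest(attended, unattended, paired=True)
bf10 = float(res["BF10"].iloc[0])
print(f"BF10 = {bf10:.3f}; BF01 = {1 / bf10:.3f}")
# BF01 > 3 would indicate moderate evidence for no difference.
```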

- I think the red line showing the illusory edge row is confusing: I initially mistook it for the region of pixels actually being tested, which of course made no sense. A marker placed below the edge, spanning only the illusory portion, would be clearer.

- Line 197: The authors hypothesise that the illusory star form constrains voluntary interpolation of the illusory triangle edge. They could presumably test this by measuring classification images after rotating the non-target pacmen by 90 degrees (breaking the star while largely preserving local contrast). This condition would also be a useful baseline for the plots in Figure 2b: how strong could we expect the middle of the contour to be in the absence of the illusory star? The authors could then make statements like “the presence of the competing illusory form reduces the strength of the illusory contour three-fold”. Are there other data that could speak to this, perhaps Jason Gold’s work?
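For concreteness, here is a minimal sketch of how the suggested control inducers could be generated; all stimulus parameters (image size, mouth width) are hypothetical placeholders rather than the authors' values:

```python
# Sketch of the suggested control: redraw the non-target "pacman" inducers
# with their mouths rotated by 90 deg, which should break the illusory star
# while leaving local contrast largely unchanged.
import numpy as np

def pacman(size=64, mouth_centre=0.0, mouth_width=np.pi / 2):
    """Black disk on grey with a wedge (mouth) removed, as a float image."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    # Angular distance from the mouth centre, wrapped to [-pi, pi].
    d = np.angle(np.exp(1j * (theta - mouth_centre)))
    disk = r <= 0.9
    mouth = np.abs(d) <= mouth_width / 2
    img = np.full((size, size), 0.5)   # grey background
    img[disk & ~mouth] = 0.0           # black inducer with mouth cut out
    return img

original = pacman(mouth_centre=0.0)
control = pacman(mouth_centre=np.pi / 2)  # mouth rotated by 90 deg
```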

- What exactly does the SVM fitting add? The images are nice for explicitly showing two or three potential hypotheses about how to do the task, but these are never tested directly against the data. Rather, it is left to the reader’s impression of the classification images and their correspondence to the three models (which is admittedly much more than most classification image studies do). While it is nice to have those hypothesis images generated from interpretable models, I wonder what value this adds beyond sketching the hypotheses by hand. Can the authors think of a way to test these hypotheses against the data more formally?
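One possibility, sketched below under obvious assumptions (hypothetical array shapes and names; this is not the authors' pipeline): treat each model-derived image as a pixel template, bootstrap the correlation between each template and the empirical classification image over trials, and then compare the resulting distributions.

```python
# Sketch of a formal test: which hypothesised template best predicts the
# observed classification image? Arrays and shapes here are hypothetical.
import numpy as np

def classification_image(noise, resp):
    """Mean noise on one response class minus the other."""
    return noise[resp == 1].mean(axis=0) - noise[resp == 0].mean(axis=0)

def template_correlations(noise, resp, templates, n_boot=1000, seed=0):
    """Bootstrap (over trials) correlations between CI and each template.

    Assumes both response classes appear in each resample.
    """
    rng = np.random.default_rng(seed)
    n_trials = len(resp)
    corrs = np.empty((n_boot, len(templates)))
    for b in range(n_boot):
        idx = rng.integers(0, n_trials, n_trials)   # resample trials
        ci = classification_image(noise[idx], resp[idx]).ravel()
        for k, tmpl in enumerate(templates):
            corrs[b, k] = np.corrcoef(ci, tmpl.ravel())[0, 1]
    return corrs
```

Percentile intervals on pairwise differences between the columns of `corrs` would then show whether one hypothesised template reliably wins.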

A functioning model of human time perception

Warrick Roseboom, Zafeirios Fountas, Kyriacos Nikiforou, David Bhowmik, Murray Shanahan, Anil K. Seth

Review posted on 07th August 2017

Interesting idea (disclaimer: I don't work on time perception). I was hoping the authors could clarify a few things about the model.

1. How were the parameters in Table 1 (T_max, T_min, and tau) obtained? Were they hand-tuned, or selected via some form of cross-validation?

2. How robust are the model's decisions to the particular choice of parameters in Table 1?

3. What happens if the attention network is turned off (i.e., the SVR is trained on the raw Euclidean distances)?

4. Have the authors tested the choice of these parameters and the training of the SVR using videos separate from those shown to participants? That is, perform 10-fold cross-validation over a different set of videos to select hyperparameters before testing on the validation set (the videos shown to humans).
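A minimal sketch of the split I have in mind, with placeholder features, durations, and video counts (the real model's features would replace these):

```python
# Select SVR hyperparameters by 10-fold cross-validation on a pool of
# training videos, then evaluate once on held-out videos (those the
# participants actually saw). All data below are placeholders.
import numpy as np
from sklearn.model_selection import GridSearchCV, GroupKFold
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical per-segment features and true durations in seconds.
X_train, y_train = rng.normal(size=(200, 50)), rng.uniform(1, 60, 200)
train_video_id = np.repeat(np.arange(20), 10)   # 20 training videos
X_test, y_test = rng.normal(size=(40, 50)), rng.uniform(1, 60, 40)

grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]},
    cv=GroupKFold(n_splits=10),   # folds never split a video across sets
    scoring="neg_mean_absolute_error",
)
grid.fit(X_train, y_train, groups=train_video_id)
print(grid.best_params_, grid.score(X_test, y_test))
```

The same outer split could also cover the Table 1 parameters (T_max, T_min, tau), treating them as hyperparameters alongside those of the SVR.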

5. It might be worthwhile to show that the model is robust to training on durations different from those shown to participants (e.g. 0.5, 1.2, and 1.7 s instead of 1, 1.5, and 2 s), to check that the RBF has not overfit to those time points.

Minor points:

- The reference to Kriegeskorte appears twice [5 and 16].

- Label the units in Figure 3A–E in seconds rather than power units. Tick marks could be more visible.
- Label the y-axis of Figure 3E.

Thanks!