Wednesday, 29 September 2010

Automatic for the reaching

Hurrah, a post! I've been quite busy over the last few weeks, so I haven't had much time for reading or writing. However, I am attempting to repent of my slacking ways, and I came across this nice little paper investigating an aspect of the automatic pilot process.

Clearly we are not in conscious control of all of our actions at all times. Some reactions – like moving our hand away when we burn ourselves on a hot stove – are instinctive, reflexive. Reflexes themselves are a topic of hot debate in motor control these days, and there are people in my lab (including me) doing some interesting work on long-latency reflexes, which have access to more complex processing than the basic, short-latency spinal reflexes that do things such as getting the limb out of danger as fast as possible.

One nice example of this automaticity of behaviour is in what’s known as the double-step task. A participant reaches for a target, and during the reach the target ‘jumps’ to a different location. Without really considering what they are doing, the participant changes the reach to aim for the new target. There have been many studies that have explored various aspects of this automaticity, but the paper I’m discussing today asks the question: does stopping the automatic behaviour require more cognitive resources than letting it continue?

To investigate this question, the authors used a standard target-jumping double-step task. They also tried to use up participants' cognitive resources during the reach by giving them a simultaneous auditory task (listening to a string of numbers and identifying the pairs). Before each trial, the participant was told either to follow the target if it jumped (GO trials) or to ignore the jump and reach to the original target position (NOGO trials). The results are shown in the graph below (Figure 3 in the paper):

Movement corrections based on time from target jump

Here ‘dual task’ means that participants were performing the reach and the cognitively demanding auditory task at the same time. The graph shows the percentage of trials corrected vs. the time from the instant the target jump happens. Thus at 150 ms almost no corrections are seen, whereas by 300 ms a substantial proportion of trials have been corrected in both conditions. The unsurprising result is that there are many more corrections in the GO trials than in the NOGO trials overall – after all, participants were instructed to correct in the GO trials and not in the NOGO trials.

The interesting result is in the grey NOGO traces. There are substantially more corrections in the dual task than in the single task, implying that the extra cognitive load imposed in the dual task actually stopped participants from inhibiting their corrections – whereas in the black GO traces the cognitive load has no effect. This result seems to show that it takes more cognitive effort to stop the automatic correction than to let it continue.

On the face of it this isn’t too shocking a finding. After all, if you are reacting instinctively (or in a way that reflects a high level of training) to a situation, then stopping yourself from acting that way does seem like it should take some mental effort. I know it takes mental effort for me to stop doing something extremely habitual (to quote Terry Pratchett, the strongest force in the universe is force of habit). It’s nice to see a good paper that solidifies this principle.


McIntosh RD, Mulroue A, & Brockmole JR (2010). How automatic is the hand's automatic pilot? Evidence from dual-task studies. Experimental Brain Research, 206(3), 257-269. PMID: 20820760

Image copyright © Springer-Verlag 2010

Thursday, 2 September 2010

Walking sub-optimally: redux

I haven’t done this before, but I wanted to revisit last week’s post about sub-optimal walking in the light of new information. You see, we had a journal club about the paper yesterday, with some interesting discussion of the results – and of the conclusions drawn from those results.

If you recall, the central thesis of the paper is that we over-correct for deviations in our stride length and stride time that draw us away from the line of constant velocity (the Goal Equivalent Manifold). The evidence for this was the calculation of a parameter labelled α, which measures the persistence of a particular variable, i.e. how likely deviations in it are to persist rather than be corrected. This is where the trouble starts.
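For the curious: α here is a scaling exponent of the kind produced by detrended fluctuation analysis (DFA), where α ≈ 0.5 indicates uncorrelated noise, α > 0.5 indicates persistence (deviations linger), and α < 0.5 indicates anti-persistence (overcorrection). As a rough illustration only – this is my own minimal sketch of a DFA-style estimate, not the authors' code, and `dfa_alpha` is a name I made up – the calculation looks something like this:

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    """DFA-style scaling exponent: slope of log F(n) vs. log n.

    Roughly, alpha ~ 0.5 for white noise, > 0.5 for persistent
    series, < 0.5 for anti-persistent (over-corrected) series.
    """
    y = np.cumsum(x - np.mean(x))            # integrated profile
    fluct = []
    for n in scales:
        n_win = len(y) // n
        mse = 0.0
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            # remove the local linear trend within each window
            coef = np.polyfit(t, seg, 1)
            mse += np.mean((seg - np.polyval(coef, t)) ** 2)
        fluct.append(np.sqrt(mse / n_win))   # RMS fluctuation F(n)
    # alpha is the slope of the log-log fit
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)            # uncorrelated noise
anti = np.diff(rng.standard_normal(4097))    # strongly over-corrected
print(dfa_alpha(white))       # close to 0.5
print(dfa_alpha(anti) < 0.5)  # True: anti-persistent
```

The point of the sketch is just that α is an estimated slope on a log-log plot, which is exactly the sort of quantity that can be computed mechanically whether or not the underlying scaling assumption actually holds.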

Unknown to me at the time I wrote the post, the calculation of α is only valid under a certain set of constraints. By analogy, imagine that you have a matrix you wish to invert. (For those who don’t know about matrices: not all matrices can be inverted.) So you write a piece of code that inverts the matrix, but in such a way that it never crashes and always returns an answer. Now, if you feed the program a matrix that is non-invertible, it will still give you an answer – but that answer doesn’t mean anything. Unfortunately, the calculation of α in this paper has much the same problem.
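The matrix analogy is easy to demonstrate concretely. This is my own toy example, not anything from the paper: NumPy's pseudo-inverse plays the role of the "never crashes" routine, happily returning numbers for a singular matrix even though no true inverse exists.

```python
import numpy as np

# A singular (non-invertible) matrix: the second row is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# A strict inverse does not exist, and inv() rightly refuses.
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError:
    print("inv() refuses: matrix is singular")

# The pseudo-inverse "never crashes" and always returns an answer...
B = np.linalg.pinv(A)

# ...but A @ B is not the identity, so B is not a true inverse.
print(np.allclose(A @ B, np.eye(2)))  # False
```

The pseudo-inverse is perfectly meaningful in its own right, of course – the trouble only arises if you treat its output as a true inverse without checking that one exists, which is the analogue of reading α off data it isn't defined for.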

What this means is that the evidence for the authors’ claim – that overcorrection is the best way to model human walking variability – is suspect. It’s especially interesting when you look at the figure used to argue that participants do not use a simpler strategy for treadmill walking: absolute position control (i.e. trying to stay at the same spot on the treadmill). This figure (Figure 4C in the paper) shows the calculation of α for position on the treadmill:

Persistence for position on treadmill

The value of α here is greater than 1, rising to about 1.5, so the authors argue that there is high persistence and therefore participants do not correct for absolute treadmill position. But α is not defined for data like this – it shouldn’t even be able to exceed 1! It looks like the problem I outlined above: you get a number out of the program, but the number doesn’t actually mean anything.

So not only might the central claim be undermined, but the contention that we don’t control absolute treadmill position is also questionable. Something to be careful of when looking at papers is always to make sure the methods make sense – I assumed that these methods were adequate for the task they were used for, and apparently so did the reviewers! It is of course possible that the whole thing is fine, but as my colleague Frederic Crevecoeur points out, they could have done a few more tests that demonstrated the validity of these calculations, which would make these points moot.

Regardless of whether the central claim is correct, it is admirable that this is the first paper to really attempt to use stochastic optimal control models to look at walking. Apparently they have more in the works; I look forward to seeing it!


Dingwell JB, John J, & Cusumano JP (2010). Do humans optimally exploit redundancy to control step variability in walking? PLoS Computational Biology, 6(7). PMID: 20657664

Image copyright © 2010 Dingwell, John & Cusumano