Wednesday, 29 September 2010

Automatic for the reaching

Hurrah, a post! I've been quite busy over the last few weeks so I haven't had much time for reading or writing. However, I am attempting to repent of my slacking ways and I came across this nice little paper investigating an aspect of the automatic pilot process.

Clearly we are not in conscious control of all of our actions at all times. Some reactions – like moving our hand away when we burn ourselves on a hot stove – are instinctive, reflexive. Reflexes themselves are actually a topic of hot debate in motor control these days, and there are people in my lab (including me) doing some interesting work on long-latency reflexes, which have access to some more complex processing power than the more basic, short-latency spinal reflexes that do things such as get the limb out of danger as fast as possible.

One nice example of this automaticity of behaviour is in what’s known as the double-step task. A participant reaches for a target, and during the reach the target ‘jumps’ to a different location. Without really considering what they are doing, the participant changes the reach to aim for the new target. There have been many studies that have explored various aspects of this automaticity, but the paper I’m discussing today asks the question: does stopping the automatic behaviour require more cognitive resources than letting it continue?

To investigate this question, the authors used a standard target jumping double-step task. They also tried to get participants to use up cognitive resources during the reach by giving them an auditory task to do (listen to a string of numbers and identify the pairs) at the same time. Before each trial, the participant was informed as to whether they should follow the target when it jumped (GO trials) or not to follow the target and instead reach to the original position (NOGO). The results are shown in the graph below (Figure 3 in the paper):

Movement corrections based on time from target jump

Here ‘dual task’ means that the participants were performing the reach and the cognitively demanding auditory task at the same time. The graph shows the percent of trials corrected vs. the time from the instant the target jump happens. Thus at 150 ms almost no corrections are seen, whereas after 300 ms a substantial proportion of trials have been corrected for in both conditions. The unsurprising result here is that in the GO trials there are many more corrections than in the NOGO trials overall – after all, participants were instructed to correct in the GO trials and not in the NOGO trials.

The interesting result is in the grey NOGO traces. There are substantially more corrections in the dual task than in the single task, implying that the extra cognitive load imposed in the dual task actually stopped participants from inhibiting their corrections – whereas in the black GO traces the cognitive load has no effect. This suggests that it really does take more cognitive effort to stop the automatic correction than to let it continue.

On the face of it this isn’t too shocking a finding. After all, if you are reacting instinctively (or in a way that reflects a high level of training) to a situation, then stopping yourself from acting that way does seem like it should take some mental effort. I know it takes mental effort for me to stop doing something extremely habitual (to quote Terry Pratchett, the strongest force in the universe is force of habit). It’s nice to see a good paper that solidifies this principle.


McIntosh RD, Mulroue A, & Brockmole JR (2010). How automatic is the hand's automatic pilot? Evidence from dual-task studies. Experimental Brain Research, 206 (3), 257-69. PMID: 20820760

Image copyright © Springer-Verlag 2010

Thursday, 2 September 2010

Walking sub-optimally: redux

I haven’t done this before but I wanted to revisit the post I made last week about sub-optimal walking in the light of new information. You see, we had a journal club about the paper yesterday in which interesting discussions were had about the paper and the results – and the conclusions drawn from those results.

If you recall, the central thesis of the paper is that we over-correct for deviations in our stride length and stride time that draw us away from the line of constant velocity (the Goal Equivalent Manifold). The evidence for this was a calculation of a parameter labeled α that shows the persistence of a particular variable, i.e. how likely it is to be corrected. This is where the trouble starts.

Unknown to me at the time I wrote the post, the calculation of α only works given a certain set of constraints. For example, imagine that you have a matrix that you wish to invert. (For those who don’t know about matrices: not all matrices can be inverted.) So you write a piece of code that inverts the matrix, but in such a way that it never crashes and always returns an answer. Now, if you feed the program a matrix that is non-invertible, it will give you an answer – but that answer doesn’t mean anything. And unfortunately, the calculation of α in this paper has much the same problem.
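To make the analogy concrete, here’s a little numpy sketch (my own illustration, nothing from the paper): a “never-crashes” solver built on the pseudo-inverse happily returns an answer for a singular matrix, but that answer doesn’t actually solve the system.

```python
import numpy as np

# An invertible matrix: solving A @ x = b recovers a meaningful x.
A = np.array([[2.0, 0.0],
              [0.0, 4.0]])
b = np.array([2.0, 8.0])
x_good = np.linalg.pinv(A) @ b  # pinv never crashes, even on singular input
# Here A @ x_good reproduces b exactly, so x_good is a real solution.

# A singular (non-invertible) matrix: pinv still returns an answer...
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second row is twice the first
x_bad = np.linalg.pinv(S) @ b
# ...but it's only a least-squares compromise: S @ x_bad does not equal b,
# so treating x_bad as "the" solution would be meaningless.

print(np.allclose(A @ x_good, b))  # True
print(np.allclose(S @ x_bad, b))   # False
```

The program cheerfully prints a vector in both cases – you have to check the preconditions yourself to know whether the number means anything.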

What this means is that the evidence for the claim the authors are making – that overcorrection is the best way to model human walking variability – is suspect. It’s especially interesting when you look at one of the figures, which is used to rule out a simpler strategy for treadmill walking: absolute position control (i.e. trying to stay at the same spot on the treadmill). This figure (Figure 4C in the paper) shows the calculation of α for the position on the treadmill:

Persistence for position on treadmill

The value of α here is greater than 1 and goes up to 1.5, so the authors argue that this means there is a high persistence and therefore participants do not correct for absolute treadmill position. But α is undefined over data like this, and it doesn’t go higher than 1! It looks like the problem I outlined above – you get a number out of the program, but the number doesn’t actually mean anything.

So not only might the central claim be undermined, but the contention that we don’t control absolute treadmill position is also questionable. Something to be careful of when looking at papers is always to make sure the methods make sense – I assumed that these methods were adequate for the task they were used for, and apparently so did the reviewers! It is of course possible that the whole thing is fine, but as my colleague Frederic Crevecoeur points out, they could have done a few more tests that demonstrated the validity of these calculations, which would make these points moot.

Regardless of whether the central claim is correct, it is admirable that this is the first paper to really attempt to use stochastic optimal control models to look at walking. Apparently they have more in the works; I look forward to seeing it!


Dingwell JB, John J, & Cusumano JP (2010). Do humans optimally exploit redundancy to control step variability in walking? PLoS Computational Biology, 6 (7). PMID: 20657664

Image copyright © 2010 Dingwell, John & Cusumano

Monday, 30 August 2010

A stimulating time

While I don’t usually post about clinical work, sometimes a paper just leaps out at me and makes me go, “hmm, that’s interesting!” So it was with this study, which explores the medium-term effects (over five years) of chronic deep-brain stimulation in Parkinson’s disease (PD). I’m by no means a clinician or an expert on PD so I’m very keen to make sure the information in here is correct. Please leave me useful comments if it isn’t!

Parkinson’s disease is a neurodegenerative disease affecting the motor system. It’s characterised by several symptoms, with the one most people connect to Parkinson’s being a persistent awake resting tremor that disappears with voluntary movement and sleep. Other symptoms include increased rigidity, slow movements (bradykinesia) and postural instability. There are also often substantial cognitive impairments as the disease progresses. The symptoms appear to be caused by the death of cells in the basal ganglia that produce the neurotransmitter dopamine. The reason for this cell death is still not understood.

Treatment is available for Parkinson’s, most commonly in the form of L-DOPA, a drug that at first replenishes the amount of dopamine in the system and thus relieves symptoms somewhat. It does have side-effects and becomes less effective over time however, and other drugs are also used to control the symptoms. Relatively recently, deep brain stimulation (DBS) has come to the fore as an effective treatment, especially when drugs aren’t working. The idea is that an electrode is inserted deep into the brain and areas of the basal ganglia are stimulated with electrical impulses to regulate their output, reducing the symptoms.

Because DBS is still quite new, we don’t really know what its long-term effects are. Short-term the effects are spectacular; see this video of a patient with and without his DBS system switched on. It’s quite dramatic (he turns it off at about 1:25):

But what about in the medium to long term? In the paper I discuss today, the researchers followed up eight patients after five years of DBS to see whether there was any effect on either their clinical symptoms or measures of motor performance. Over these five years the patients received continuous DBS and also a drug regimen, adjusted to control their symptoms as required. Symptoms were measured using the Unified Parkinson’s Disease Rating Scale (UPDRS) and the motor kinematics measured were ankle movement speed and strength. Prior to testing, patients stopped taking drugs and turned off their stimulators for 12 hours overnight.

The experimenters tested the patients both with DBS turned on and off at the start of the experiment (year 0) and again five years later (year 5). Their main findings were that, as expected, DBS reduced symptoms and improved movement speed and strength overall – both at year 0 and year 5. When comparing the two time periods however, they found an interesting result. UPDRS scores increased over five years, i.e. symptoms got worse, but the speed and strength of the ankle movement actually improved. So it looks like DBS gave no long-term improvement on the UPDRS scores but did produce an improvement in mobility and strength.

How can this apparent contradiction be explained? Well, first it’s quite difficult to say what would have happened without DBS over five years, as there was no adequate control group in this study. As Parkinson’s is a degenerative disease, the UPDRS scores would almost certainly have got worse over five years anyway – but whether DBS slowed this worsening of symptoms is very hard to say. The researchers do have a go at explaining why this measure didn’t improve while the other motor measures did: the UPDRS measurement involves repetitive movements like finger tapping, in which the basal ganglia are heavily involved, whereas the ankle movements tested for strength and speed are discrete movements that don’t really need coordinated muscle output over time – so they aren’t as regulated by the basal ganglia and therefore aren’t as affected by Parkinson’s.

There’s also the possibility that DBS increases dopamine production (aside from just regulating the output of the basal ganglia), and that this actually increases motivation and “energises” action, which is known to improve muscle strength. Also, if DBS improves quality of life and makes patients more active, their muscle strength and speed will change purely as a result of using their muscles more.

So there’s quite a lot here for a relatively short paper. The most interesting point from a basic science perspective I think is the contention that the basal ganglia don’t really have much to do with large discrete movements, which is why the symptom scores get worse (as they’re based on repetitive movements). It’s certainly plausible, though I’d be wary of reading too much into it.

From a clinical point of view, though, I guess the most interesting finding is that sustained DBS does improve motor outcomes over the medium term. But as I say, a weakness of this work is that without an adequate non-stimulated control group, it’s very difficult to say whether DBS had any effect on the UPDRS scores relative to what would have happened had the patients not been stimulated. Of course, there are ethical issues with not giving people the best treatment currently available just so you can test how they compare to people who are receiving it.


Sturman MM, Vaillancourt DE, Verhagen Metman L, Bakay RA, & Corcos DM (2010). Effects of five years of chronic STN stimulation on muscle strength and movement speed. Experimental Brain Research, 205 (4), 435-43. PMID: 20697699

Friday, 27 August 2010

Walking sub-optimally is the way forward

Today we’re going to do something a little different. I’ve been posting a lot about reaching movements, because that’s what I’m most interested in, but it may surprise you to learn that humans do actually have the capacity to move other parts of their bodies as well. I know, I’m as shocked as you are… so! The paper I’m going to cover is about the regulation of step variability in walking. It’s a little longer and more complex than normal, so strap yourselves in.

Walking is a hard problem, and we’re not really sure how we do it. Like reaching, there are many muscles to coordinate in order to make a step forward. Unlike in arm reaching, these coordinated steps need to follow one another cyclically in such a way as to keep the body stable and upright while simultaneously moving it over terrain that might well be rough and uneven. Just think for a moment about how difficult that is, and what different processes might be involved in the control of such movements.

One question that remains unanswered is how we control variability in walking. It’s a simple matter to control average position or velocity, but the variation in these parameters between steps is still unexplained. It is pretty well established that over the long-term people tend to try to minimize energy costs while walking – hence the gait we learn to adopt over the first few years of life. But there’s evidence that such a seemingly “optimal” strategy is not the whole story.

Consider walking on a treadmill. What’s the primary goal of continuous treadmill walking? Well, it’s to not fall off. The researchers in the article took that idea and reasoned that because the treadmill is moving at a constant speed, the best way not to fall off is to move at a constant speed yourself. That’s not the only strategy of course – you could also do something a little more complicated like make some short, quick steps followed by some long, slow ones in sequence, which would also keep you on the treadmill.

To test how the parameters varied, the researchers used five different walking speeds. You can see this in the figure below (Figure 3 in the paper):

Human treadmill walking data with speed as percentage of preferred walking speed (PWS)

L is stride length, T is stride time and S is stride speed. So A-C in the figure show how these values change with the five different treadmill speeds – length increases, time decreases and speed increases. D-F show the variability (σ) in these different parameters. G-I show something slightly more complex: a value called α that is defined as a measure of persistence, i.e. how much or little the parameters were corrected on subsequent strides. Values of α > ½ mean that there was less correction, whereas values < ½ mean that there was more correction. So panels G-I show that variability in stride length and time were not generally corrected quickly, but that variations in stride speed were.

Read that last paragraph through again to make sure you get it. It will be important shortly!
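For the curious, persistence exponents like α are typically estimated with detrended fluctuation analysis (DFA). Here’s a rough sketch of the idea in Python – my own simplification for illustration, not the authors’ actual pipeline:

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    """Very rough detrended fluctuation analysis (DFA).

    alpha ~ 0.5: uncorrelated deviations; < 0.5: anti-persistent
    (deviations get over-corrected); > 0.5: persistent (deviations linger).
    """
    y = np.cumsum(x - np.mean(x))  # integrated profile of the series
    flucts = []
    for n in scales:
        rms = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            # remove the local linear trend within each window
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # alpha is the slope of log F(n) against log n
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(8192)        # no correction structure: alpha ~ 0.5
# first differences of white noise: each step largely undoes the last one,
# the canonical strongly over-corrected (anti-persistent) series
anti = np.diff(rng.standard_normal(8193))

print(round(dfa_alpha(white), 2))  # near 0.5
print(round(dfa_alpha(anti), 2))   # well below 0.5
```

Uncorrelated noise comes out near ½, while an aggressively corrected series comes out well below ½ – which is exactly the distinction panels G-I are drawing.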

So: now we have a measure of human walking parameters. The question is, how are these parameters produced by the motor control system? That is, what does the system care about when it initiates and monitors walking? Well, one thing we can get from the data here is that the system seems to care about stride speed, but doesn’t care about stride time and stride length individually. And if that’s the case, then as long as the coupled length and time lie on a line that defines the speed, the system should be happy. A line a bit like this (figure 2B in the paper):

Human stride parameters lie along line of constant speed

The figure shows the GEM (which stands for Goal Equivalent Manifold, essentially the line of constant speed) plotted against stride time and stride length. The red dots show some data. Right away you can see that the dots generally lie along the line. Ignore the green arrows, but do take note of the blue ones – they’re showing a measure of deviations tangent to (δT) and perpendicular to (δP) the line. Why is δT so much bigger than δP? Because perpendicular variations push you off the line and thus interfere with the goal, whereas tangential variations don’t. So the system is either not stepping off the line much in the first place or correcting heavily when it does.
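To make the δT/δP decomposition concrete, here’s a small sketch (with made-up stride data, not the paper’s): each stride’s deviation is projected onto unit vectors tangent and perpendicular to the GEM line.

```python
import numpy as np

# Hypothetical strides: times T (s) and lengths L (m) scattered around a
# goal speed of v = 1.25 m/s, with most of the variance along the GEM line.
rng = np.random.default_rng(1)
v = 1.25
T = 1.0 + rng.normal(0.0, 0.05, 500)    # stride times vary a fair bit
L = v * T + rng.normal(0.0, 0.01, 500)  # lengths hug the line L = v * T

# Unit vectors tangent and perpendicular to the GEM line
tangent = np.array([1.0, v]) / np.hypot(1.0, v)
perp = np.array([-v, 1.0]) / np.hypot(1.0, v)

# Deviations of each stride from the mean operating point
dev = np.column_stack([T - T.mean(), L - L.mean()])
delta_T = dev @ tangent  # goal-irrelevant component (along the line)
delta_P = dev @ perp     # goal-relevant component (off the line)

print(delta_T.std(), delta_P.std())  # delta_T spread >> delta_P spread
```

With data shaped like the red dots in the figure, the tangential spread dwarfs the perpendicular spread, just as the blue arrows suggest.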

Here’s one more figure (Figure 5C and D in the paper) showing the variability (σ) and persistence (α) for δT and δP :

Variability and persistence of deviations

You can see that δT is much more variable than δP, as you might expect from the shape of the data shown in the second figure. You can also see something else, however: the persistence for δP is less than ½, whereas the persistence for δT is greater than ½. Thus, the system cares very much about correcting not just stride speed but the combination of stride time and stride length that takes the stride speed away from the goal speed.

Great, you may think, a lot of funny numbers to tell us that the system cares about maintaining a constant speed when it’s trying to maintain a constant speed! What do you scientists get paid for anyway? The cool thing about this paper is that the researchers are trying to figure out precisely how the brain produces these numbers. It turns out that if you just use an ‘optimal’ model that corrects for δP while ignoring δT, you don’t get the same numbers. So that can’t be it. How about if you specify in your model that you have to keep at a certain speed – say the same average speed as in the human data? That doesn’t work either. The numbers are better, but they’re not right.

The solution that seems to work best is when the deviations off the GEM line (i.e. δP) are overcorrected for. This controller is sub-optimal, so basically efficiency is being sacrificed for tight control over this parameter. Thus, humans don’t appear to simply minimize energy loss – they also perform more complex corrections depending on the task goal.
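A toy way to see what “overcorrection” means (my own sketch with assumed parameters, far simpler than the paper’s stochastic optimal control models): each stride, correct some fraction of the current goal-relevant deviation. A gain above 1 overshoots, so successive deviations tend to flip sign – exactly the anti-persistence (α < ½) signature.

```python
import numpy as np

def simulate_deviations(gain, n=5000, noise=0.01, seed=2):
    """Each stride, correct a fraction `gain` of the current goal-relevant
    deviation, then add fresh motor noise. gain > 1 means overcorrection."""
    rng = np.random.default_rng(seed)
    d = np.zeros(n)
    for i in range(1, n):
        d[i] = d[i - 1] - gain * d[i - 1] + noise * rng.standard_normal()
    return d

def lag1_autocorr(d):
    """Correlation between successive deviations: positive = persistent,
    negative = anti-persistent."""
    return np.corrcoef(d[:-1], d[1:])[0, 1]

under = simulate_deviations(gain=0.3)  # sluggish correction: persistent
over = simulate_deviations(gain=1.5)   # overcorrection: anti-persistent
print(lag1_autocorr(under), lag1_autocorr(over))
```

The under-corrected series has strongly positive lag-1 autocorrelation (deviations linger), while the over-corrected one has negative autocorrelation (deviations ping-pong around the goal) – efficiency traded for tight control, as the paper argues.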

I’ve covered in a previous post the inkling that this might be the case; while we do tend to minimize energy over the long term, in the short term the optimization process is much more centred around the particular goal, and people are very good at exploiting the inherent variability in the motor system to perform the task more easily. This paper does a great job of testing these hypotheses and providing models to explain how this might happen. What I’d be interested to see in the future is an explanation of why the system is set up to overcorrect like that in the first place – is it overall a more efficient way of producing movement than just a standard optimization over all parameters? Time, perhaps, will tell.


Dingwell JB, John J, & Cusumano JP (2010). Do humans optimally exploit redundancy to control step variability in walking? PLoS Computational Biology, 6 (7). PMID: 20657664

Images copyright © 2010 Dingwell, John & Cusumano

Monday, 23 August 2010

Learning without thinking

Scratching around on the internet this afternoon on my first day back from holiday, I was kind of reluctant to dive straight back into taking papers apart. After all, I have spent the majority of the last three weeks drinking beer and eating pies in the UK, and the increase in my waistline has most likely been mirrored by the decrease in my critical faculties (as happens when you spend time away from the cutting edge). However, I ran across a really cool little article that reminded me just why I enjoy all this motor control stuff. So here goes nothing!

There’s been some work in recent years on the differences between implicit and explicit motor learning – that is, the kind of learning the brain does by itself, relying on cues from the environment, vs. using a well-defined strategy to perform a task. For example, learning to carry a full glass of water without spilling by just doing it and getting it wrong a lot until you implicitly work out how, or by explicitly telling yourself, “Ok, I’m going to try to keep the water as level as possible.” A fun little study on this was performed by Mazzoni and Krakauer (2006) in which they showed that giving their participants an explicit strategy in a visuomotor rotation task (reaching to a target where the reach is rotated) actually hurt their performance. Essentially they started off being able to perform the task well using the explicit strategy, which was something like ‘aim for the target to the left of the one you need to hit’. However as the task went on the implicit system doggedly learned it – and conflicted with the explicit strategy – so that the participants were making more errors at the end than at the beginning.
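You can reproduce the flavour of that result with a toy state-space model (my own sketch with assumed parameters, not Mazzoni and Krakauer’s actual analysis): the implicit system adapts to the discrepancy between where the cursor went and where the participant aimed, which slowly re-introduces the very target error the explicit strategy had cancelled.

```python
# Toy model of explicit strategy vs implicit adaptation under a
# visuomotor rotation. Angles in degrees; parameters are assumptions.
rotation = 45.0  # imposed rotation of the cursor
aim = -45.0      # explicit strategy: aim at the neighbouring target
eta = 0.1        # implicit learning rate (assumed)

implicit = 0.0
target_errors = []
for trial in range(80):
    hand = aim + implicit
    cursor = hand + rotation
    target_errors.append(cursor - 0.0)  # error relative to the true target
    # The implicit system adapts to the sensory prediction error:
    # "the cursor didn't land where I aimed"
    spe = cursor - aim
    implicit -= eta * spe

# With the strategy, the first trial's error is ~0; it then drifts away
# as the implicit system doggedly adapts, worsening overall performance.
print(target_errors[0], target_errors[-1])
```

The first error is essentially zero and the last is close to the full −45°: the explicit strategy starts perfect, and the implicit system ruins it.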

The paper I’m looking at today follows up on this result. Implicit error-based learning is thought to be the province of the cerebellum, the primitive, walnut-shaped bit at the back of the brain. The researchers hit upon the idea that if the cerebellum is important for implicit learning, then perhaps patients with cerebellar impairments would actually find it easier to perform the task relative to healthy control participants. To test this, they told both sets of participants to use an explicit strategy in a visuomotor rotation task, just like in the previous study, and measured their ‘drift’ from the ideal reaching movement.

Below you can see the results (Figure 2A in the paper):

Target error across movements

Open circles are all control participants, whereas filled circles are all patients. The black circles at the start show baseline performance – both groups performed pretty well and similarly. Red circles show the first couple of movements after the rotation was applied, and before participants were told to use the strategy. You can see that the participants are reaching completely the wrong way. The blue section shows reaching while using the strategy. Here’s the nice bit: the cerebellar patients are doing better than the controls, as their error is closer to zero, whereas the controls are steadily drifting away from the intended target. Magenta shows when the participants are asked to stop using the strategy and the final cyan markers show the ‘washout’ phase as both groups get back to baseline without an imposed rotation – though the patients manage much more quickly than the controls.

So it looks very much like the cerebellar patients, because their cerebellums are impaired at implicit learning, are able to perform this task better than healthy people. What’s kind of interesting is that other research has shown that cerebellar patients aren’t very good at forming explicit strategies on their own, which is something that healthy people do without even thinking about it. The tentative conclusion of the researchers is that it’s not so much that the implicit and explicit systems are completely separate, but that the implicit system can inform the development of explicit strategies – which is impaired if the cerebellum isn’t working properly.

I didn’t like everything in this paper. I was particularly frustrated with the methods section: I wasn’t sure whether the images shown to participants were on a screen in front of them or whether the screen was placed over the workspace in a virtual-reality setup. There was also a sentence claiming that the cerebellar patients’ performance was ‘less’ than the controls’, when in fact it was better. Other than these minor niggles though, it’s a really nice paper showing a very cool effect.


Taylor JA, Klemfuss NM, & Ivry RB (2010). An Explicit Strategy Prevails When the Cerebellum Fails to Compute Movement Errors. Cerebellum. PMID: 20697860

Images copyright © 2010 Taylor, Klemfuss & Ivry

Wednesday, 4 August 2010


I'm about to head to the UK for a couple of weeks to visit the Edinburgh Festival, attend (and sing at!) a wedding, and see my friends and family. So the blog will be on hiatus for a little while. I might try to get a couple of papers read while I'm away, but don't count on it.

I'll be back the weekend of August 21st - expect to see more motor control discussion around then.

Tuesday, 27 July 2010

The noisy brain

Noise is a funny word. When we think of it in the context of everyday life, we tend to focus on distracting background sounds. Distracting from what? Usually whatever we’re doing at the time, whether it’s having a conversation or watching TV. In most cases, what we’re trying to do is interpret some signal – like speech – that’s corrupted by background noise. Neurons in the brain have also often been thought of as sending signals corrupted by noise, which seems to make intuitive sense. But that’s not quite the whole story.

The very basics: neurons ‘fire’ and send signals to one another in the form of action potentials, which can be recorded as ‘spikes’ in their voltage. So when a neuron fires, we call that a spike. The spiking activity of neurons has an inherent variability, i.e. neurons won’t always fire in the same situations each time, probably due to confounding influences from metabolic processes and external inputs (like sensory information and movement). In other words, the signal is transmitted with some background ‘noise’. What’s kind of interesting about this paper (and others) is that variability in the neural system is starting to be thought of as part of the signal itself, rather than an inherently corrupting influence on it.

Today we delve back into the depths of neural recording with a study that investigates trial-to-trial variability during motor learning. That is: how does the variability of neurons change as learning progresses, and what can this tell us about the neural mechanisms? This paper gets a bit technical, so hang on to your hats.

One important measure used in the paper is something called the Fano Factor. The variability in neuronal spiking is dependent on the underlying spiking rate, i.e. as the amount of spiking increases, so does the variability; this is known as signal-dependent noise. This effect means that we can’t just look at the variability in the spiking activity – we actually have to modify it based on the average spiking activity. The Fano Factor (FF) does precisely this (you can look it up on Wikipedia if you like). It’s basically just another way of saying ‘variability’ – I mention it only because it’s necessary for understanding the results of the experiment!
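If you want to see why dividing by the mean matters, here’s a quick sketch (illustrative only, not the paper’s analysis): Poisson-like spike counts have a Fano Factor near 1 at both low and high firing rates, so FF lets you compare variability across neurons firing at very different rates.

```python
import numpy as np

def fano_factor(spike_counts):
    """Fano Factor: variance of spike counts divided by their mean,
    computed across repeated trials of the same condition."""
    spike_counts = np.asarray(spike_counts, dtype=float)
    return spike_counts.var() / spike_counts.mean()

rng = np.random.default_rng(3)
# Poisson spiking: the raw variance scales with the rate
# (signal-dependent noise), but FF stays ~1 at any rate.
low_rate = rng.poisson(5, 10000)    # ~5 spikes per trial
high_rate = rng.poisson(50, 10000)  # ~50 spikes per trial
print(fano_factor(low_rate), fano_factor(high_rate))
```

The raw variance of the high-rate neuron is about ten times larger, yet both Fano Factors come out near 1 – which is why the paper reports FF rather than raw variance.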

Ok, enough rambling. What did the researchers do? They trained a couple of monkeys on a reaching task where they had to learn a 90° visual rotation, i.e. they had to learn to reach to the right to hit a target in front of them. While learning, their brain activity was recorded and the variability was analysed in two time periods: before the movement, termed ‘preparatory activity’ and during the movement onset, termed ‘movement-related activity’. Neurons were recorded from the primary motor cortex, which is responsible for sending motor commands to the muscles, and the supplementary motor area, which is a pre-motor area. In the figure below, you can see some results from motor cortex (Figure 2 A-C in the paper):

Neural variability and error over time

Panel B shows the learning rate of monkeys W (black) and X (grey) – as the task goes on, the error decreases, as expected. Note that monkey W is a faster learner than monkey X. Now look at panel A. You can see that in the preparatory time period (left) variability increases as the errors reduce for each monkey – it happens first in monkey W and then in monkey X. In the movement-related time period (right) there’s no increase in variability. Panel C just shows the overall difference in variability in motor cortex on the opposite (contralateral) side vs. the same (ipsilateral) side: the limb is controlled by the contralateral side, so it’s unsurprising that there’s more variability over there.

Another question the researchers asked was in which kinds of cells was the variability greatest? In primary motor cortex, cells tend to have a preferred direction – i.e. they will fire more when the monkey reaches to a target in that direction than in other directions. The figure below (Figure 5 in the paper) shows the results:

Variability with neural tuning

For both monkeys, it was only the directionally tuned cells that showed the increase in variability (panel A). You can see this even more clearly in panel B, where they aligned the monkeys’ learning phases to look at all the cells together. So it seems that it is primarily the cells that fire more in a particular direction that show the learning-related increase in variability. And panel C shows that it’s cells that have a preferred direction closest to the required movement direction that show the modulation.

(It’s worth noting that on the right of panels B and C is the spike count – the tuned cells have a higher spike count than the untuned cells, but the researchers show in further analyses that this isn’t the reason for the increased variability.)

I’ve only talked about primary motor cortex so far: what about the supplementary motor area? Briefly, the researchers found similar changes in variability, but even earlier in learning. In fact the supplementary motor area cells started showing the effect almost at the very beginning of learning.

Phew. What does this all mean? Well: the fact that there’s increased variability only in the pre-movement states, and only in the directionally tuned cells, suggests a ‘searching’ hypothesis – the system may be looking for the best possible network state before the movement, but only in the direction that’s important for the movement. So it appears to be a very local process that’s confined to cells interested in the direction the monkey has to move to complete the task. And further, this variability appears earlier in the supplementary motor area – consistent with the idea that this area precedes the motor cortex when it comes to changing its activity through learning.

This is really cool stuff. We’re starting to get an idea of how the inherent variability in the brain might actually be useful for learning rather than something that just gets in the way. The idea isn’t too much of a surprise to me; I suggest Read Montague’s excellent book for a primer on why the slow, noisy, imprecise brain is (paradoxically) very good at processing information.


Mandelblat-Cerf Y, Paz R, & Vaadia E (2009). Trial-to-Trial Variability of Single Cells in Motor Cortices Is Dynamically Modified during Visuomotor Adaptation. Journal of Neuroscience, 29 (48), 15053-15062. DOI: 10.1523/JNEUROSCI.3011-09.2009

Images copyright © 2009 Society for Neuroscience