Wednesday, 29 September 2010

Automatic for the reaching

Hurrah, a post! I've been quite busy over the last few weeks so I haven't had much time for reading or writing. However, I am attempting to repent of my slacking ways and I came across this nice little paper investigating an aspect of the automatic pilot process.

Clearly we are not in conscious control of all of our actions at all times. Some reactions – like moving our hand away when we burn ourselves on a hot stove – are instinctive, reflexive. Reflexes themselves are actually a topic of hot debate in motor control these days, and there are people in my lab (including me) doing some interesting work on long-latency reflexes, which have access to some more complex processing power than the more basic, short-latency spinal reflexes that do things such as get the limb out of danger as fast as possible.

One nice example of this automaticity of behaviour is in what’s known as the double-step task. A participant reaches for a target, and during the reach the target ‘jumps’ to a different location. Without really considering what they are doing, the participant changes the reach to aim for the new target. There have been many studies that have explored various aspects of this automaticity, but the paper I’m discussing today asks the question: does stopping the automatic behaviour require more cognitive resources than letting it continue?

To investigate this question, the authors used a standard target jumping double-step task. They also tried to get participants to use up cognitive resources during the reach by giving them an auditory task to do (listen to a string of numbers and identify the pairs) at the same time. Before each trial, the participant was informed as to whether they should follow the target when it jumped (GO trials) or not to follow the target and instead reach to the original position (NOGO). The results are shown in the graph below (Figure 3 in the paper):

Movement corrections based on time from target jump

Here ‘dual task’ means that the participants were performing the reach and the cognitively demanding auditory task at the same time. The graph shows the percent of trials corrected vs. the time from the instant the target jump happens. Thus at 150 ms almost no corrections are seen, whereas after 300 ms a substantial proportion of trials have been corrected for in both conditions. The unsurprising result here is that in the GO trials there are many more corrections than in the NOGO trials overall – after all, participants were instructed to correct in the GO trials and not in the NOGO trials.

The interesting result is in the grey NOGO traces. There are substantially more corrections in the dual task than in the single task, implying that the extra cognitive load imposed in the dual task actually stopped participants from inhibiting their corrections – whereas in the black GO traces the cognitive load has no effect. This result seems to show that it takes more cognitive effort to stop the automatic correction than to let it continue.

On the face of it this isn’t too shocking a finding. After all, if you are reacting instinctively (or in a way that reflects a high level of training) to a situation, then stopping yourself from acting that way does seem like it should take some mental effort. I know it takes mental effort for me to stop doing something extremely habitual (to quote Terry Pratchett, the strongest force in the universe is force of habit). It’s nice to see a good paper that solidifies this principle.

--

McIntosh RD, Mulroue A, & Brockmole JR (2010). How automatic is the hand's automatic pilot? Evidence from dual-task studies. Experimental Brain Research, 206 (3), 257-69 PMID: 20820760

Image copyright © Springer-Verlag 2010

Thursday, 2 September 2010

Walking sub-optimally: redux

I haven’t done this before, but I wanted to revisit the post I made last week about sub-optimal walking in the light of new information. You see, yesterday we had a journal club in which interesting discussions were had about the paper, its results – and the conclusions drawn from those results.

If you recall, the central thesis of the paper is that we over-correct for deviations in our stride length and stride time that draw us away from the line of constant velocity (the Goal Equivalent Manifold). The evidence for this was a calculation of a parameter labeled α that shows the persistence of a particular variable, i.e. how likely it is to be corrected. This is where the trouble starts.

Unknown to me at the time I wrote the post, the calculation of α only works given a certain set of constraints. For example, imagine that you have a matrix that you wish to invert. (For those who don’t know about matrices: not all matrices can be inverted.) So you write a piece of code that inverts the matrix, but in such a way that it never crashes and always returns an answer. Now, if you feed the program a matrix that is non-invertible, it will give you an answer – but that answer doesn’t mean anything. And unfortunately, the calculation of α in this paper has much the same problem.
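To make the analogy concrete, here’s a minimal sketch (my own illustration, not code from the paper) of a ‘never crashes’ inverter – the function name and the pseudo-inverse fallback are hypothetical:

```python
import numpy as np

singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])  # rank 1, so no true inverse exists

def invert_no_matter_what(m):
    """A 'never crashes' inverter: falls back to the pseudo-inverse."""
    try:
        return np.linalg.inv(m)
    except np.linalg.LinAlgError:
        return np.linalg.pinv(m)   # always returns *an* answer

result = invert_no_matter_what(singular)
# result looks like a perfectly ordinary 2x2 matrix, but
# result @ singular is nowhere near the identity matrix --
# treated as an inverse, the answer is meaningless.
```

The program happily hands you a number for every input; it’s on you to know when that number means nothing.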

What this means is that the evidence for the claim the authors are making – that overcorrection is the best way to model human walking variability – is suspect. It’s especially interesting when you look at one of the figures, which is used to argue that participants do not adopt a simpler strategy for treadmill walking: absolute position control (i.e. trying to stay at the same spot on the treadmill). This figure (Figure 4C in the paper) shows the calculation of α for the position on the treadmill:

Persistence for position on treadmill

The value of α here is greater than 1 and goes up to 1.5, so the authors argue that there is high persistence and therefore participants do not correct for absolute treadmill position. But the α calculation is undefined for data like this – a meaningful α doesn’t go higher than 1! It looks like the problem I outlined above: you get a number out of the program, but the number doesn’t actually mean anything.

So not only might the central claim be undermined, but the contention that we don’t control absolute treadmill position is also questionable. Something to be careful of when looking at papers is always to make sure the methods make sense – I assumed that these methods were adequate for the task they were used for, and apparently so did the reviewers! It is of course possible that the whole thing is fine, but as my colleague Frederic Crevecoeur points out, they could have done a few more tests that demonstrated the validity of these calculations, which would make these points moot.

Regardless of whether the central claim is correct, it is admirable that this is the first paper to really attempt to use stochastic optimal control models to look at walking. Apparently they have more in the works; I look forward to seeing it!

--

Dingwell JB, John J, & Cusumano JP (2010). Do humans optimally exploit redundancy to control step variability in walking? PLoS computational biology, 6 (7) PMID: 20657664

Image copyright © 2010 Dingwell, John & Cusumano

Monday, 30 August 2010

A stimulating time

While I don’t usually post about clinical work, sometimes a paper just leaps out at me and makes me go, “hmm, that’s interesting!” So it was with this study, which explores the medium-term effects (over five years) of chronic deep-brain stimulation in Parkinson’s disease (PD). I’m by no means a clinician or an expert on PD so I’m very keen to make sure the information in here is correct. Please leave me useful comments if it isn’t!

Parkinson’s disease is a neurodegenerative disease affecting the motor system. It’s characterised by several symptoms, with the one most people connect to Parkinson’s being a persistent awake resting tremor that disappears with voluntary movement and sleep. Other symptoms include increased rigidity, slow movements (bradykinesia) and postural instability. There are also often substantial cognitive impairments as the disease progresses. The symptoms appear to be caused by the death of cells in the basal ganglia that produce the neurotransmitter dopamine. The reason for this cell death is still not understood.

Treatment is available for Parkinson’s, most commonly in the form of L-DOPA, a drug that at first replenishes the amount of dopamine in the system and thus relieves symptoms somewhat. It does have side-effects and becomes less effective over time however, and other drugs are also used to control the symptoms. Relatively recently, deep brain stimulation (DBS) has come to the fore as an effective treatment, especially when drugs aren’t working. The idea is that an electrode is inserted deep into the brain and areas of the basal ganglia are stimulated with electrical impulses to regulate their output, reducing the symptoms.

Because DBS is still quite new, we don’t really know what its long-term effects are. Short-term the effects are spectacular; see this video of a patient with and without his DBS system switched on. It’s quite dramatic (he turns it off at about 1:25):




But what about in the medium to long term? In the paper I discuss today, the researchers followed up eight patients after five years of DBS to see whether there was any effect on either their clinical symptoms or measures of motor performance. Over these five years the patients received continuous DBS and also a drug regimen, adjusted to control their symptoms as required. Symptoms were measured using the Unified Parkinson’s Disease Rating Scale (UPDRS) and the motor kinematics measured were ankle movement speed and strength. Prior to testing, patients stopped taking drugs and turned off their stimulators for 12 hours overnight.

The experimenters tested the patients both with DBS turned on and off at the start of the experiment (year 0) and again five years later (year 5). Their main findings were that, as expected, DBS reduced symptoms and improved movement speed and strength overall – both at year 0 and year 5. When comparing the two time periods however, they found an interesting result. UPDRS scores increased over five years, i.e. symptoms got worse, but the speed and strength of the ankle movement actually improved. So it looks like DBS gave no long-term improvement on the UPDRS scores but did produce an improvement in mobility and strength.

How can this apparent contradiction be explained? Well, first it’s quite difficult to say what would have happened without DBS over five years, as there was no adequate control group in this study. As Parkinson’s is a degenerative disease, the UPDRS scores would almost certainly have got worse over five years anyway. But whether the DBS reduced this worsening of symptoms is very hard to say for sure. The researchers do have a go at explaining why this measure didn’t improve while the other motor measures did: the UPDRS measurement involves repetitive movements like finger tapping, which the basal ganglia are heavily involved in; whereas the ankle movements tested for strength and speed are discrete movements that don’t really need coordinated muscle output over time – so they aren’t as regulated by the basal ganglia and therefore aren’t as affected by Parkinson’s.

There’s also the possibility that DBS increases dopamine production (aside from just regulating the output of the basal ganglia), and that this actually increases motivation and “energises” action, which is known to improve muscle strength. Also, if DBS improves quality of life and makes patients more active, their muscle strength and speed will change purely as a result of using their muscles more.

So there’s quite a lot here for a relatively short paper. The most interesting point from a basic science perspective I think is the contention that the basal ganglia don’t really have much to do with large discrete movements, which is why the symptom scores get worse (as they’re based on repetitive movements). It’s certainly plausible, though I’d be wary of reading too much into it.

From a clinical point of view though, I guess the most interesting finding is that sustained DBS does improve motor outcomes over the medium term. But a weakness of this work is that, without an adequate non-stimulated control group, it’s very difficult to say whether DBS has any effect on the UPDRS scores compared to what would have happened without stimulation. Of course, there are ethical issues with not giving people the best treatment currently available just so you can test how they compare to people who are receiving it.

---

Sturman MM, Vaillancourt DE, Verhagen Metman L, Bakay RA, & Corcos DM (2010). Effects of five years of chronic STN stimulation on muscle strength and movement speed. Experimental Brain Research, 205 (4), 435-43 PMID: 20697699

Friday, 27 August 2010

Walking sub-optimally is the way forward

Today we’re going to do something a little different. I’ve been posting a lot about reaching movements, because that’s what I’m most interested in, but it may surprise you to learn that humans do actually have the capacity to move other parts of their bodies as well. I know, I’m as shocked as you are… so! The paper I’m going to cover is about the regulation of step variability in walking. It’s a little longer and more complex than normal, so strap yourselves in.

Walking is a hard problem, and we’re not really sure how we do it. Like reaching, there are many muscles to coordinate in order to make a step forward. Unlike in arm reaching, these coordinated steps need to follow one another cyclically in such a way as to keep the body stable and upright while simultaneously moving it over terrain that might well be rough and uneven. Just think for a moment about how difficult that is, and what different processes might be involved in the control of such movements.

One question that remains unanswered is how we control variability in walking. It’s a simple matter to control average position or velocity, but the variation in these parameters between steps is still unexplained. It is pretty well established that over the long-term people tend to try to minimize energy costs while walking – hence the gait we learn to adopt over the first few years of life. But there’s evidence that such a seemingly “optimal” strategy is not the whole story.

Consider walking on a treadmill. What’s the primary goal of continuous treadmill walking? Well, it’s to not fall off. The researchers in the article took that idea and reasoned that because the treadmill is moving at a constant speed, the best way not to fall off is to move at a constant speed yourself. That’s not the only strategy of course – you could also do something a little more complicated like make some short, quick steps followed by some long, slow ones in sequence, which would also keep you on the treadmill.

To test how the parameters varied, the researchers used five different walking speeds. You can see this in the figure below (Figure 3 in the paper):

Human treadmill walking data with speed as percentage of preferred walking speed (PWS)

L is stride length, T is stride time and S is stride speed. So A-C in the figure show how these values change with the five different treadmill speeds – length increases, time decreases and speed increases. D-F show the variability (σ) in these different parameters. G-I show something slightly more complex: a value called α that is defined as a measure of persistence, i.e. how much or little the parameters were corrected on subsequent strides. Values of α > ½ mean that there was less correction, whereas values < ½ mean that there was more correction. So panels G-I show that variability in stride length and time were not generally corrected quickly, but that variations in stride speed were.

Read that last paragraph through again to make sure you get it. It will be important shortly!
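For the curious, α here is (as far as I can tell) a detrended-fluctuation-analysis-style exponent. Here’s a minimal sketch of how such an exponent can be estimated – the function is my own illustration, not the paper’s code, and the details (linear detrending, window sizes) are assumptions:

```python
import numpy as np

def dfa_alpha(series, window_sizes):
    """Estimate a DFA-style persistence exponent (alpha) for a 1-D series."""
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated, mean-removed series
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        rms = []
        for w in range(n_windows):
            seg = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, seg, 1)     # linear detrend in each window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluctuations.append(np.mean(rms))
    # alpha is the slope of log F(n) against log n
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope

# Uncorrelated noise should give alpha near 1/2: neither persistent
# nor anti-persistent.
rng = np.random.default_rng(0)
alpha = dfa_alpha(rng.standard_normal(4096), [8, 16, 32, 64, 128])
```

Persistent series (α > ½) leave deviations uncorrected for many strides; anti-persistent series (α < ½) flip back and forth, i.e. correct quickly.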

So: now we have a measure of human walking parameters. The question is, how are these parameters produced by the motor control system? That is, what does the system care about when it initiates and monitors walking? Well, one thing we can get from the data here is that the system seems to care about stride speed, but doesn’t care about stride time and stride length individually. And if that’s the case, then as long as the coupled length and time lie on a line that defines the speed, the system should be happy. A line a bit like this (figure 2B in the paper):

Human stride parameters lie along line of constant speed

The figure shows the GEM (which stands for Goal Equivalent Manifold, essentially the line of constant speed) plotted against stride time and stride length. The red dots show some data. Right away you can see that the dots generally lie along the line. Ignore the green arrows, but do take note of the blue ones – they’re showing a measure of deviations tangent to (δT) and perpendicular to (δP) the line. Why is δT so much bigger than δP? Because perpendicular variations push you off the line and thus interfere with the goal, whereas tangential variations don’t. So the system is either not stepping off the line much in the first place or correcting heavily when it does.
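The δT/δP decomposition is just a change of coordinates: project each stride’s (time, length) pair onto the constant-speed line and onto its normal. A sketch, with my own (hypothetical) function name:

```python
import numpy as np

def gem_deviations(stride_times, stride_lengths, goal_speed):
    """Split stride-to-stride deviations into components tangent (delta_T)
    and perpendicular (delta_P) to the constant-speed line L = v * T."""
    T = np.asarray(stride_times, dtype=float)
    L = np.asarray(stride_lengths, dtype=float)
    v = goal_speed
    # unit vectors along and normal to the line L = v*T in (T, L) space
    tangent = np.array([1.0, v]) / np.hypot(1.0, v)
    normal = np.array([-v, 1.0]) / np.hypot(1.0, v)
    # deviations from the mean operating point on the GEM
    dev = np.column_stack([T - T.mean(), L - L.mean()])
    delta_T = dev @ tangent
    delta_P = dev @ normal
    return delta_T, delta_P

# Strides that lie exactly on the GEM give delta_P = 0,
# however much delta_T varies.
dT, dP = gem_deviations([1.0, 1.1, 0.9], [1.2, 1.32, 1.08], goal_speed=1.2)
```

Variability in δT costs nothing with respect to the goal; variability in δP is exactly what pushes the stride speed off target.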

Here’s one more figure (Figure 5C and D in the paper) showing the variability (σ) and persistence (α) for δT and δP :

Variability and persistence of deviations

You can see that δT is much more variable than δP, as you might expect from the shape of the data shown in the second figure. You can also see something else, however: the persistence for δP is less than ½, whereas the persistence for δT is greater than ½. Thus, the system cares very much about correcting not just stride speed but any combination of stride time and stride length that takes the stride speed away from the goal speed.

Great, you may think, a lot of funny numbers to tell us that the system cares about maintaining a constant speed when it’s trying to maintain a constant speed! What do you scientists get paid for anyway? The cool thing about this paper is that the researchers are trying to figure out precisely how the brain produces these numbers. It turns out that if you just use an ‘optimal’ model that corrects for δP while ignoring δT, you don’t get the same numbers. So that can’t be it. How about if you specify in your model that you have to keep at a certain speed – say the same average speed as in the human data? That doesn’t work either. The numbers are better, but they’re not right.

The solution that seems to work best is when the deviations off the GEM line (i.e. δP) are overcorrected for. This controller is sub-optimal, so basically efficiency is being sacrificed for tight control over this parameter. Thus, humans don’t appear to simply minimize energy loss – they also perform more complex corrections depending on the task goal.
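To see why overcorrection shows up in the persistence numbers, here’s a toy simulation – a deliberately simplified first-order sketch of my own, not the paper’s stochastic optimal control model. A gain of 1 cancels each deviation exactly; a gain above 1 overshoots, so successive deviations flip sign:

```python
import numpy as np

def simulate_corrections(gain, n_strides=5000, noise_sd=1.0, seed=1):
    """Toy proportional correction of delta_P:
    next deviation = (1 - gain) * current deviation + fresh noise.
    gain = 1 cancels each deviation exactly; gain > 1 overcorrects."""
    rng = np.random.default_rng(seed)
    dp = np.zeros(n_strides)
    for i in range(1, n_strides):
        dp[i] = (1.0 - gain) * dp[i - 1] + noise_sd * rng.standard_normal()
    return dp

def lag1_autocorr(x):
    """Lag-1 autocorrelation: negative means anti-persistence."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Overcorrection flips deviations back and forth (anti-persistent,
# like alpha < 1/2); weak correction leaves them persistent.
over = lag1_autocorr(simulate_corrections(gain=1.5))
under = lag1_autocorr(simulate_corrections(gain=0.2))
```

The overcorrecting controller is noisier than it needs to be – each overshoot injects a new deviation – which is exactly the efficiency-for-control trade the authors describe.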

I’ve covered in a previous post the inkling that this might be the case; while we do tend to minimize energy over the long term, in the short term the optimization process is much more centred around the particular goal, and people are very good at exploiting the inherent variability in the motor system to perform the task more easily. This paper does a great job of testing these hypotheses and providing models to explain how this might happen. What I’d be interested to see in the future is an explanation of why the system is set up to overcorrect like that in the first place – is it overall a more efficient way of producing movement than just a standard optimization over all parameters? Time, perhaps, will tell.

--

Dingwell JB, John J, & Cusumano JP (2010). Do humans optimally exploit redundancy to control step variability in walking? PLoS computational biology, 6 (7) PMID: 20657664

Images copyright © 2010 Dingwell, John & Cusumano

Monday, 23 August 2010

Learning without thinking

Scratching around on the internet this afternoon on my first day back from holiday, I was kind of reluctant to dive straight back into taking papers apart. After all, I have spent the majority of the last three weeks drinking beer and eating pies in the UK, and the increase in my waistline has most likely been mirrored by the decrease in my critical faculties (as happens when you spend time away from the cutting edge). However, I ran across a really cool little article that reminded me just why I enjoy all this motor control stuff. So here goes nothing!

There’s been some work in recent years on the differences between implicit and explicit motor learning – that is, the kind of learning the brain does by itself, relying on cues from the environment, vs. using a well-defined strategy to perform a task. For example, learning to carry a full glass of water without spilling by just doing it and getting it wrong a lot until you implicitly work out how, or by explicitly telling yourself, “Ok, I’m going to try to keep the water as level as possible.” A fun little study on this was performed by Mazzoni and Krakauer (2006) in which they showed that giving their participants an explicit strategy in a visuomotor rotation task (reaching to a target where the reach is rotated) actually hurt their performance. Essentially they started off being able to perform the task well using the explicit strategy, which was something like ‘aim for the target to the left of the one you need to hit’. However as the task went on the implicit system doggedly learned it – and conflicted with the explicit strategy – so that the participants were making more errors at the end than at the beginning.

The paper I’m looking at today follows up on this result. Implicit error-based learning is thought to be the province of the cerebellum, the primitive, walnut-shaped bit at the back of the brain. The researchers hit upon the idea that if the cerebellum is important for implicit learning, then perhaps patients with cerebellar impairments would actually find it easier to perform the task relative to healthy control participants. To test this, they told both sets of participants to use an explicit strategy in a visuomotor rotation task, just like in the previous study, and measured their ‘drift’ from the ideal reaching movement.

Below you can see the results (Figure 2A in the paper):

Target error across movements

Open circles are all control participants, whereas filled circles are all patients. The black circles at the start show baseline performance – both groups performed pretty well and similarly. Red circles show the first couple of movements after the rotation was applied, and before participants were told to use the strategy. You can see that the participants are reaching completely the wrong way. The blue section shows reaching while using the strategy. Here’s the nice bit: the cerebellar patients are doing better than the controls, as their error is closer to zero, whereas the controls are steadily drifting away from the intended target. Magenta shows when the participants are asked to stop using the strategy and the final cyan markers show the ‘washout’ phase as both groups get back to baseline without an imposed rotation – though the patients manage much more quickly than the controls.

So it looks very much like the cerebellar patients, because their cerebellums are impaired at implicit learning, are able to perform this task better than healthy people. What’s kind of interesting is that other research has shown that cerebellar patients aren’t very good at forming explicit strategies on their own, which is something that healthy people do without even thinking about it. The tentative conclusion of the researchers is that it’s not so much that the implicit and explicit systems are completely separate, but that the implicit system can inform the development of explicit strategies – which is impaired if the cerebellum isn’t working properly.

I didn’t like everything in this paper. I was particularly frustrated with the methods section, which talks about the kind of screen they used: I couldn’t tell whether the images shown to participants were on a screen in front of them or whether the screen was placed over the workspace in a virtual-reality setup. There was also a sentence claiming that the cerebellar patients’ performance was ‘less’ than the controls’, when in fact it was better. Other than these minor niggles though, it’s a really nice paper showing a very cool effect.

--

Taylor JA, Klemfuss NM, & Ivry RB (2010). An Explicit Strategy Prevails When the Cerebellum Fails to Compute Movement Errors. Cerebellum. PMID: 20697860

Images copyright © 2010 Taylor, Klemfuss & Ivry

Wednesday, 4 August 2010

Hiatus

I'm about to head to the UK for a couple of weeks to visit the Edinburgh Festival, attend (and sing at!) a wedding, and see my friends and family. So the blog will be on hiatus for a little while. I might try to get a couple of papers read while I'm away, but don't count on it.

I'll be back the weekend of August 21st - expect to see more motor control discussion around then.

Tuesday, 27 July 2010

The noisy brain

Noise is a funny word. When we think of it in the context of everyday life, we tend to focus on distracting background sounds. Distracting from what? Usually whatever we’re doing at the time, whether it’s having a conversation or watching TV. In most cases, what we’re trying to do is interpret some signal – like speech – that’s corrupted by background noise. Neurons in the brain have also often been thought of as sending signals corrupted by noise, which seems to make intuitive sense. But that’s not quite the whole story.

The very basics: neurons ‘fire’ and send signals to one another in the form of action potentials, which can be recorded as ‘spikes’ in their voltage. So when a neuron fires, we call that a spike. The spiking activity of neurons has an inherent variability, i.e. neurons won’t always fire in the same situations each time, probably due to confounding influences from metabolism and external inputs (like sensory information and movement). In other words, the signal is transmitted with some background ‘noise’. What’s kind of interesting about this paper (and others) is that variability in the neural system is starting to be thought of as part of the signal itself, rather than an inherently corrupting influence on it.

Today we delve back into the depths of neural recording with a study that investigates trial-to-trial variability during motor learning. That is: how does the variability of neurons change as learning progresses, and what can this tell us about the neural mechanisms? This paper gets a bit technical, so hang on to your hats.

One important measure used in the paper is something called the Fano Factor. The variability in neuronal spiking is dependent on the underlying spiking rate, i.e. as the amount of spiking increases, so does the variability; this is known as signal-dependent noise. This effect means that we can’t just look at the raw variability in the spiking activity – we have to normalise it by the average spiking activity. The Fano Factor (FF) does precisely this (you can look up the formal definition if you like). It’s basically just another way of saying ‘variability’ – I mention it only because it’s necessary to understand the results of the experiment!
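The definition itself is tiny – variance of the spike counts divided by their mean. A quick sketch (my own code, not the paper’s analysis pipeline):

```python
import numpy as np

def fano_factor(spike_counts):
    """Fano factor: variance of spike counts divided by their mean."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# For Poisson-like firing the Fano factor sits near 1 whatever the
# firing rate -- which is why dividing by the mean removes the
# signal-dependent part of the noise.
rng = np.random.default_rng(0)
ff = fano_factor(rng.poisson(lam=20.0, size=10000))
```

Values above 1 mean the neuron is more variable than a Poisson process at the same rate; values below 1 mean it is more regular.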

Ok, enough rambling. What did the researchers do? They trained a couple of monkeys on a reaching task where they had to learn a 90° visual rotation, i.e. they had to learn to reach to the right to hit a target in front of them. While learning, their brain activity was recorded and the variability was analysed in two time periods: before the movement, termed ‘preparatory activity’ and during the movement onset, termed ‘movement-related activity’. Neurons were recorded from the primary motor cortex, which is responsible for sending motor commands to the muscles, and the supplementary motor area, which is a pre-motor area. In the figure below, you can see some results from motor cortex (Figure 2 A-C in the paper):

Neural variability and error over time

Panel B shows the learning rate of monkeys W (black) and X (grey) – as the task goes on, the error decreases, as expected. Note that monkey W is a faster learner than monkey X. Now look at panel A. You can see that in the preparatory time period (left) variability increases as the errors reduce for each monkey – it happens first in monkey W and then in monkey X. In the movement-related time period (right) there’s no increase in variability. Panel C just shows the overall difference in variability in motor cortex on the opposite (contralateral) side vs. the same (ipsilateral) side: the limb is controlled by the contralateral side, so it’s unsurprising that there’s more variability over there.

Another question the researchers asked was: in which kinds of cells was the variability greatest? In primary motor cortex, cells tend to have a preferred direction – i.e. they will fire more when the monkey reaches to a target in that direction than in other directions. The figure below (Figure 5 in the paper) shows the results:

Variability with neural tuning

For both monkeys, it was only the directionally tuned cells that showed the increase in variability (panel A). You can see this even more clearly in panel B, where they aligned the monkeys’ learning phases to look at all the cells together. So it seems that it is primarily the cells that fire more in a particular direction that show the learning-related increase in variability. And panel C shows that it’s cells that have a preferred direction closest to the required movement direction that show the modulation.

(It’s worth noting that on the right of panels B and C is the spike count – the tuned cells have a higher spike count than the untuned cells, but the researchers show in further analyses that this isn’t the reason for the increased variability.)

I’ve only talked about primary motor cortex so far: what about the supplementary motor area? Briefly, the researchers found similar changes in variability, but even earlier in learning. In fact the supplementary motor area cells started showing the effect almost at the very beginning of learning.

Phew. What does this all mean? Well: the fact that there’s increased variability only in the pre-movement states, and only in the directionally tuned cells, suggests a ‘searching’ hypothesis – the system may be looking for the best possible network state before the movement, but only in the direction that’s important for the movement. So it appears to be a very local process that’s confined to cells interested in the direction the monkey has to move to complete the task. And further, this variability appears earlier in the supplementary motor area – consistent with the idea that this area precedes the motor cortex when it comes to changing its activity through learning.

This is really cool stuff. We’re starting to get an idea of how the inherent variability in the brain might actually be useful for learning rather than something that just gets in the way. The idea isn’t too much of a surprise to me; I suggest Read Montague’s excellent book for a primer on why the slow, noisy, imprecise brain is (paradoxically) very good at processing information.

--

Mandelblat-Cerf, Y., Paz, R., & Vaadia, E. (2009). Trial-to-Trial Variability of Single Cells in Motor Cortices Is Dynamically Modified during Visuomotor Adaptation. Journal of Neuroscience, 29 (48), 15053-15062. DOI: 10.1523/JNEUROSCI.3011-09.2009

Images copyright © 2009 Society for Neuroscience

Wednesday, 21 July 2010

Lazy beats sloppy

ResearchBlogging.orgToday I give in to my inner lazy person (who is, in fact, quite similar to my outer lazy person) and talk about a paper after I’ve just been to a journal club, rather than before. The advantages are that I was reading the paper anyway and I’ve just had an hour of discussion about it so I don’t actually have to think of things to say about it myself. The disadvantages are that, um, it’s lazy? And that’s bad? Perhaps. But I still think it’s better, as we shall see, than sloppy.

The premise of the paper harks back to my earlier post on visual dominance and multisensory integration. It’s been well known in the literature for a while that if you flash a couple of lights while at the same time playing auditory beeps, an interesting little illusion occurs. If participants are asked to count the flashes, and the number of flashes matches the number of beeps, they almost always get the answer right. But if there are two flashes and one beep, or one flash and two beeps, then they’re much more likely to say there was one or two flashes respectively. The figure below (Figure 1 in the paper) illustrates this:

Illusion when the hand is at rest

In the figure, you can see that the bars for one beep with one flash (far left, black) and two beeps with two flashes (far right, white) sit at heights 1 and 2 respectively: the number of perceived flashes is just what you’d expect, one for one flash and two for two flashes. However, the middle bars, which show the one beep/two flash and two beep/one flash conditions, sit at intermediate heights, showing the presence of the illusion. This figure actually demonstrates the first problem with the paper, which is that the figures are pretty difficult to interpret; I know I wasn’t alone in the lab in finding them confusing.

What the authors were interested in is whether a goal-directed movement could alter visual processing, and they used the illusion to probe this. Participants had to make point-to-point reaches from a start point to a target. During the reach their susceptibility to the illusion was tested at the target point – but the test began a variable time away from the start of the movement, between 0 and 250 ms. That is: sometimes the flashes and beeps occurred at the start of the movement when the arm was moving slowly, sometimes when it was half way through and thus moving faster, and sometimes at the end when it was moving slowly again.

The experimenters found that, when there were two flashes and one beep, participants were less likely to see an illusion during the middle part of their movement than during the beginning and end. That is, they were more likely to get it right when they were moving faster. The trouble starts when you look a bit closer at the effect they’ve got – it’s pretty weak. There seems to be a lot of noise in the data, and the impression that they’re grasping at straws a little isn’t helped by the aforementioned sloppy figures.

Having said that, the stats do hold up. So what might explain this kind of effect? The multisensory integration argument is that the sensory modality with the least noise (e.g. vision) should be prioritized. So when the arm is moving quickly, there’s more noise in the motor system relative to the visual system, and thus you’re better at determining how many flashes there are. I’m not sure I buy this; the illusion is between the visual and auditory systems, after all, and it’s not obvious why you’d be better at resisting it while moving quickly than while at rest. The authors claim that the limb movement “requires extensive use of visual information”, but again I’m not so sure. When we reach for objects we generally take note of where our arm is, look at the object, and then move the arm to the object without looking at the arm again.
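For the curious, the standard maximum-likelihood account of cue combination is easy to write down: each sense is weighted by its reliability, i.e. its inverse variance. Here’s a minimal sketch – the estimates and noise levels are invented for illustration, not taken from the paper:

```python
def integrate(est_a, var_a, est_b, var_b):
    # Maximum-likelihood cue combination: each cue is weighted by
    # its inverse variance (its reliability).
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    combined = w_a * est_a + w_b * est_b
    combined_var = 1 / (1 / var_a + 1 / var_b)
    return combined, combined_var

# Made-up numbers: vision says "2 flashes" but is temporally noisy
# (variance 1.0); audition says "1 beep" and is precise (variance 0.25).
est, var = integrate(2.0, 1.0, 1.0, 0.25)
```

With these invented numbers the combined count comes out at 1.2 – pulled strongly towards the more reliable auditory cue, which is essentially the illusion in the middle bars of the figure.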

So, a weak effect that isn’t well explained. That wouldn’t be so bad, but the clarity of the paper is also lacking. There’s also the question of why, given such a weak effect, they didn’t do another experiment or two to tease out what was really going on. I do think the slightly larger problem here is the review process at PLoS ONE. The journal is open access, so anyone can read it free online – which I am very much in favour of – but its review focuses on the methods and results of a paper rather than the introduction and discussion. I go back and forth over whether this is a good thing. Some journals reject papers based on novelty (a.k.a. coolness), whereas PLoS ONE strives to accept well-performed science regardless of how ‘interesting’ (and I use the quotes advisedly) the result is.

In this case I think that, while the science is good, it would be a much better paper if it went a bit more into depth with a couple of extra experiments exploring these effects more carefully – and if it had figures that were perhaps a bit easier to comprehend.

--

Tremblay L, & Nguyen T (2010). Real-time decreased sensitivity to an audio-visual illusion during goal-directed reaching. PloS one, 5 (1) PMID: 20126451

Image copyright © 2010 Tremblay & Nguyen

The name defines the thing

So, astute readers may notice a name change here - I've decided to go back to my old WordPress blog title (which never had more than five posts over its year-long lifespan, a perfect example of my habit of enthusiastically starting projects and never following through). I used to own the domain motorchauvinist.com but no longer. Oh well. Blogger will do for the moment.

Why motor chauvinism? I'd like to disassociate myself from the idea that I am in any way interested a) in cars and b) in denigrating women! I first came across the term in a paper written in 2001:
"From the motor chauvinist's point of view the entire purpose of the human brain is to produce movement." -- Wolpert, Ghahramani & Flanagan (2001)
The authors go on to explain how movement underlies everything we do, our every interaction with the world, all communication (speech, sign language, gestures and writing), and so on and so forth. While I rather like the idea I want to make clear that this blog isn't going to specifically advocate for the notion. Rather, I just thought that since it's pretty specifically about movement neuroscience and not just about reading random papers it might be fun to redefine it a little more sharply.

I hope to have some guest posters who will be able to talk more about things that I don't know much about, but that's a plan for later. Right now, welcome again to my blog, which will be doing the same kinds of things it has been doing for a couple of months, just under a different name.

Oh - I apologise for those who have linked to the blog under the previous name, as those links are now unlikely to work. That's why I've changed the name now rather than after a couple more months. Assuming, as I note above, that I stick with it...

The paper that the quote is from is very good, by the way, and you should definitely read it if you can. It also contains Calvin & Hobbes cartoons. What's not to like?

--

Wolpert DM, Ghahramani Z, & Flanagan JR (2001). Perspectives and problems in motor learning. Trends in cognitive sciences, 5 (11), 487-494 PMID: 11684481

Monday, 19 July 2010

Far out is not as far out as you think

ResearchBlogging.orgProprioception is the sense of where your body is in space. It is one of several sources of sensory information the brain uses to figure out where your limbs and the rest of you are, alongside vision and the vestibular system of the inner ear (though the latter is more important for balance). Proprioceptive information comes from receptors signalling the lengths of muscles, the positions of joints and the stretch of the skin.

How, if at all, does the accuracy and precision of this information vary across different tasks and limb configurations? To test this, the authors of today’s study had their participants perform three experimental tasks that involved matching perceived limb position without being able to see the arm. In the first task, participants used a joystick to rotate a virtual line, displayed on a screen positioned over their limb, until they judged it to be aligned with their forearm. In the second task, they used a joystick to move a dot around until they judged it to be over their index finger. In the third task, they again saw a virtual line on the screen, but this time they had to actively move their forearm until they judged it to be aligned with the line.

The results were kind of interesting: in all three cases, participants tended to overestimate the position of their limbs when they were at extremes; i.e. when they were more flexed they assumed they were even more flexed, and when they were more extended they assumed they were even more extended. This is quite confusing to explain, but the figure below (Figure 4A in the paper) should help:

Estimates of arm position from one participant

The black lines are the actual position of the arm of a representative participant in task 1, with flexion on the left and extension on the right. Blue lines are the participant’s estimates of arm position, and the red line is the average of the estimates. You can see that when the arm is flexed the participant guesses that it’s more flexed than it actually is, with the corresponding result for when the arm is extended. The researchers found no differences in accuracy between the three tasks, but they did find differences in precision – participants were much more precise, i.e. the spread of their responses was lower, in the passive fingertip task and the active elbow movement task (tasks 2 and 3).

So what? Well, these results give us an insight into how proprioception works. The authors argue that the bias towards thinking you’re more flexed/extended than you really are comes from the overactivity of joint and skin receptors as the limb reaches its extreme positions. Why might these receptors become overactive at extreme positions? Possibly because it allows us to sense ahead of time when we’re getting to a point of movement that is mechanically impossible for the limb to perform, either because we’re trying to flex it too much or we’re trying to straighten it too much. Push too hard at either extreme – muscles are quite strong – and you could damage the limb. Better for the system to make you stop pushing earlier by giving you a signal that you’re further along than you thought. I think it’s a nice hypothesis.
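To make the direction of the bias concrete, here’s a toy model – entirely my own caricature, not the authors’ analysis, with an invented gain and midpoint – where perceived joint angle is an exaggerated version of the actual angle about the mid-range:

```python
def perceived_angle(actual, midpoint=90.0, gain=1.25):
    # Toy model of the reported bias: perceived position is pushed
    # away from the mid-range, so extremes feel more extreme.
    # (Convention assumed here: smaller joint angle = more flexed.)
    return midpoint + gain * (actual - midpoint)

flexed = perceived_angle(40.0)     # → 27.5: feels more flexed than it is
extended = perceived_angle(140.0)  # → 152.5: feels more extended
```

Any gain above 1 reproduces the pattern in Figure 4A: estimates overshoot in whichever direction the limb is already heading.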

I quite like this study, as it’s another one of those not-wildly-exciting-but-useful-to-know kinds of papers. While the wildly exciting stuff is great, I think that too often the worthy, low-key stuff like this is unfairly overshadowed. Science is about huge leaps and paradigm shifts much less than it’s about the slow grind of data making possible incremental progress on various questions. And I’m not just saying that because that’s what all my papers are like!

---

Fuentes, C., & Bastian, A. (2009). Where Is Your Arm? Variations in Proprioception Across Space and Tasks Journal of Neurophysiology, 103 (1), 164-171 DOI: 10.1152/jn.00494.2009

Image copyright © 2010 The American Physiological Society

Monday, 12 July 2010

It's better to keep what works than to try something new

ResearchBlogging.orgIt seems I just can’t leave this topic alone. Last week I blogged about a paper on use-dependent learning, which discussed how it’s not only the errors you make that contribute to your learning of a motor task, but that your movements become more similar to movements you’ve already made. Today’s paper deals with something similar, but from a different perspective: that of optimal feedback control.

I discussed OFC in another previous post, but a quick recap: the theory holds that to make a movement, the brain optimizes the motor commands it sends out to trade off effort (and the noise it adds to the system) against error (i.e. how far off the target you are). So an optimal solution to reaching for a pint in the pub should minimize both error and effort while acquiring the target in a timely manner.
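The trade-off can be sketched in a few lines. This is my own toy one-step example with a made-up effort weight, not anything from the paper:

```python
def cost(u, target=1.0, w_effort=0.5):
    # One-step caricature of optimal control: pretend the command u
    # produces an outcome equal to u, so cost trades off squared
    # error against squared effort.
    error = (target - u) ** 2
    effort = w_effort * u ** 2
    return error + effort

# Grid search for the command with minimal combined cost.
us = [i / 1000 for i in range(2001)]
best_u = min(us, key=cost)
```

The optimal command here comes out around 0.667 – notably short of the target of 1.0, because pushing all the way costs more effort than the residual error is worth. That kind of systematic undershoot is a classic prediction of effort costs.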

In the study I’ll discuss today, the authors make the claim that if this optimization happens at all it is local, not global. That is, people tend not to optimize to find the best possible solution, but rather they optimize until they find one that works well enough and then stick to it – even when there’s a better solution overall. To investigate this, the experimenters attached participants to a robotic wrist device that pushed their wrist back and forth at a certain frequency. Participants saw a visual target on the screen and a cursor representing their wrist amplitude; they had to keep the amplitude below a certain level to keep the cursor in the target.

The task was rather cunningly set up so that the participants could perform it in one of two ways: either by co-contracting their wrist muscles strongly against the perturbation, or by relaxing the muscles, which obviously requires less effort. (For an analogy, imagine riding a bike down a cobbled hill; you can either make the handlebars really stiff or relax and let the jolting push you around a bit, but if you do something in the middle the jolting will make you fall over.) Participants were either given ‘free’ trials where they could choose which strategy to use, or ‘forced’ trials where they were pushed into a certain strategy at the start of the task by visual feedback.

After being given three ‘free’ trials they were then given three ‘forced’ trials in the strategy they didn’t pursue the first time, so if they had freely chosen the ‘relaxed’ strategy, they were pushed into the ‘co-contract’ strategy. Then they were given three more ‘free’ trials and then three more ‘forced’ trials in the other strategy, and finally three more ‘free’ trials. You can see a representative participant in the figure below (part of Figure 2A in the paper):


Co-activation in one representative participant across time

Here the dark areas are regions of low movement amplitude at given levels of maximum voluntary co-activation – i.e. they’re the areas you want to stay in to perform the task correctly. If your co-contraction falls somewhere in between, you’ll end up in the white area in the middle and fail the task. The traces show the five sets of trials: the first ‘free’ set is white, the first ‘forced’ set is blue, the next ‘free’ set is green, the next ‘forced’ set is yellow, and the final ‘free’ set is red. What you can see clearly from this graph is that in each ‘free’ set participants tended to stick with the strategy they’d been pushed into during the previous set of ‘forced’ trials, regardless of whether it was actually the lower-effort solution. That is, subjects tended to do what they’d done before, whether or not a better solution existed.

Sound familiar? Like in use-dependent learning, participants tended to do things they’d already done rather than make a new solution. And again, it makes sense to me that this would happen. The authors in this paper argue that the brain is forming ‘motor memories’ that are also used in the optimization process, and that the optimization itself is thus local and not global. I guess I can buy that, but only in the sense that these ‘motor memories’ are patterns of activation that have been learnt by the network. It takes metabolic energy to create new connections and learn a new pattern, so any optimization process would have to take this into account along with error and effort.
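The local-versus-global distinction is exactly the one you meet in numerical optimization: gradient descent settles into whichever valley it starts in. A short sketch, with a cost landscape I’ve invented purely for illustration:

```python
def cost(x):
    # A toy effort landscape with two valleys: a shallow local
    # minimum near x = 2 (think 'co-contract') and a deeper global
    # one near x = -2 (think 'relax').
    return (x**2 - 4) ** 2 + x

def gradient_descent(x, lr=0.01, steps=2000):
    # Local optimization: follow the slope downhill from wherever
    # you happen to start.
    for _ in range(steps):
        grad = 4 * x * (x**2 - 4) + 1  # derivative of cost
        x -= lr * grad
    return x

# Starting from the strategy you were 'forced' into, descent settles
# into the nearby valley, not the globally cheapest one.
from_forced = gradient_descent(2.0)
from_other = gradient_descent(-2.0)
```

Both runs converge, but to different strategies with different costs – a local optimizer keeps whatever worked last time, which is just what the participants did.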

It might even explain the existence of straight-line movements in non-optimal situations. If you’ve moved in straight lines all your life because it’s an efficient and effective way to move, then when you’re suddenly placed in an environment where moving in a straight line is more effortful and therefore non-optimal, it’s going to be very difficult to unlearn the deep network optimization you’ve been building your whole life.

There’s more to the paper too; I think it’s great.

---

Ganesh, G., Haruno, M., Kawato, M., & Burdet, E. (2010). Motor memory and local minimization of error and effort, not global optimization, determine motor behavior Journal of Neurophysiology DOI: 10.1152/jn.01058.2009

Image copyright © 2010 The American Physiological Society

Thursday, 8 July 2010

Motor learning changes where you think you are

ResearchBlogging.orgI’ve covered both sensory and motor learning topics on this blog so far, and here’s one that very much mashes the two together. In earlier posts I have written about how we form a percept of the world around us, and about our sense of ownership of our limbs. In today’s paper the authors investigate the effect of learning a motor task on sensory perception itself.

They performed a couple of experiments, in slightly different ways, which essentially showed the same result – so I’ll just talk about the first one here. Participants had to make point-to-point reaches while holding a robotic device in three phases (null, force field and aftereffect) separated by perceptual tests designed to assess where they felt their arm to be. The figure below (Figure 1A in the paper) shows the protocol and the reaching error results:

Motor learning across trials

In the null phase, as usual, participants reached without being exposed to a perturbation. In the force field phase, the robot pushed their arm to the right or to the left (blue or red dots respectively), and you can see from the graph that they made highly curved movements to begin with and then learnt to correct them. In the aftereffect phase, the force was removed, but you can still see the motor aftereffects from the graph. So motor learning definitely took place.

But what about the perceptual tests? It turns out that participants’ estimation of where their arm was changed after learning the motor task. In the figure below (Figure 2B and 2C in the paper) you can see in the left graph that after the force field (FF) trials, hand perception shifted in the opposite direction to the force direction. [EDIT: actually it's in the same direction; see the comments section!] This effect persisted even after the aftereffects (AE) block.


Perceptual shifts as learning occurs

What I think is even more interesting is the graph on the right. It shows not only the right and left (blue and red) hand perceptions, but also the hand perception after 24 hours (yellow) – and, crucially, the hand perception when participants didn’t make the movements themselves but allowed the robot to move them (grey). As you can see, there’s no perceptual shift. It only appears to happen when participants make active movements through the force field, which means that the change in sensory perception is closely linked to learning a motor task.

In some ways this isn’t too surprising, to me at least. In some of my work with Adrian Haith (happily cited by the authors!), we developed and tested a model of motor learning that requires changes to both sensory and motor systems, and showed that force field learning causes perceptual shifts in locating both visual and proprioceptive targets; you can read it free online here. The work in this paper seems to shore up our thesis that the motor system takes into account both motor and sensory errors during learning.

Some of the work I’m dabbling with at the moment involves neuronal network models of motor learning and optimization. This kind of paper, showing the need for changes in sensory perception during motor learning, throws a bit of a spanner into the works of some of that. As they stand, the models tend to treat sensory input as static and merely change motor output as learning progresses. Perhaps we need to think a bit more carefully about that.

---

Ostry DJ, Darainy M, Mattar AA, Wong J, & Gribble PL (2010). Somatosensory plasticity and motor learning. The Journal of Neuroscience, 30 (15), 5384-93 PMID: 20392960

Images copyright © 2010 Ostry, Darainy, Mattar, Wong & Gribble

Monday, 5 July 2010

Baby (not quite) steps

ResearchBlogging.orgMany non-scientists misunderstand the basic way science works. While there are indeed huge discoveries that fundamentally change the way we think about things, the vast majority of published papers are a steady plod onwards, adding very modest amounts to the staggering array of human knowledge. Often seismic shifts in scientific opinion come not from great discoveries but from many scientists reading the literature, arguing among themselves and coming to different conclusions through the slow burn of new thoughts and experiments. Such is the case with this paper: it is no Nobel prize-winner, but a small and useful addition to the literature.

Also, it is about babies. Yay babies!

Babies: hard to test but fun

Babies are hard to test. This is true for several reasons: they can’t give informed consent to studies, they can’t follow instructions and they can’t give verbal feedback. But that doesn’t stop people trying. Parents can give consent for their children; behaviours can be elicited by non-verbal means and recorded in lieu of verbal feedback. And of course it’s interesting to study babies in the first place to look at the development of the motor system.

In this paper, the authors look at clinical observation of four motor behaviours: abdominal progression (i.e. crawling), sitting motility, reaching and grasping motility. The authors identify two distinct stages in infant motor development after birth: primary variability and secondary variability. Primary variability is characterized by general movements of the whole body that don’t appear to be geared towards accomplishing a task. Secondary variability is much more task-specific and can be adapted to particular situations. It’s the transitions from primary to secondary variability in these motor behaviours that the authors are interested in.

To test when their infant participants began to make adaptive movements, the authors tested children at various ages ranging from 3 to 18 months. Different types of movements were induced – for example, trying to get children to reach for toys or crawl towards them. The movements were recorded on video, and two of the study’s authors scored the videos for whether the movements showed ‘no selection’ or ‘adaptive selection’. Since I am interested mainly in reaching, here are the results from the reaching scores (Figure 4 in the paper):

Selection in infant reaching movements across development

You can see that at the youngest ages almost all movements show ‘no selection’ (hatched bars). Then between 6 and 8 months ‘adaptive selection’ movements (black bars) start to appear, increasing significantly in frequency between 6 and 8 months and again between 12 and 15 months.

When rating videos like this, the reliability of the rating is very important. The authors tested inter-rater reliability by having two raters, and also intra-rater reliability by having the same rater score the videos once and then again after a month. Mostly they found that the reliability was very high, though it seems to me that they should perhaps have had a couple more raters in there just in case. To their credit, they do admit this as a limitation of their study.
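For what it’s worth, agreement between raters is usually quantified with something like Cohen’s kappa, which corrects raw agreement for what you’d expect by chance. I don’t know exactly which statistic the authors used, so here’s a generic sketch with invented ratings:

```python
def cohens_kappa(ratings_a, ratings_b):
    # Agreement between two raters, corrected for chance agreement.
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    expected = sum(
        (ratings_a.count(l) / n) * (ratings_b.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)

# Invented ratings: 'N' = no selection, 'A' = adaptive selection.
a = ['N', 'N', 'A', 'A', 'N', 'A', 'N', 'A', 'N', 'N']
b = ['N', 'N', 'A', 'N', 'N', 'A', 'N', 'A', 'N', 'A']
kappa = cohens_kappa(a, b)
```

Here the raters agree on 8 of 10 videos, but because a fair bit of that agreement is expected by chance, kappa comes out around 0.58 – 'moderate' agreement, well short of what you’d want for clinical scoring.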

So assuming that the rating is reliable, what do we now know? Well, it’s kind of interesting that for the four behaviours observed, the onset from the video ratings is a few months later in all cases than when you do neurophysiological testing (as people have done before). That is, if you measure brain activity (see the first picture in this post!) or muscle activity, you can observe patterns of motor activity that become noticeably more synchronized way before you can observe these changes by eye.

It’s useful to know this because you can’t hook every baby that comes into a busy clinic up to a set of wires to record brain and muscle activity, nor spend hours analyzing the results of those recordings. What you can do as a busy clinician is take note of the types of movements and when the transitions appear – as the authors note at the end, it would be interesting to apply this kind of study to the ages of transition in infants with a high probability of developing motor disorders (such as cerebral palsy).

Overall verdict: a nice short study with some possible clinical impact.

---

Heineman, K., Middelburg, K., & Hadders-Algra, M. (2010). Development of adaptive motor behaviour in typically developing infants Acta Paediatrica, 99 (4), 618-624 DOI: 10.1111/j.1651-2227.2009.01652.x

Baby EEG image copyright © 2010 Apple Inc.

Image from paper copyright © 2009 Heineman, Middleburg & Hadders-Algra

Wednesday, 30 June 2010

Errors and use both contribute to learning

ResearchBlogging.orgLearning how to make a reaching movement is, as I’ve said before, a very hard problem. There are so many muscles in the arm and so many ways we can get from one point to another that there are for all intents and purposes an infinite set of ways the brain could choose to send motor commands to achieve the same goal. And yet what we see consistently from people is a very stereotyped kind of movement.

How do we learn to make reaching movements in the presence of destabilizing perturbations? The standard way of thinking about this assumes that if you misreach, your motor system will notice the error and do better next time, whether through recalibration of the sensory system or through a new cognitive strategy to better achieve the goal. But this paper from Diedrichsen et al. (2010) proposes a learning mechanism in addition to error-based learning: something they call use-dependent learning.

The basic idea is that if you’re performing a task, like reaching to an object straight ahead, and you’re constantly getting pushed off to the side, you’ll correct for these sideways perturbations using error-based learning. But you’re also accumulating experience of moving in the non-perturbed direction, and the more of these movements you make, the more each new movement comes to resemble the ones before it.

The authors demonstrate this with some nice experiments using a redundant movement task – rather than moving a cursor to a target as in standard motor control tasks, participants had to move a horizontal bar up the screen to a horizontal bar target. The key thing is that it was only the vertical movement that made the bar move; horizontal movements had no effect. In the first experiment, participants initially reached to the bar before being passively moved by a robotic system in one of two directional tilts (left or right) and were then allowed to move by themselves again. The results are below (Figure 1 in the paper):


Redundant reaching task

You can see that after the passive movement was applied, the overall angle changed depending on whether it was to the left (blue) or right (red). Remember that the tilt was across the task-redundant (horizontal) dimension, so it didn’t cause errors in the task at all! Despite this, participants continued to reach in the way that they’d been forced to do after the passive movement was finished – demonstrating use-dependent learning.

To follow this up, the authors did two more experiments. The first showed that error-based and use-dependent learning are separate processes and occur at the same time. They used a similar task but this time rather than a passive movement participants made active reaches in a left- or right-tilting ‘force channel’. This time the initial angle results showed motor aftereffects that reflected error-based learning, while the overall angle showed similar use-dependent effects as in the first experiment.

Finally, they investigated use-dependent learning in a perturbation study. As participants moved the bar toward the target they had to fight against a horizontal force proportional to their velocity (i.e. it got bigger as they went faster). Unlike in a ‘standard’ perturbation study (a reach to a point target, where participants can see their horizontal error), the horizontal deviations in the redundant task weren’t corrected over learning. However, the initial movement directions drifted in the direction of the force field – meaning that as participants learnt the task, the planned movement direction changed through use-dependent learning.

I think this is a really cool idea. Most studies focus on error as the sole basis for driving motor learning, but thinking about use-dependent learning makes sense because of what we know about how the brain makes connections through something called Hebbian learning. Basically, though an oversimplification: ‘what fires together, wires together’, which means that connections tend to strengthen if they are used a lot and weaken if they are not. So it seems reasonable (to me at least!) that if you make a movement, you’re more likely to make another one like it than come up with a new solution.
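As a cartoon of the Hebbian idea (my own toy sketch, nothing to do with the paper’s methods): repeatedly producing a movement pattern strengthens exactly the weights that produce it, so the network drifts towards reproducing what it has already done.

```python
def hebbian_update(w, pre, post, lr=0.1):
    # 'What fires together, wires together': each weight grows in
    # proportion to the product of pre- and post-synaptic activity.
    return [wi + lr * xi * post for wi, xi in zip(w, pre)]

# A repeatedly practised movement pattern strengthens its own weights.
w = [0.0, 0.0, 0.0]
pattern = [1.0, 0.5, 0.0]  # pre-synaptic activity for one movement
for _ in range(10):
    post = sum(wi * xi for wi, xi in zip(w, pattern)) + 1.0  # +1.0: external drive
    w = hebbian_update(w, pattern, post)
```

After a few repetitions the weights mirror the practised pattern: the most active input gets the strongest weight, and the silent input gets nothing – a minimal version of ‘use’ biasing future output.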

It also might explain something about optimal feedback control that I’ve been thinking about for a while since seeing some work from Paul Gribble’s lab: we often talk about the motor system minimizing the energy required to perform a reach, but their work has shown pretty conclusively that the motor system prefers straight reaches even if the minimum energy path is decidedly not straight. There must therefore be some top-down mechanism that prioritises ‘straightness’ in the motor system, even if it’s not the most ‘optimal’ strategy for the task at hand.

Lots to chew over and think about here. I haven’t even covered the modelling work the authors did, but it’s pretty nice.

---

Diedrichsen J, White O, Newman D, & Lally N (2010). Use-dependent and error-based learning of motor behaviors. Journal of Neuroscience, 30 (15), 5159-66 PMID: 20392938

Image copyright © 2010 Diedrichsen, White, Newman & Lally

Monday, 28 June 2010

I am giving up science

No paper today, because I’ve had a fundamental rethink of my life and my priorities thanks to the august wisdom of Simon Jenkins in the Guardian.

I mean, I’ve spent the last seven years of my life learning all that the entire human race knows about how the brain controls the body. I’ve made the effort to learn technical skills, time management, writing, critical thinking and how to argue my case clearly and effectively based on sound empirical evidence. I have learnt to present my work in formats understandable by experts and non-experts alike (content of this blog notwithstanding; I do a much better job of it in the pub).

No longer shall I test people in robotic behavioural experiments and measure their muscle activity in an attempt to tease out the intricacies of how we perform complex actions. No longer shall I write computational modelling code that might give us a fundamental understanding of the neural activity that gives rise to these movements. And thus, no longer will I stay at my obviously hideously overpaid postdoc, worshipping at the altar of Big Science.

No longer! Thanks to Jenkins’ shining example, it is now clearly evident to me that I can not only make a decent living by spouting off seemingly randomly on things I know nothing about, but that I can do so with only a tenuous connection to the facts and a seeming obliviousness to my own inherent biases. (Of course, had I been paying more attention rather than clicking little pieces of graphs to mark onset and offset points of reaching movements for hours on end I would have realized that the existence of daytime TV hosts makes this intuitively obvious.)

No longer. I’ve decided to completely change my life from this point hence, give up the clearly pointless intellectual rigour involved in trying to figure stuff out, and take a job in a large financial firm that will of course be entirely exempt from the pain being inflicted on the public sector by arrogant, libertarian-minded right-wing deficit-hawk idiots. Um, I mean the Government.

---

This article is a spoof. Any comments about Simon Jenkins that might be considered to border on the libellous totally aren’t. That’s how you do these legal disclaimers, right? Well he can sue me if he wants, I don’t own anything anyway.

Here is the article that started it all, and here is the article that inspired me to write something about it. Normal service will be resumed on Wednesday.

Also: I'm not going to make it a habit to write about politics here, but you may have gathered that I'm a bit of a lefty. Whoops, cover blown...

Friday, 25 June 2010

You're only allowed one left hand

ResearchBlogging.orgIn previous posts I’ve asked how we know where our hands are and how we combine information from our senses. Today’s paper covers both of these topics, and investigates the deeper question of how we incorporate this information into our representation of the body.

Body representation essentially splits into two parts: body image and body schema. Body image is how we think about our body, how we see ourselves; disorders in body image can lead to anorexia or myriad other problems. Body schema, on the other hand, is how our brain keeps track of the body, below the conscious level, so that when we reach for a glass of water we know where we are and how far to go. There’s some fascinating work on body ownership and embodiment but you can read about that in the paper, as it’s open access!

The study is based on a manipulation of the rubber hand illusion, a very cool perceptual trick that’s simple to perform. First, find a rubber hand (newspaper inside a rubber glove works well). Second, get a toothbrush, paintbrush, or anything else that can be used to produce a stroking sensation. Third, sit your experimental participant down and stroke a finger on the rubber hand while simultaneously stroking the equivalent finger on the participant’s actual hand (make sure they can’t see their real hand!). These strokes MUST be synchronous, i.e. applied with the same rhythm. The result, after a little while, is that the participant starts to feel like the rubber hand is actually their hand! It’s a really fun effect.

There are of course limitations of the rubber hand illusion – a fake static hand isn’t the best thing for eliciting illusions of body representation, as it’s obviously fake, no matter how much you think the hand is yours. Plus it’s hard to do movement studies with static hands. The researchers got around this problem by using a camera/projection system to record an image of their participant’s hand and playing it back in real time. They got their participants to actively stroke a toothbrush rather than having the stroking passively applied to them, and then showed two images of their hand to the left and right of the actual (unseen) hand position.

The left, right or both hands were shown synchronously stroking; the other hand in the first two conditions was shown asynchronously stroking by delaying the feedback from the camera. The researchers asked through questionnaires whether participants felt they ‘owned’ each hand. You can see these results in the figure below (Figure 3B in the paper):

Ownership rating by hand stroke condition

For the left-stroke (LS) and right-stroke (RS) conditions, only the left or right image respectively was felt to be ‘owned’ whereas in the both-stroke (BS) condition, both hands were felt to be ‘owned’. This result isn’t too surprising; it’s a nice strong replication of the rubber hand results other researchers have found. Where it gets interesting is that when participants were asked to make reaches to a target in front of them they tended to reach in the right-stroke and left-stroke conditions as if the image of the hand they felt they ‘owned’ was actually theirs. That is, they made pointing errors consistent with what you would see if their real hand had been in the location of the image.

In a final test, participants in the both-stroke condition were asked to reach to a target in the presence of distractors to its left and right. Usually people will attempt to avoid distractors, even when it’s just an image or a dot that they are moving around a screen, and the distractors are just lights. However in this case participants had no qualms about moving one of the images through the distractors to reach the target with the other, even though they claimed ‘ownership’ of both.

This last point leads to an interesting idea the authors explore in the discussion section. While it seems to be possible to incorporate two hands simultaneously into the body image, this doesn’t appear to translate to the body schema. So you might be able to imagine yourself with extra limbs, but when it comes to actively moving them the motor system seems to pick one and go with it, ignoring the other (even when it hits an obstacle).

To my mind this is probably a consequence of the brain learning over many years how many limbs it has and how to move them efficiently, and any extra limbs it may appear to have at the moment can be effectively discounted. It is interesting to see how quickly the schema can adapt to apparent changes in a single limb however, as shown by the pointing errors in the RS and LS movement tasks.

I wonder if we were born with more limbs, would we learn gradually how to control them all over time? After all, octopuses manage it. Would we still see a hand dominance effect? (I’m not sure if octopuses show arm dominance!) And would we, when a limb was lost in an accident, still experience the ‘phantoms’ that amputees report? I haven’t touched on phantoms this post, but I’m sure I’ll return to them at some point.

Altogether a simple but interesting piece of work, which raises lots of intriguing questions, like good science should. (Disclaimer: I know the first and third authors of this study from my time in Nottingham. That wouldn't stop me saying their work was rubbish if it was though!)

---

Newport, R., Pearce, R., & Preston, C. (2009). Fake hands in action: embodiment and control of supernumerary limbs. Experimental Brain Research. DOI: 10.1007/s00221-009-2104-y

Image copyright © 2009 Newport, Pearce & Preston

Wednesday, 23 June 2010

The cost of uncertainty

ResearchBlogging.orgBack from my girlfriend-induced hiatus and onto a really interesting paper published ahead of print in the Journal of Neurophysiology. This work asks some questions, and postulates some answers, very similar to the line of thinking I’ve been going down recently – which is, of course, the main reason I find it interesting! (The other reason is that they used parabolic flights. Very cool.)

One theory of how the brain performs complex movements in a dynamical environment – like, say, lifting objects – is known as optimal feedback control (OFC). The basic idea is that the brain makes movements that are optimized to the task constraints. For example, to lift an object, the control system might want to minimize the amount of energy used* and at the same time lift the object to a particular position. In OFC we combine these constraints into something called a cost function: how much the action ‘costs’ the system to perform. To optimize the movement, the system simply works to reduce the total cost.
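To make the cost-function idea concrete, here's a toy sketch (my own illustrative numbers and weights, not the authors' model): a movement's cost is the sum of an effort term, penalizing big motor commands, and an accuracy term, penalizing distance from the target. The optimal controller picks the commands that make this total as small as possible.

```python
# Toy quadratic cost function in the spirit of OFC. The function name and
# the weights w_effort / w_accuracy are illustrative assumptions, not the
# paper's actual parameters.

def movement_cost(commands, final_pos, target, w_effort=1.0, w_accuracy=10.0):
    """Total 'cost' of a movement: energy spent plus task error."""
    effort = sum(u ** 2 for u in commands)   # penalize large motor commands
    error = (final_pos - target) ** 2        # penalize missing the target
    return w_effort * effort + w_accuracy * error

# A small, accurate movement is cheaper than a large, inaccurate one.
cheap = movement_cost(commands=[0.2, 0.3, 0.2], final_pos=1.0, target=1.0)
dear = movement_cost(commands=[0.9, 1.0, 0.9], final_pos=0.8, target=1.0)
```

Changing the relative weights changes which movements count as 'good' — which is exactly the kind of knob the paper below argues the brain turns.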

But where does the system get information about the limb and the task from in the first place so as to optimize its control? There are two sources for knowledge about limb dynamics. The most obvious is reactive: feedback from the senses, from both vision and proprioception (the sense of where the arm is in space). But feedback takes a while to travel to the brain and so another source is needed: a predictive source of knowledge, an internal model of the task and limb dynamics. The predictive and reactive components can be combined in an optimal fashion to form an estimate of the state of the limb (i.e. where it is and how fast it’s going). This ‘state estimate’ can then be used to calculate the overall cost of the movement.
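The 'optimal fashion' here is the standard inverse-variance weighting used in cue-combination work: each source contributes in proportion to its reliability. A minimal sketch (this is the textbook formula, not code from the paper):

```python
# Inverse-variance weighted fusion of a predictive estimate with delayed
# sensory feedback. Variable names are my own; this is the standard optimal
# cue-combination rule, assumed Gaussian and independent.

def fuse(pred, var_pred, fb, var_fb):
    """Combine two noisy estimates of hand position; the weights favour
    the more reliable (lower-variance) source."""
    w_pred = (1 / var_pred) / (1 / var_pred + 1 / var_fb)
    estimate = w_pred * pred + (1 - w_pred) * fb
    var_combined = 1 / (1 / var_pred + 1 / var_fb)  # <= either input variance
    return estimate, var_combined

# Prediction says 10 cm (reliable), feedback says 12 cm (noisier):
est, var = fuse(pred=10.0, var_pred=1.0, fb=12.0, var_fb=3.0)
# The fused estimate lies between the inputs, nearer the reliable prediction,
# and its variance is lower than either source alone.
```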

In today’s paper the authors argue that at the start of a new task, a new internal model has to be learnt, or an old one modified, to deal with the new task demands. So far so uncontroversial. What’s new here is the claim that the cost function being optimized for actually changes when dealing with a new task – because there is higher uncertainty in the internal prediction so the system is temporarily more reliant on feedback. They have some nice data and models to back up their conclusion.

The task was simple: participants had to grip a block and move it up or down from a central position while their position and grip force were recorded. After they’d learnt the task at normal gravity, they had to perform it in microgravity during a parabolic flight, which essentially made their arm and the object weightless. Their grip force increased markedly even though they now had a weightless object, and kinematic (e.g. position, velocity) measures changed too; movements took more time, and the peak acceleration was lower. Over the course of several trials the grip force decreased again as participants learnt the task. You can see some representative kinematic data in the figure below (Figure 4 in the paper):

Kinematic data from a single participant


Panels A-D show the average movement trace of one participant in normal (1 g) and microgravity (0 g) conditions, while panels E and F show the changes in acceleration and movement time respectively. The authors argue that the grip force changes during the first few trials point towards uncertainty in the internal prediction, which results in the altered kinematics.

To test this idea, they ran a simulation based on a single-joint model of the limb using OFC and the optimal combination of information from the predictive system and sensory feedback. What they varied in this model was the noise, and thus the reliability, in the predictive system. The idea was that as the prediction became less reliable, the kinematics should change to reflect more dependence on the sensory feedback. But that's not quite what happened, as you can see from the figure below (Figure 8 in the paper):
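The logic of that manipulation is easy to see with the same inverse-variance weighting as above (a stand-in assumption for the authors' full OFC model): as the prediction's variance grows, the fraction of the state estimate contributed by sensory feedback rises.

```python
# Illustration of the simulation's manipulation: injecting noise into the
# predictive system shifts the optimal estimator towards sensory feedback.
# The weighting rule is my assumption, not the paper's exact implementation.

def feedback_weight(var_pred, var_fb):
    """Fraction of the state estimate contributed by sensory feedback."""
    return (1 / var_fb) / (1 / var_pred + 1 / var_fb)

var_fb = 2.0  # fixed feedback noise
# Sweep prediction noise from reliable to very unreliable:
weights = [feedback_weight(var_pred, var_fb) for var_pred in (0.5, 2.0, 8.0)]
# The feedback weight climbs monotonically as the prediction degrades.
```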

Data and simulation results


Here the graphs show various kinematic parameters. In black and grey are the mean data points from all the participants for the upward and downward movements. The red squares show the parameters the simulation came up with when noise was injected into the prediction. As you can see, they're pretty far off! So what was the problem? Well, it seems that you need to change not only the uncertainty of the prediction but also the cost function that is being optimized. The blue diamonds show what happens when you manipulate the cost function (by increasing the parameter shown as alpha); suddenly the kinematics are much closer to the way people actually perform.

Thus, the conclusion is that when you have uncertainty in your predictive system, you actually change your cost function while you're learning a new internal model. I find this really interesting because it's a good piece of evidence that uncertainty in the predictive system feeds into the selection of a new cost function for a movement, rather than the motor system just sticking with the old cost function and continuing to bash away.

It's a nice paper but I do wonder, why did the authors go to all the trouble of using parabolic flights to get the data here? If what they're saying is true and any uncertainty in the internal model/predictive system is enough to make you change your cost function, this experiment could have been done much more simply – and for much longer than the 30 trials they were able to do under microgravity – by just using a robotic system. Perhaps they didn't have access to one, but even so it seems a bit of overkill to spend money on parabolic flights which are so limited in duration.

Overall though it's a really fun paper with some interesting and thought-provoking conclusions.

*To be precise there is some evidence that it's not the amount of energy used that gets minimized, but the size of the motor command itself (because a bigger command has more variability due to something called signal-dependent noise... I'm not going to go into that though!).
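A quick numerical illustration of signal-dependent noise (the linear scaling constant here is my own assumption, purely for demonstration): if a command's variability is proportional to its size, bigger commands produce proportionally bigger spreads in outcome, which is why minimizing the command itself keeps movements precise.

```python
import random

# Signal-dependent noise sketch: a command u is executed with Gaussian noise
# whose standard deviation is k * |u|. The constant k = 0.2 is an assumption.

def endpoint_spread(u, k=0.2, n=2000, seed=42):
    """Empirical std of outcomes for command u under signal-dependent noise."""
    rng = random.Random(seed)
    samples = [u + rng.gauss(0, k * abs(u)) for _ in range(n)]
    mean = sum(samples) / n
    return (sum((s - mean) ** 2 for s in samples) / n) ** 0.5

small = endpoint_spread(1.0)
large = endpoint_spread(5.0)
# With the same random seed the spread scales exactly with the command:
# a command five times bigger is five times more variable.
```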

---

Crevecoeur, F., McIntyre, J., Thonnard, J., & Lefevre, P. (2010). Movement Stability under Uncertain Internal Models of Dynamics. Journal of Neurophysiology. DOI: 10.1152/jn.00315.2010

Images copyright © 2010 The American Physiological Society