Wednesday 30 June 2010

Errors and use both contribute to learning

Learning how to make a reaching movement is, as I’ve said before, a very hard problem. There are so many muscles in the arm and so many ways we can get from one point to another that there is, for all intents and purposes, an infinite set of ways the brain could choose to send motor commands to achieve the same goal. And yet what we see consistently from people is a very stereotyped kind of movement.

How do we learn to make reaching movements in the presence of destabilizing perturbations? The standard way of thinking about this assumes that if you misreach, your motor system will notice the error and do better next time, whether through recalibration of the sensory system or through a new cognitive strategy to better achieve the goal. But this paper from Diedrichsen et al. (2010) proposes a learning mechanism in addition to error-based learning: something they call use-dependent learning.

The basic idea is that if you’re performing a task, like reaching to an object straight ahead, and you’re constantly getting pushed off to the side, you’ll correct for these sideways perturbations using error-based learning. But you’re also learning, simply through repetition, to make movements in the non-perturbed direction: the more of these movements you make, the more each new one comes to resemble the last.
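To make the distinction concrete, here is a minimal toy sketch of my own (not the authors’ model, which I don’t cover below): a planned reach angle is updated by an error-based term that responds only to task-relevant error, and a use-dependent term that pulls the plan towards whatever movement was just executed. The learning rates A and B and the 8° tilt are made-up values, and the setup mimics the redundant task described below, where the sideways tilt produces no task error.

```python
A = 0.2    # error-based learning rate (hypothetical)
B = 0.1    # use-dependent learning rate (hypothetical)
plan = 0.0 # planned horizontal angle, in degrees from straight ahead

for trial in range(30):
    executed = 8.0                 # the robot tilts every movement to +8 degrees
    task_error = 0.0               # the tilt is task-redundant, so there is no task error
    plan += -A * task_error        # error-based term: nothing to correct here
    plan += B * (executed - plan)  # use-dependent term: drift towards what was just done

print(f"planned angle after 30 tilted movements: {plan:.1f} deg")
```

Even though the error-based term never fires, the plan still drifts towards the tilted direction, which is the kind of signature the experiments below are designed to pick up.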

The authors demonstrate this with some nice experiments using a redundant movement task – rather than moving a cursor to a target as in standard motor control tasks, participants had to move a horizontal bar up the screen to a horizontal target bar. The key thing is that only the vertical component of the movement moved the bar; horizontal movements had no effect. In the first experiment, participants initially reached to the bar, were then passively moved by a robotic system along one of two tilted directions (left or right), and were then allowed to move by themselves again. The results are below (Figure 1 in the paper):


Redundant reaching task

You can see that after the passive movement was applied, the overall angle changed depending on whether the tilt was to the left (blue) or right (red). Remember that the tilt was along the task-redundant (horizontal) dimension, so it didn’t cause errors in the task at all! Despite this, once the passive movements were finished, participants continued to reach in the way they’d been forced to move – demonstrating use-dependent learning.

To follow this up, the authors did two more experiments. The first showed that error-based and use-dependent learning are separate processes that occur at the same time. They used a similar task, but this time, rather than being passively moved, participants made active reaches in a left- or right-tilting ‘force channel’. This time the initial angle results showed motor aftereffects reflecting error-based learning, while the overall angle showed use-dependent effects similar to those in the first experiment.

Finally, they investigated use-dependent learning in a perturbation study. As participants moved the bar toward the target they had to fight against a horizontal force proportional to their velocity (i.e. it got bigger as they went faster). Unlike in a ‘standard’ perturbation study (a reach to a point target, where participants can see their horizontal error), the horizontal deviations weren’t corrected after learning. However, the initial movement directions in the redundant task shifted in the direction of the force field – meaning that as participants learnt the task, the planned movement direction changed through use-dependent learning.

I think this is a really cool idea. Most studies focus on error as the sole driver of motor learning, but use-dependent learning makes sense given what we know about how the brain makes connections through something called Hebbian learning. Basically (and this is an oversimplification): ‘what fires together, wires together’, which means that connections tend to strengthen if they are used a lot and weaken if they are not. So it seems reasonable (to me at least!) that if you make a movement, you’re more likely to make another one like it than to come up with a new solution.
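For the curious, here is the textbook version of that rule in a few lines of Python. The learning rate and activity values are arbitrary, and this is of course a cartoon of what real synapses do.

```python
eta = 0.01   # learning rate (hypothetical)
w = 0.1      # strength of the connection between unit x and unit y

for _ in range(100):
    x = 1.0              # presynaptic activity: the unit is active
    y = w * x + 0.5      # postsynaptic activity, partly driven by x
    w += eta * x * y     # Hebbian rule: the weight grows with correlated activity
    # (real models add normalization or decay, or the weight grows without bound)

print(f"connection strength after repeated co-activation: {w:.2f}")
```

The point is just that repeated use strengthens the pathway, so the movements you have made before become the movements you are most likely to make again.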

It also might explain something about optimal feedback control that I’ve been thinking about for a while since seeing some work from Paul Gribble’s lab: we often talk about the motor system minimizing the energy required to perform a reach, but their work has shown pretty conclusively that the motor system prefers straight reaches even if the minimum energy path is decidedly not straight. There must therefore be some top-down mechanism that prioritises ‘straightness’ in the motor system, even if it’s not the most ‘optimal’ strategy for the task at hand.

Lots to chew over and think about here. I haven’t even covered the modelling work the authors did, but it’s pretty nice.

---

Diedrichsen, J., White, O., Newman, D., & Lally, N. (2010). Use-dependent and error-based learning of motor behaviors. Journal of Neuroscience, 30 (15), 5159-5166. PMID: 20392938

Image copyright © 2010 Diedrichsen, White, Newman & Lally

Monday 28 June 2010

I am giving up science

No paper today, because I’ve had a fundamental rethink of my life and my priorities thanks to the august wisdom of Simon Jenkins in the Guardian.

I mean, I’ve spent the last seven years of my life learning all that the entire human race knows about how the brain controls the body. I’ve made the effort to learn technical skills, time management, writing, critical thinking and how to argue my case clearly and effectively based on sound empirical evidence. I have learnt to present my work in formats understandable by experts and non-experts alike (content of this blog notwithstanding; I do a much better job of it in the pub).

No longer shall I test people in robotic behavioural experiments and measure their muscle activity in an attempt to tease out the intricacies of how we perform complex actions. No longer shall I write computational modelling code that might give us a fundamental understanding of the neural activity that gives rise to these movements. And thus, no longer will I stay at my obviously hideously overpaid postdoc, worshipping at the altar of Big Science.

No longer! Thanks to Jenkins’ shining example, it is now clearly evident to me that I can not only make a decent living by spouting off seemingly randomly on things I know nothing about, but that I can do so with only a tenuous connection to the facts and a seeming obliviousness to my own inherent biases. (Of course, had I been paying more attention rather than clicking little pieces of graphs to mark onset and offset points of reaching movements for hours on end I would have realized that the existence of daytime TV hosts makes this intuitively obvious.)

No longer. I’ve decided to completely change my life from this point hence, give up the clearly pointless intellectual rigour involved in trying to figure stuff out, and take a job in a large financial firm that will of course be entirely exempt from the pain being inflicted on the public sector by arrogant, libertarian-minded right-wing deficit-hawk idiots. Um, I mean the Government.

---

This article is a spoof. Any comments about Simon Jenkins that might be considered to border on the libellous totally aren’t. That’s how you do these legal disclaimers, right? Well he can sue me if he wants, I don’t own anything anyway.

Here is the article that started it all, and here is the article that inspired me to write something about it. Normal service will be resumed on Wednesday.

Also: I'm not going to make it a habit to write about politics here, but you may have gathered that I'm a bit of a lefty. Whoops, cover blown...

Friday 25 June 2010

You're only allowed one left hand

In previous posts I’ve asked how we know where our hands are and how we combine information from our senses. Today’s paper covers both of these topics, and investigates the deeper question of how we incorporate this information into our representation of the body.

Body representation essentially splits into two parts: body image and body schema. Body image is how we think about our body, how we see ourselves; disorders in body image can lead to anorexia or myriad other problems. Body schema, on the other hand, is how our brain keeps track of the body, below the conscious level, so that when we reach for a glass of water we know where we are and how far to go. There’s some fascinating work on body ownership and embodiment but you can read about that in the paper, as it’s open access!

The study is based on a manipulation of the rubber hand illusion, a very cool perceptual trick that’s simple to perform. First, find a rubber hand (newspaper inside a rubber glove works well). Second, get a toothbrush, paintbrush, or anything else that can be used to produce a stroking sensation. Third, sit your experimental participant down and stroke a finger on the rubber hand while simultaneously stroking the equivalent finger on the participant’s actual hand (make sure they can’t see it!). These strokes MUST be synchronous, i.e. applied with the same rhythm. The result, after a little while, is that the participant starts to feel like the rubber hand is actually their hand! It’s a really fun effect.

There are of course limitations of the rubber hand illusion – a fake static hand isn’t the best thing for eliciting illusions of body representation, as it’s obviously fake, no matter how much you think the hand is yours. Plus it’s hard to do movement studies with static hands. The researchers got around this problem by using a camera/projection system to record an image of their participant’s hand and playing it back in real time. They got their participants to actively stroke a toothbrush rather than having the stroking passively applied to them, and then showed two images of their hand to the left and right of the actual (unseen) hand position.

Either the left image, the right image, or both were shown being stroked in synchrony with the participant’s movements; in the first two conditions the other image was shown being stroked asynchronously, by delaying the feedback from the camera. The researchers used questionnaires to ask whether participants felt they ‘owned’ each hand. You can see these results in the figure below (Figure 3B in the paper):

Ownership rating by hand stroke condition

For the left-stroke (LS) and right-stroke (RS) conditions, only the left or right image respectively was felt to be ‘owned’ whereas in the both-stroke (BS) condition, both hands were felt to be ‘owned’. This result isn’t too surprising; it’s a nice strong replication of the rubber hand results other researchers have found. Where it gets interesting is that when participants were asked to make reaches to a target in front of them they tended to reach in the right-stroke and left-stroke conditions as if the image of the hand they felt they ‘owned’ was actually theirs. That is, they made pointing errors consistent with what you would see if their real hand had been in the location of the image.

In a final test, participants in the both-stroke condition were asked to reach to a target in the presence of distractors to its left and right. Usually people will attempt to avoid distractors, even when it’s just an image or a dot that they are moving around a screen, and the distractors are just lights. However in this case participants had no qualms about moving one of the images through the distractors to reach the target with the other, even though they claimed ‘ownership’ of both.

This last point leads to an interesting idea the authors explore in the discussion section. While it seems to be possible to incorporate two hands simultaneously into the body image, this doesn’t appear to translate to the body schema. So you might be able to imagine yourself with extra limbs, but when it comes to actively moving them, the motor system seems to pick one and go with it, ignoring the other (even when it hits an obstacle).

To my mind this is probably a consequence of the brain learning over many years how many limbs it has and how to move them efficiently, and any extra limbs it may appear to have at the moment can be effectively discounted. It is interesting to see how quickly the schema can adapt to apparent changes in a single limb however, as shown by the pointing errors in the RS and LS movement tasks.

I wonder if we were born with more limbs, would we learn gradually how to control them all over time? After all, octopuses manage it. Would we still see a hand dominance effect? (I’m not sure if octopuses show arm dominance!) And would we, when a limb was lost in an accident, still experience the ‘phantoms’ that amputees report? I haven’t touched on phantoms this post, but I’m sure I’ll return to them at some point.

Altogether a simple but interesting piece of work, which raises lots of further questions, like good science should. (Disclaimer: I know the first and third authors of this study from my time in Nottingham. That wouldn't stop me saying their work was rubbish if it was though!)

---

Newport, R., Pearce, R., & Preston, C. (2009). Fake hands in action: embodiment and control of supernumerary limbs. Experimental Brain Research. DOI: 10.1007/s00221-009-2104-y

Image copyright © 2009 Newport, Pearce & Preston

Wednesday 23 June 2010

The cost of uncertainty

Back from my girlfriend-induced hiatus and onto a really interesting paper published ahead of print in the Journal of Neurophysiology. This work asks some questions, and postulates some answers, very similar to the line of thinking I’ve been going down recently – which is, of course, the main reason I find it interesting! (The other reason is that they used parabolic flights. Very cool.)

One theory of how the brain performs complex movements in a dynamical environment – like, say, lifting objects – is known as optimal feedback control (OFC). The basic idea is that the brain makes movements that are optimized to the task constraints. For example, to lift an object, the control system might want to minimize the amount of energy used* and at the same time lift the object to a particular position. In OFC we combine these constraints into something called a cost function: how much the action ‘costs’ the system to perform. To optimize the movement, the system simply works to reduce the total cost.
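As a rough illustration (not the specific cost function used in this paper), a typical OFC-style cost adds an effort term to an accuracy term and the controller picks the commands that make the total as small as possible. The function name, weights and toy dynamics below (movement_cost, w_effort, w_accuracy) are my own invented placeholders.

```python
import numpy as np

def movement_cost(commands, positions, goal, w_effort=1.0, w_accuracy=100.0):
    # Effort term: penalise the overall size of the motor commands.
    effort = w_effort * np.sum(commands ** 2)
    # Accuracy term: penalise ending up away from the goal position.
    accuracy = w_accuracy * (positions[-1] - goal) ** 2
    return effort + accuracy

# Crude example "dynamics": position is just the running sum of the commands.
commands = np.full(10, 0.1)        # ten equal command pulses
positions = np.cumsum(commands)    # toy plant: position integrates the commands
print(movement_cost(commands, positions, goal=1.0))
```

Different movements then just correspond to different trade-offs between those two terms.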

But where does the system get information about the limb and the task from in the first place so as to optimize its control? There are two sources for knowledge about limb dynamics. The most obvious is reactive: feedback from the senses, from both vision and proprioception (the sense of where the arm is in space). But feedback takes a while to travel to the brain and so another source is needed: a predictive source of knowledge, an internal model of the task and limb dynamics. The predictive and reactive components can be combined in an optimal fashion to form an estimate of the state of the limb (i.e. where it is and how fast it’s going). This ‘state estimate’ can then be used to calculate the overall cost of the movement.
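The ‘optimal fashion’ here is essentially a Kalman-filter-style update: weight the sensory feedback more when the prediction is uncertain, and vice versa. A scalar sketch, with illustrative numbers rather than anything fitted to data:

```python
# Internal model's prediction of hand position, and its uncertainty (variance).
predicted_pos, var_prediction = 0.30, 0.02
# (Delayed) sensory measurement of hand position, and its uncertainty.
sensed_pos, var_feedback = 0.25, 0.01

# The weight on the feedback grows as the prediction becomes less reliable.
k = var_prediction / (var_prediction + var_feedback)
estimate = predicted_pos + k * (sensed_pos - predicted_pos)
var_estimate = (1 - k) * var_prediction

print(f"state estimate: {estimate:.3f} m (variance {var_estimate:.4f})")
```

The same logic is what the authors lean on later: a noisier internal prediction pushes the weighting towards feedback.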

In today’s paper the authors argue that at the start of a new task, a new internal model has to be learnt, or an old one modified, to deal with the new task demands. So far so uncontroversial. What’s new here is the claim that the cost function being optimized for actually changes when dealing with a new task – because there is higher uncertainty in the internal prediction so the system is temporarily more reliant on feedback. They have some nice data and models to back up their conclusion.

The task was simple: participants had to grip a block and move it up or down from a central position while their position and grip force were recorded. After they’d learnt the task in normal gravity, they had to perform it in microgravity during a parabolic flight, which essentially made their arm and the object weightless. Their grip force increased markedly even though the object was now weightless, and kinematic (e.g. position, velocity) measures changed too: movements took more time, and the peak acceleration was lower. Over the course of several trials the grip force decreased again as participants learnt the task. You can see some representative kinematic data in the figure below (Figure 4 in the paper):

Kinematic data from a single participant


Panels A-D show the average movement trace of one participant in normal (1 g) and microgravity (0 g) conditions, while panels E and F show the changes in acceleration and movement time respectively. The authors argue that the grip force changes over the first few trials point towards uncertainty in the internal prediction, which results in the altered kinematics.

To test this idea, they ran a simulation based on a single-joint model of the limb using OFC and the optimal combination of information from the predictive system and sensory feedback. What they varied in this model was the noise, and thus the reliability, in the predictive system. The idea was that as the prediction became less reliable, the kinematics should change to reflect more dependence on the sensory feedback. But that's not quite what happened, as you can see from the figure below (Figure 8 in the paper):

Data and simulation results


Here the graphs show various kinematic parameters. In black and grey are the mean data points from all the participants for the upward and downward movements. The red squares show the parameters the simulation came up with when noise was injected into the prediction. As you can see, they're pretty far off! So what was the problem? Well, it seems that you need to change not only the uncertainty of the prediction but also the cost function that is being optimized. The blue diamonds show what happens when you manipulate the cost function (by increasing the parameter shown as alpha); suddenly the kinematics are much closer to the way people actually perform.

Thus, the conclusion is that when you have uncertainty in your predictive system, you actually change your cost function while you're learning a new internal model. I find this really interesting because it's a good piece of evidence that uncertainty in the predictive system feeds into the selection of a new cost function for a movement, rather than the motor system just sticking with the old cost function and continuing to bash away.

It's a nice paper but I do wonder, why did the authors go to all the trouble of using parabolic flights to get the data here? If what they're saying is true and any uncertainty in the internal model/predictive system is enough to make you change your cost function, this experiment could have been done much more simply – and for much longer than the 30 trials they were able to do under microgravity – by just using a robotic system. Perhaps they didn't have access to one, but even so it seems a bit of overkill to spend money on parabolic flights which are so limited in duration.

Overall though it's a really fun paper with some interesting and thought-provoking conclusions.

*To be precise there is some evidence that it's not the amount of energy used that gets minimized, but the size of the motor command itself (because a bigger command has more variability due to something called signal-dependent noise... I'm not going to go into that though!).

---

Crevecoeur, F., McIntyre, J., Thonnard, J., & Lefevre, P. (2010). Movement Stability under Uncertain Internal Models of Dynamics. Journal of Neurophysiology. DOI: 10.1152/jn.00315.2010

Images copyright © 2010 The American Physiological Society

Apologies

My apologies for not updating at the end of last week and the beginning of this week. I do however have a very good excuse: my girlfriend came to visit me from England, so rather than scouring the literature for interesting papers and writing pithy blog posts about them I spent the time eating in restaurants and going sailing. And suchlike.

Normal service will be resumed later today!

Wednesday 16 June 2010

Where you look affects your judgement

Our ability to successfully interact with the environment is key to our survival. Much of my work involves figuring out how the brain sends the correct commands to the upper limb that allow us to control it and reach for objects around us. Considering how complex the musculature of the arm is, and how ever-changing the world around us is, this is a non-trivial task. One fundamental question that needs to be solved by the brain’s control system is: how do you know where something is relative to your hand?

It’s no good sending a complex set of commands to reach for an object if you don’t know how to relate where your hand is right now to where the object is. There are several theories as to how the brain might perform this task. In one theory, the object’s location on the retina is translated into body-centred coordinates (i.e. where it is relative to the centre of the body) by adding the eye position and the head position sequentially. In another, the object is stored in a gaze-centred reference frame that has to be recalculated after every eye movement.
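As a deliberately simplified, one-dimensional illustration of the first account (real reference-frame transformations are three-dimensional and considerably messier), the body-centred angle is just the sum of the retinal angle, the eye-in-head angle and the head-on-body angle; all the numbers are invented.

```python
def retinal_to_body(retinal_angle, eye_in_head, head_on_body):
    # Sequentially add eye and head positions to the retinal location (degrees).
    return retinal_angle + eye_in_head + head_on_body

# Target 5 deg right on the retina, eyes rotated 15 deg left, head straight ahead:
print(retinal_to_body(5.0, -15.0, 0.0))   # -> -10.0, i.e. 10 deg left of the body midline
```

In the gaze-centred account, by contrast, the stored target location itself has to be remapped every time the eyes move.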

There’s already some evidence for the second account – we tend to overestimate how far into our peripheral vision a target sits, and so we make pointing errors when asked to reach to where we thought it was. So it seems as if we dynamically update our estimate of where a target is when we are asked to make active movements towards it. In this paper the researchers were interested in whether this is also true for perceptual estimates. That is, when you are simply asked to state the position of a remembered target, does that judgement also depend on gaze shifts?

To answer this question, the authors performed an experiment with two different kinds of targets: visual and proprioceptive. (If you’ve been paying attention, you’ll know that proprioception is the sense of where your body is in space.) The visual target was just an LED set out in front of the participant; the proprioceptive target was the participant’s own unseen hand moved through space by a robot. Before the target appeared, participants were asked to look at an LED either straight in front of them, or 15˚ to the left or right. The targets would then appear (or the hand would be moved to the target location), disappear (or the hand would be moved back), and then the participant’s hand would be moved out again to a comparison location. They then had to judge whether their current hand location was to the left or right of the remembered target.

Here’s where it gets interesting. Participants were placed into one of two conditions: static or dynamic. In the static condition, participants kept their gaze fixed on an LED to the left, to the right or straight ahead of their body midline. In the dynamic condition, they gazed straight ahead and were asked to move their eyes to the left or right LED after the target had disappeared. In a gaze-dependent system, this should introduce errors as the target location relative to the hand would be updated relative to gaze after the eye movement. In a gaze-independent system, no errors should be evident as the target position was already calculated before the eye movement.

Bias in judgements of visual and proprioceptive targets

The figure above (Figure 4a and 4b in the paper) shows the basic results. Grey is the right fixation while black is the left fixation; circles show the static condition while crosses show the dynamic condition. You can immediately see that in both conditions, for both targets, participants made estimation errors in the opposite direction to their gaze: errors to the left for right gaze, and errors to the right for left gaze. So it does look like perceptual judgements are coded and updated in a gaze-centred reference frame. To hammer home their point, the next figure (Figure 5 in the paper) shows the similarity between the judgements in the static and dynamic conditions:

Static vs. dynamic bias

As you can see, the individual judgements match up very closely indeed, which gives even more weight to the gaze-centred account.

So what does this mean? Well: it means that whenever you move your eyes, whether you are planning an action or not, your brain’s estimation of where objects are in space relative to your limbs is remapped. The reason that the errors this generates don’t affect your everyday life is that usually when you want to reach for an object you will look directly at it anyway, which eliminates the problems of estimating the position of objects on the periphery of your vision.

I enjoyed reading this paper – and there is much more in there about how the findings relate to other work in the literature – but it was a bit wordy and hard to get through at times. One of the most difficult things about writing, I’ve found, is to try and maintain the balance between being concise and containing enough information so that the result isn’t distorted. Time will tell how I manage that on this blog!

---

Fiehler, K., Rösler, F., & Henriques, D. (2010). Interaction between gaze and visual and proprioceptive position judgements. Experimental Brain Research, 203 (3), 485-498. DOI: 10.1007/s00221-010-2251-1

Images copyright © 2010 Springer-Verlag

Monday 14 June 2010

How the brain controls the non-body

When asked about my research area by people in the pub (this happens probably more than it should, most likely due to the disproportionate amount of time I spend there) I usually reply that I work on motor control, or ‘how the brain controls the body’. Today’s paper by Ganguly and colleagues looks at how the brain can control things without a body. There are some very cool results here.

The field of neuroprosthetics, or the control of prosthetic devices by brain activity, is a rapidly emerging one. It’s been previously shown that monkeys and humans can learn to control an on-screen cursor through the power of the mind alone, as their brain activity is either directly or indirectly measured and fed through a decoder that transforms the activity into cursor movement. One question we don’t yet know the answer to is: what is the nature of the neural activity that is decoded?

Ganguly et al. set out to answer this question. They trained two monkeys on a reach-to-targets task while recording the activity of a number of neurons. They found that a small group of these neurons were very stable across several days to weeks – that is, each individual neuron fired in the same way on day 19 as it did on day 1, for example. The researchers then used these stable neurons as inputs to their decoder. The figure below (Figure 2 in the paper) shows how the monkeys improved with time:




Improvement over time

It’s quite a complicated figure, but pretty straightforward to work out. Panel A shows the error rate (top) and time to reach the target (bottom) for one of the monkeys (the other monkey’s data are shown in the red inset). You can see that they get better and faster at hitting the target. Panel B shows that the monkeys get better as they make more reaches in a session on days 2 and 4, but by day 6 their performance has basically reached a plateau. Panel C shows performance in the first 5 minutes of the task – note that after a few days the error rate drops significantly. Panel D shows the actual cursor movements the monkeys are making, and how those movements get more correlated (i.e. more similar to one another) as time goes on.

The researchers also note that stable task performance is associated with the stabilization of the neural tuning properties – as the preferred directions of the neurons (the directions in which they fire the most) get more similar across days, performance increases.
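To give a flavour of why preferred directions matter for decoding, here is the classic cosine-tuning and population-vector picture in code. This is an illustration of the general principle, not the decoder Ganguly and Carmena actually used, and all the numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 50
preferred = rng.uniform(0, 2 * np.pi, n_neurons)   # each neuron's preferred direction

def firing_rates(movement_dir, baseline=10.0, gain=8.0):
    # Cosine tuning: a neuron fires most when the movement matches its preferred direction.
    return baseline + gain * np.cos(movement_dir - preferred)

intended = np.deg2rad(60.0)
rates = firing_rates(intended)

# Population vector: sum the preferred directions, each weighted by that neuron's
# (approximately baseline-subtracted) firing rate.
weights = rates - rates.mean()
decoded = np.arctan2(np.sum(weights * np.sin(preferred)),
                     np.sum(weights * np.cos(preferred)))
print(f"intended {np.rad2deg(intended):.0f} deg, decoded {np.rad2deg(decoded):.1f} deg")
```

If the neurons’ preferred directions (or their stand-ins in a fitted decoder) drift from day to day, a fixed decoder like this stops working, which is why the stability of the tuning properties matters so much here.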

So this is pretty interesting – a stable subset of neurons can be used to control an onscreen cursor. Where I think it gets cooler is in some of the other questions they ask about this stable neural population. What happens if you remove certain neurons from the subset being fed into the decoder – that is, is the whole group being used to control the cursor, or just a subset of the subset? Here’s your answer (Figure 5 in the paper):

Performance relative to number of neurons dropped

Yup, performance drops significantly as you remove neurons. But, crucially, it isn’t destroyed with the loss of one or two, meaning that if a neuron dies, for example, it’s not going to radically change your performance. And since the brain is always learning and changing, other neurons are likely recruited to fill the gap.

There’s more in here, but two of the most interesting findings are that the monkeys could learn new decoders alongside old ones and recall them rapidly when necessary, and that the stable activity patterns emerge very early in each trial. This is awesome because it parallels something I’m interested in for my own research: the idea of internal models in the brain that are used for certain tasks and can be switched between when necessary. So the same set of neurons can theoretically be used to perform different tasks, as long as the internal model ‘knows’ how to interpret their firing for each task.

As I said, incredibly cool stuff, and it means that we are another step closer to understanding how brain activity controls prosthetic limbs – and, of course, our real, natural limbs as well.

By the way, the paper is open access, so you can read it yourself via the link at the bottom even if you don’t have an institutional subscription.

---

Ganguly, K., & Carmena, J. (2009). Emergence of a Stable Cortical Map for Neuroprosthetic Control. PLoS Biology, 7 (7). DOI: 10.1371/journal.pbio.1000153

Images copyright © 2009 Ganguly & Carmena

Friday 11 June 2010

Moving generally onward

Think of a pianist learning how to play a sequence of chords on the piano in one position, and then playing the same sequence of chords three octaves higher. Her arms and hands will be in different positions relative to her trunk, but she’ll still be able to play the same notes. We call this ability to transfer learnt motor skills from one part of the workspace to another generalization.

In today’s paper, the authors investigated how generalization works when you are learning two things at the same time, in different areas of space. The manipulation they chose was amplitude gain: participants reached to a target in a particular direction while the visual feedback was scaled to increase or reduce the gain. So, for example, with a gain of 1.5 participants would have to reach 1.5 times further than normal to hit the target, and with a gain of 0.5 they would have to reach half as far as normal.
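In code, using the convention in the paragraph above (a gain of 1.5 means you must reach 1.5 times further than normal), the required hand movement is just the normal target distance scaled by the gain; the numbers are illustrative.

```python
def required_hand_distance(target_distance, gain):
    # Gain convention from the text: the hand must travel gain times the normal distance.
    return gain * target_distance

target = 10.0  # cm, the normal reach distance to the target
for gain in (1.5, 0.8):
    print(f"gain {gain}: hand must move {required_hand_distance(target, gain):.0f} cm")
```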

The researchers trained their participants on two gains (1.5 and 0.8) simultaneously for two different targets, and then tested how the reaches generalized to some untrained targets:


Trained and untrained targets


The thick circles in the figure show the trained targets and the thin circles show the untrained targets. How the participants reached to the untrained targets after training on the trained targets can be used as a measure of how well they generalized their movements.

One obvious problem with generalization when learning two things at once is that the two generalization patterns might conflict, and prevent you from learning one of the gains at all. But that isn’t what happened: the participants quite happily learnt both gains, and their generalization varied smoothly based on distance from the training directions. The result is illustrated by this rather complex-looking graph:


Generalization based on target direction


Don’t be put off though. Just look at the thick black trace, which is the average of all the other black traces. Along the x-axis of the graph is direction in degrees, and along the y-axis is the observed gain, i.e. how far participants reached to the target at that particular position. You can see that at the trained targets at 60˚ (gain 0.8) and 210˚ (gain 1.5) the observed gain is close to the training gain, and as I said above, it varies smoothly between the two as you look at the different untrained targets.

So it’s possible to learn two gains at once, and the amount you generalize varies across the workspace in a smooth way. But scientists aren’t scientists if they’re satisfied with a simple answer. They wanted to know: why’s that? What’s the best model that explains the data, and that is consistent with what we know about the brain? The authors proposed five possible models, but the one they found fit the data best was a relative spatial weighting model.

The idea behind this model is fairly simple. We can quite easily measure the generalization pattern produced by training on a single gain; this model then combines the two single-gain patterns, weighting each according to the relative distance of the reach direction from the two training directions.
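Here is a toy version of that weighting scheme, not the exact model fitted in the paper: the predicted gain at any test direction is a mixture of the two trained gains, with the nearer training direction getting the larger weight. The training directions and gains come from the figure description above; the weighting function itself is my own simplification.

```python
import numpy as np

train_dirs = np.deg2rad([60.0, 210.0])   # trained target directions
train_gains = np.array([0.8, 1.5])       # gain learnt at each trained direction

def angular_distance(a, b):
    # Smallest unsigned angle between two directions, in radians.
    return np.abs(np.angle(np.exp(1j * (a - b))))

def predicted_gain(test_dir):
    d = np.array([angular_distance(test_dir, t) for t in train_dirs])
    w = (d.sum() - d) / d.sum()          # the closer training direction gets more weight
    return np.sum(w * train_gains)

for deg in (60, 100, 135, 170, 210):
    print(f"{deg:3d} deg -> predicted gain {predicted_gain(np.deg2rad(deg)):.2f}")
```

At the trained directions the prediction equals the trained gain, and in between it interpolates smoothly, which is roughly the shape of the thick black trace in the figure.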

What does this mean? Well: it gives credence to the idea that the motor system adapts to differing visuomotor gains using something called a ‘mixture-of-experts’ system. Each ‘expert’ module learns one of the gains, and their outputs are then combined and weighted based on an easily-assessed property of the workspace (in this case, the angular distance of the reach direction from each training target). This modular idea of how the brain works has grown in popularity in the last decade, and this paper is the latest to demonstrate that there appear to be distinct systems that learn to be extremely good at one thing and are then combined and weighted together to deal with complex tasks.

That’s it for this week! Today’s post was under 700 words, which beats the first (~950) and the second (~1150!). I’m going to try to keep them shorter rather than longer, but I could do with some feedback on my writing. Comments very welcome.

---

Pearson, T., Krakauer, J., & Mazzoni, P. (2010). Learning Not to Generalize: Modular Adaptation of Visuomotor Gain. Journal of Neurophysiology, 103 (6), 2938-2952. DOI: 10.1152/jn.01089.2009

Images copyright © 2010 The American Physiological Society

Tuesday 8 June 2010

Mood, music and movement

We all know that music can have an effect on our mood (or, to use a mildly annoying linguistic contrivance, can affect our affect). And being in a better mood has been consistently shown to improve our performance on cognitive tasks, like verbal reasoning; the influence of serene music on such tasks is also known as the 'Mozart effect'. What's kind of interesting is that this Mozart effect has also been shown for motor tasks, like complex manual tracking.

In the last post I talked a bit about motor adaptation - recalibrating the mapping between two sensory sources so that their estimates match up again. Say you're reaching to a target under distorted vision, like wearing goggles with prisms in them that make it look like you're reaching further to the right than you actually are; this is known as a visual perturbation. When you reach forward, the sense of where you are in space (proprioception) sends signals to the brain telling you your arm's gone forward. However, the visual information you receive tells you you've gone to the right. Some recalibration is in order, and over the course of many reaches you gradually adapt your movements to match the two percepts up.

There are a couple of stages in motor adaptation. The first stage is very cognitive, when you realise something's wrong and you rapidly change your reaches to reduce the perceived error in your movement. The second stage is much less consciously directed, and involves learning to control your arm with the new signals you are receiving from vision and proprioception. When the prism goggles are removed, you experience what is known as a motor aftereffect: you will now be reaching leftwards, the opposite of what appeared to happen when you were originally given the prisms. Over the course of a few trials this aftereffect will decay as the brain shifts back to the old relationship between vision and proprioception.
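A standard way this whole time course (adaptation, then the aftereffect and its decay) gets modelled in the literature is a single-state, trial-by-trial update. The sketch below is that generic model with made-up retention and learning rates and a hypothetical 10° prism shift; it is not an analysis from Bock's paper.

```python
A, B = 0.98, 0.15          # retention and learning rates (hypothetical values)
prism_shift = 10.0         # degrees of visual shift while the prisms are worn

x = 0.0                    # current compensation, in degrees
errors = []
for trial in range(120):
    perturbation = prism_shift if trial < 80 else 0.0   # prisms removed at trial 80
    error = perturbation - x                            # what the participant experiences
    errors.append(error)
    x = A * x + B * error                               # retain most of x, learn from error

print(f"error on first prism trial: {errors[0]:.0f} deg")
print(f"error just before prism removal: {errors[79]:.1f} deg")
print(f"aftereffect on first washout trial: {errors[80]:.1f} deg")
```

The same machinery produces the fast initial drop in error and the opposite-signed aftereffect that then decays; the question in this paper is which parts of that process mood can touch.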

All this is very interesting (to me at least!) but what does it have to do with music? Well, today's paper by Otmar Bock looks more closely at how the Mozart effect affects motor systems by studying the influence of music on motor adaptation. The theory goes that if an improved mood can enhance cognitive performance, then the first, cognitive phase of motor adaptation should be facilitated. However, since motor aftereffects reflect not a conscious cognitive strategy but an unconscious motor recalibration, they should not be affected by the change in mood.

To test this idea, Bock split the participants into three groups and played each group either serene, neutral* or sad music at the beginning of and throughout the experiment. Before listening to the music, after listening for a while, and at the end of the study, participants indicated their mood by marking a sheet of paper. While listening to the music, they performed a motor adaptation task: they had to move a cursor to an on-screen target while the visual feedback of the cursor was rotated by 60º. They couldn't see their hand while they did this, so their visual and proprioceptive signals gave different information.

As expected, the music participants listened to affected their mood: the 'sad' group reported a lower emotional valence, i.e. more negative emotions, than the 'neutral' group, which in turn reported a lower emotional valence than the 'serene' group. During the task, as generally happens in these adaptation tasks where the goal is visual (and of course vision is more reliable!), participants adapted their movements so as to reduce the visual error. The figure below (Figure 2 in the paper) shows this process for the three separate groups, where light grey shows the 'serene' group, mid grey the 'neutral' group and dark grey the 'sad' group:


Adaptation error by group

The first three episodes in the figure show the reaching error during normal unrotated trials (the baseline phase), then from episode 4 onwards the cursor is rotated, sending the error up high (the adaptation phase). The error then decreases for all three groups until episode 29, where the rotation is removed again - and now the error is reversed as participants reach the wrong way (the aftereffect phase). What's cool about this figure is that it shows no differences at all for the 'neutral' and 'sad' groups but there is an obvious difference in the 'serene' group: adaptation is faster for this group than the others. Also, when the rotation is removed, the aftereffects show no differences between the three groups.

So it does seem that being in a state of high emotional valence (a good mood) can improve performance on the cognitive stage of motor adaptation - and it seems that 'serene' music can get you there. And interestingly, mood appears to have no effect on the less cognitive aftereffect stage (though see below for my comments on this).

The two main, connected questions I have about these results from a neuroscience point of view are: 1. how does music affect mood? and 2. how does mood affect cognitive performance? A discussion of how music affects the brain is beyond the scope of this post (and my current understanding) but since the brain is a collection of neurons firing together in synchronous patterns it makes sense that this firing can be regulated by coordinated sensory input like music. Perhaps serene music makes the patterns fire more efficiently, and sad music depresses the coordination somewhat. I'm not sure, but if the answer is something like this then I'd like to know more.

There are still a couple of issues with the study though. Here are the data on emotional valence (Figure 1A in the paper):


Emotional valence by group at three different stages

What you can see here is that the emotional valence was the same before (baseline) and after (final) the study, and it's only after listening to the music for a while (initial) that the changes in mood are apparent. Does this mean, then, that as participants continued with the task their mood levelled out, perhaps as they concentrated more on the task, regardless of the background music? Could this be the reason for the lack of difference in the aftereffect phase? After all, when a perturbation is removed participants quickly notice something has changed, and I would have thought that the cognitive processes would swing into gear again, as at the beginning of the adaptation phase.

Also, it's worth noting from the above figure that valence is not actually improved by serene music, but appears to decrease for neutral and sad music. So perhaps it is not that serene music makes us better at adapting, but that neutral/sad music makes us worse? There are more questions than answers in these data I feel.

Hmm. This was meant to be a shorter post than the previous one, but I'm not sure it is! Need to work on being concise, I feel...

*I'm not exactly sure what the neutral sound effect was as there's no link, but Bock states in the paper that it is "movie trailer sound 'coffeeshop' from the digital collection Designer Sound FX®"

---

Bock, O. (2010). Sensorimotor adaptation is influenced by background music. Experimental Brain Research, 203 (4), 737-741. DOI: 10.1007/s00221-010-2289-0

Images copyright © 2010 Springer-Verlag

Friday 4 June 2010

Visual dominance is an unreliable hypothesis

How do we integrate our disparate senses into a coherent view of the world? We obtain information from many different sensory modalities simultaneously - sight, hearing, touch, etc. - and we use these cues to form a percept of the world around us. But what isn't well known yet is exactly how the brain accomplishes this non-trivial task.

For example, what happens if the information from two senses gives differing results? How do you adapt and recalibrate your senses so that the information you get from one (say, the visual slant of a surface) matches up with the information from the other (the felt slant of the same surface)? In this paper, the investigators set out to answer this question by examining something called the visual dominance hypothesis.

The basic idea is that since we are so over-reliant on vision, it will take priority whenever something else conflicts with it. That is, if you get visual information alongside tactile (touch) information, you will tend to adapt your tactile sense rather than your vision to make the two match up. But here the authors present data and argue in favour of a different hypothesis: reliability-based adaptation, where the sensory modality with the lowest reliability will adapt the most. Thus in low-visibility situations, you become more reliant on touch, and vice-versa.

Two experiments are described in this paper: a cue-combination experiment and a cue-calibration experiment. The combination experiment measured the reliability of the sensory estimators, i.e. vision and touch. The calibration experiment was designed using the estimates from the combination study to test whether the visual dominance or reliability hypotheses best explained how the sensory system adapts.

In the combination experiment, participants had to reach out and touch a virtual slanted block in front of them, and then say whether they thought it was slanted towards or away from them. They received visual feedback, haptic feedback or both (i.e. they could see the object, touch it, or both). The cool thing about the setup is that visual reliability could be varied independently of haptic reliability, which enabled the experimenters to find a suitable visual-haptic reliability ratio for each participant for use in the calibration experiment. They settled on parameters that set the visual-to-haptic reliability ratio at 3:1 or 1:3, so either vision was three times as reliable as touch or the other way round.
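For reference, the standard way such reliabilities enter a combined percept is minimum-variance (reliability-weighted) cue combination; the sketch below shows how a 3:1 or 1:3 ratio translates into weights. The slant values are invented, and this is the textbook model rather than anything specific to this paper's analysis.

```python
def combine(visual_slant, haptic_slant, r_visual, r_haptic):
    # Reliability = 1 / variance; each cue is weighted by its relative reliability.
    w_visual = r_visual / (r_visual + r_haptic)
    w_haptic = r_haptic / (r_visual + r_haptic)
    return w_visual * visual_slant + w_haptic * haptic_slant

# 3:1 visual-to-haptic reliability: the combined percept sits close to vision.
print(combine(visual_slant=4.0, haptic_slant=0.0, r_visual=3.0, r_haptic=1.0))  # 3.0 deg
# 1:3 ratio: the combined percept sits close to touch.
print(combine(visual_slant=4.0, haptic_slant=0.0, r_visual=1.0, r_haptic=3.0))  # 1.0 deg
```

The calibration question is then which cue gets dragged around when the two disagree over many trials, which is where the two hypotheses make different predictions.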

Following this they tested their participants in the calibration study, which involved changing the discrepancy between the visual and haptic slants over a series of trials, using either high (3:1) or low (1:3) visual reliability. You can see the results in the figure below (Figure 4A in the paper):


Reliability-based vs. visual-dominance hypothesis

The magenta circles show the adaptation in the 3:1 case, while the purple squares show adaptation in the 1:3 case. The magenta and purple dotted lines show the prediction of the reliability-based adaptation hypothesis (i.e. that the less reliable estimator will adapt more), while the black dotted line shows the prediction of the visual dominance hypothesis (i.e. that vision will never adapt). It's a nice demonstration: the data robustly support reliability-based adaptation, while the visual dominance hypothesis isn't supported at all.

For me, it's actually not too surprising to read this result. There have been several papers that have shown reliability-based adaptation in vision and in other modalities, but the authors do a good job of showing why their paper is different: partly because purely sensory responses are used to avoid contamination with motor adaptation, and partly because this is the first time that reliabilities have been explicitly measured and used to investigate sensory recalibration.

One thing I wonder about though is the variability in the graph above. For the 3:1 ratio (high visual reliability) the variability of responses is much lower than for the 1:3 ratio (low visual reliability). Since the entire point of the combination experiment was to determine the relative reliabilities of the different modalities for the calibration experiment, I would have expected the variability to be the same in both cases. As it is it looks a bit like vision is inherently more reliable than touch, even when the differences in reliability are supposedly taken into account. Maybe I'm wrong about this though, in which case I'd appreciate someone putting me right!

The authors also model the recalibration process, but I'm not going to go into that in detail; suffice it to say that they found the reliability-based prediction holds very well as long as the estimators don't drift too much with respect to the measurement noise (i.e. the reliability of the estimator). If the drift is very large, the prediction tends to follow the drift instead of the reliability. I think a nice empirical follow-up would be a similar study that takes drift into account - proprioceptive drift is a well-known phenomenon that occurs, for example, when you don't move your hand for a while and your perception of its location 'drifts' over time.

Anyway, generally speaking this is a cool paper and I quite enjoyed reading it. That's my first of three posts this week - I'll have another one up in a day or two. I know this one was a bit long, and I'll try to make subsequent posts a bit shorter! Questions, comments etc. are very welcome, especially on topics like readability. I want this blog to look at the science in depth but also to be fairly accessible to the interested lay audience. That way I can improve my writing and communication skills while also keeping up with the literature. Win-win.

---

Burge, J., Girshick, A., & Banks, M. (2010). Visual-Haptic Adaptation Is Determined by Relative Reliability. Journal of Neuroscience, 30 (22), 7714-7721. DOI: 10.1523/JNEUROSCI.6427-09.2010

Image copyright © 2010 by the Society for Neuroscience

Thursday 3 June 2010

Welcome

Hello all and sundry - probably not many people, but never mind.

This blog is my attempt to reengage with the scientific literature in my field (motor control and computational neuroscience) by making myself read at least 3 papers a week and write about them here - critically, if possible. Most of my explanations will probably be quite technical, though I will try to be as clear as I can for the benefit of any interested laypeople.

I haven't decided yet how long I'll aim for - maybe 500 words for each? Is that too long? Too short? I guess it depends on the paper. I mean, I've written papers myself that were originally 2500 words. So I'll play it by ear I think.

That's pretty much it. Look for about three updates a week - comments and questions very welcome!