Showing posts with label adaptation. Show all posts

Monday, 23 August 2010

Learning without thinking

Scratching around on the internet this afternoon, on my first day back from holiday, I was reluctant to dive straight back into taking papers apart. After all, I've spent the majority of the last three weeks drinking beer and eating pies in the UK, and the increase in my waistline has most likely been mirrored by a decrease in my critical faculties (as happens when you spend time away from the cutting edge). However, I ran across a really cool little article that reminded me just why I enjoy all this motor control stuff. So here goes nothing!

There’s been some work in recent years on the differences between implicit and explicit motor learning – that is, the kind of learning the brain does by itself, relying on cues from the environment, vs. using a well-defined strategy to perform a task. For example, learning to carry a full glass of water without spilling by just doing it and getting it wrong a lot until you implicitly work out how, or by explicitly telling yourself, “Ok, I’m going to try to keep the water as level as possible.” A fun little study on this was performed by Mazzoni and Krakauer (2006), who showed that giving their participants an explicit strategy in a visuomotor rotation task (reaching to a target where the reach is rotated) actually hurt their performance. Essentially, they started off performing the task well using the explicit strategy, which was something like ‘aim for the target to the left of the one you need to hit’. However, as the task went on, the implicit system doggedly learned the rotation – and conflicted with the explicit strategy – so that the participants were making more errors at the end than at the beginning.
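
The drift is easy to reproduce with a toy state-space model. This is my own sketch, not the authors' analysis, and the learning rate is an assumption: the explicit aim cancels the rotation at first, but the implicit system adapts to the rotation it experiences at the aim point and pushes the hand steadily off target.

```python
# Toy model of implicit drift under an explicit strategy (illustrative only).
ROTATION = 45.0   # imposed visuomotor rotation (degrees)
AIM = -ROTATION   # explicit strategy: aim at the neighbouring target
ETA = 0.1         # implicit learning rate (assumed, not from the paper)

def simulate(n_trials):
    implicit = 0.0
    errors = []
    for _ in range(n_trials):
        # cursor error at the target = aim + implicit adaptation + rotation;
        # with AIM = -ROTATION this starts at zero
        target_error = AIM + implicit + ROTATION
        errors.append(target_error)
        # the implicit system adapts toward cancelling the rotation it sees
        # at the aim point, regardless of the strategy -- hence the drift
        implicit += ETA * (-ROTATION - implicit)
    return errors

errors = simulate(80)
print(f"first-trial error: {errors[0]:.1f} deg, last-trial error: {errors[-1]:.1f} deg")
```

On the first trial the strategy works perfectly; by the end the implicit system has learned the rotation and the error has grown to nearly the full rotation magnitude, just as in the participants' data.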

The paper I’m looking at today follows up on this result. Implicit error-based learning is thought to be the province of the cerebellum, the primitive, walnut-shaped bit at the back of the brain. The researchers hit upon the idea that if the cerebellum is important for implicit learning, then perhaps patients with cerebellar impairments would actually find it easier to perform the task relative to healthy control participants. To test this, they told both sets of participants to use an explicit strategy in a visuomotor rotation task, just like in the previous study, and measured their ‘drift’ from the ideal reaching movement.

Below you can see the results (Figure 2A in the paper):

Target error across movements

Open circles are all control participants, whereas filled circles are all patients. The black circles at the start show baseline performance – both groups performed pretty well and similarly. Red circles show the first couple of movements after the rotation was applied, and before participants were told to use the strategy. You can see that the participants are reaching completely the wrong way. The blue section shows reaching while using the strategy. Here’s the nice bit: the cerebellar patients are doing better than the controls, as their error is closer to zero, whereas the controls are steadily drifting away from the intended target. Magenta shows when the participants are asked to stop using the strategy and the final cyan markers show the ‘washout’ phase as both groups get back to baseline without an imposed rotation – though the patients manage much more quickly than the controls.

So it looks very much like the cerebellar patients, because their cerebellums are impaired at implicit learning, are able to perform this task better than healthy people. What’s kind of interesting is that other research has shown that cerebellar patients aren’t very good at forming explicit strategies on their own, which is something that healthy people do without even thinking about it. The tentative conclusion of the researchers is that it’s not so much that the implicit and explicit systems are completely separate, but that the implicit system can inform the development of explicit strategies – which is impaired if the cerebellum isn’t working properly.

I didn’t like everything in this paper. I was particularly frustrated with the methods section, where it was unclear whether the images shown to participants were on a screen in front of them or whether the screen was placed over the workspace in a virtual-reality setup. There was also a sentence claiming that the cerebellar patients’ performance was ‘less’ than the controls’, when in fact it was better. Other than these minor niggles, though, it’s a really nice paper showing a very cool effect.

--

Taylor JA, Klemfuss NM, & Ivry RB (2010). An explicit strategy prevails when the cerebellum fails to compute movement errors. Cerebellum. PMID: 20697860

Images copyright © 2010 Taylor, Klemfuss & Ivry

Tuesday, 27 July 2010

The noisy brain

Noise is a funny word. When we think of it in the context of everyday life, we tend to focus on distracting background sounds. Distracting from what? Usually whatever we’re doing at the time, whether it’s having a conversation or watching TV. In most cases, what we’re trying to do is interpret some signal – like speech – that’s corrupted by background noise. Neurons in the brain have also often been thought of as sending signals corrupted by noise, which seems to make intuitive sense. But that’s not quite the whole story.

The very basics: neurons ‘fire’ and send signals to one another in the form of action potentials, which can be recorded as ‘spikes’ in their voltage. So when a neuron fires, we call that a spike. The spiking activity of neurons is inherently variable, i.e. neurons won’t fire in exactly the same way in the same situation each time, probably due to confounding influences both internal (metabolic) and external (like sensory information and movement). In other words, the signal is transmitted with some background ‘noise’. What’s kind of interesting about this paper (and others) is that variability in the neural system is starting to be thought of as part of the signal itself, rather than an inherently corrupting influence on it.

Today we delve back into the depths of neural recording with a study that investigates trial-to-trial variability during motor learning. That is: how does the variability of neurons change as learning progresses, and what can this tell us about the neural mechanisms? This paper gets a bit technical, so hang on to your hats.

One important measure used in the paper is the Fano factor. The variability in neuronal spiking depends on the underlying spiking rate, i.e. as the amount of spiking increases, so does the variability; this is known as signal-dependent noise. This means we can’t just look at the raw variability in the spiking activity – we have to normalise it by the average spiking activity. The Fano factor (FF) does precisely this: it’s the variance of the spike count across trials divided by the mean spike count. It’s basically just another way of saying ‘variability’ – I mention it only because it’s necessary to understand the results of the experiment!
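
For the curious, the Fano factor takes about two lines to compute. Here's a quick sketch – the spike counts below are made up for illustration:

```python
from statistics import mean, variance

def fano_factor(spike_counts):
    """Variance-to-mean ratio of spike counts across repeated trials."""
    return variance(spike_counts) / mean(spike_counts)

poisson_like = [5, 12, 8, 15, 10, 10]   # variance roughly equals mean -> FF near 1
regular      = [10, 10, 11, 10, 10, 9]  # very reliable firing -> FF well below 1
print(fano_factor(poisson_like), fano_factor(regular))
```

A Poisson process has FF = 1, so values above or below 1 tell you whether a neuron is more or less variable than chance firing at that rate would predict.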

Ok, enough rambling. What did the researchers do? They trained a couple of monkeys on a reaching task where they had to learn a 90° visual rotation, i.e. they had to learn to reach to the right to hit a target in front of them. While the monkeys learned, their brain activity was recorded and its variability was analysed in two time periods: before the movement (‘preparatory activity’) and around movement onset (‘movement-related activity’). Neurons were recorded from the primary motor cortex, which is responsible for sending motor commands to the muscles, and the supplementary motor area, which is a pre-motor area. In the figure below, you can see some results from motor cortex (Figure 2 A-C in the paper):

Neural variability and error over time

Panel B shows the learning rate of monkeys W (black) and X (grey) – as the task goes on, the error decreases, as expected. Note that monkey W is a faster learner than monkey X. Now look at panel A. You can see that in the preparatory time period (left) variability increases as the errors reduce for each monkey – it happens first in monkey W and then in monkey X. In the movement-related time period (right) there’s no increase in variability. Panel C just shows the overall difference in variability in motor cortex on the opposite (contralateral) side vs. the same (ipsilateral) side: the limb is controlled by the contralateral side, so it’s unsurprising that there’s more variability over there.

Another question the researchers asked was which kinds of cells showed the greatest variability. In primary motor cortex, cells tend to have a preferred direction – i.e. they will fire more when the monkey reaches to a target in that direction than in other directions. The figure below (Figure 5 in the paper) shows the results:

Variability with neural tuning

For both monkeys, it was only the directionally tuned cells that showed the increase in variability (panel A). You can see this even more clearly in panel B, where they aligned the monkeys’ learning phases to look at all the cells together. So it seems that it is primarily the cells that fire more in a particular direction that show the learning-related increase in variability. And panel C shows that it’s cells that have a preferred direction closest to the required movement direction that show the modulation.

(It’s worth noting that on the right of panels B and C is the spike count – the tuned cells have a higher spike count than the untuned cells, but the researchers show in further analyses that this isn’t the reason for the increased variability.)

I’ve only talked about primary motor cortex so far: what about the supplementary motor area? Briefly, the researchers found similar changes in variability, but even earlier in learning. In fact the supplementary motor area cells started showing the effect almost at the very beginning of learning.

Phew. What does this all mean? Well: the fact that there’s increased variability only in the pre-movement states, and only in the directionally tuned cells, suggests a ‘searching’ hypothesis – the system may be looking for the best possible network state before the movement, but only in the direction that’s important for the movement. So it appears to be a very local process that’s confined to cells interested in the direction the monkey has to move to complete the task. And further, this variability appears earlier in the supplementary motor area – consistent with the idea that this area precedes the motor cortex when it comes to changing its activity through learning.

This is really cool stuff. We’re starting to get an idea of how the inherent variability in the brain might actually be useful for learning rather than something that just gets in the way. The idea isn’t too much of a surprise to me; I suggest Read Montague’s excellent book for a primer on why the slow, noisy, imprecise brain is (paradoxically) very good at processing information.

--

Mandelblat-Cerf, Y., Paz, R., & Vaadia, E. (2009). Trial-to-trial variability of single cells in motor cortices is dynamically modified during visuomotor adaptation. Journal of Neuroscience, 29(48), 15053-15062. DOI: 10.1523/JNEUROSCI.3011-09.2009

Images copyright © 2009 Society for Neuroscience

Thursday, 8 July 2010

Motor learning changes where you think you are

I’ve covered both sensory and motor learning topics on this blog so far, and here’s one that very much mashes the two together. In earlier posts I have written about how we form a percept of the world around us, and about our sense of ownership of our limbs. In today’s paper the authors investigate the effect of learning a motor task on sensory perception itself.

They performed a couple of experiments, in slightly different ways, which essentially showed the same result – so I’ll just talk about the first one here. Participants made point-to-point reaches while holding a robotic device, in three phases (null, force field and aftereffect) separated by perceptual tests designed to assess where they felt their arm to be. The figure below (Figure 1A in the paper) shows the protocol and the reaching error results:

Motor learning across trials

In the null phase, as usual, participants reached without being exposed to a perturbation. In the force field phase, the robot pushed their arm to the right or to the left (blue or red dots respectively), and you can see from the graph that they made highly curved movements to begin with and then learnt to correct them. In the aftereffect phase, the force was removed, but you can still see the motor aftereffects from the graph. So motor learning definitely took place.

But what about the perceptual tests? It turns out that participants’ estimation of where their arm was changed after learning the motor task. In the figure below (Figure 2B and 2C in the paper) you can see in the left graph that after the force field (FF) trials, hand perception shifted in the same direction as the force. This effect persisted even after the aftereffects (AE) block.


Perceptual shifts as learning occurs

What I think is even more interesting is the graph on the right. It shows not only the right and left (blue and red) hand perceptions, but also the hand perception after 24 hours (yellow) – and, crucially, the hand perception when participants didn’t make the movements themselves but allowed the robot to move them (grey). As you can see, there’s no perceptual shift. It only appears to happen when participants make active movements through the force field, which means that the change in sensory perception is closely linked to learning a motor task.

In some ways this isn’t too surprising, to me at least. In some of my work with Adrian Haith (happily cited by the authors!), we developed and tested a model of motor learning that requires changes to both sensory and motor systems, and showed that force field learning causes perceptual shifts in locating both visual and proprioceptive targets; you can read it free online here. The work in this paper seems to shore up our thesis that the motor system takes into account both motor and sensory errors during learning.

Some of the work I’m dabbling with at the moment involves neuronal network models of motor learning and optimization. This kind of paper, showing the need for changes in sensory perception during motor learning, throws a bit of a spanner into the works of some of that. As it stands, the models tend to treat sensory input as static and merely change motor output as learning progresses. Perhaps we need to think a bit more carefully about that.

---

Ostry DJ, Darainy M, Mattar AA, Wong J, & Gribble PL (2010). Somatosensory plasticity and motor learning. The Journal of Neuroscience, 30(15), 5384-5393. PMID: 20392960

Images copyright © 2010 Ostry, Darainy, Mattar, Wong & Gribble

Wednesday, 23 June 2010

The cost of uncertainty

Back from my girlfriend-induced hiatus and onto a really interesting paper published ahead of print in the Journal of Neurophysiology. This work asks some questions, and postulates some answers, very similar to the line of thinking I’ve been going down recently – which is, of course, the main reason I find it interesting! (The other reason is that they used parabolic flights. Very cool.)

One theory of how the brain performs complex movements in a dynamical environment – like, say, lifting objects – is known as optimal feedback control (OFC). The basic idea is that the brain makes movements that are optimized to the task constraints. For example, to lift an object, the control system might want to minimize the amount of energy used* and at the same time lift the object to a particular position. In OFC we combine these constraints into something called a cost function: how much the action ‘costs’ the system to perform. To optimize the movement, the system simply works to reduce the total cost.
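
As a concrete (and entirely illustrative) example, a quadratic cost of this sort might look like the following sketch. The weights and numbers are my own inventions, not the paper's model:

```python
# Hedged sketch of an OFC-style cost function: accuracy plus effort.
def movement_cost(end_error, commands, w_acc=1.0, w_eff=0.01):
    """Total cost = weighted squared endpoint error
    + weighted sum of squared motor commands (effort)."""
    accuracy_cost = w_acc * end_error ** 2
    effort_cost = w_eff * sum(u ** 2 for u in commands)
    return accuracy_cost + effort_cost

# compare an accurate movement with modest commands...
print(movement_cost(0.1, [1, 2, 2, 1]))
# ...against a perfectly accurate one driven by much bigger commands
print(movement_cost(0.0, [5, 8, 8, 5]))
```

The point of the weights is the trade-off: crank up `w_eff` and the optimal movement becomes lazier and less accurate; crank up `w_acc` and it becomes vigorous and precise. Optimizing the movement just means minimizing this total.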

But where does the system get information about the limb and the task from in the first place so as to optimize its control? There are two sources for knowledge about limb dynamics. The most obvious is reactive: feedback from the senses, from both vision and proprioception (the sense of where the arm is in space). But feedback takes a while to travel to the brain and so another source is needed: a predictive source of knowledge, an internal model of the task and limb dynamics. The predictive and reactive components can be combined in an optimal fashion to form an estimate of the state of the limb (i.e. where it is and how fast it’s going). This ‘state estimate’ can then be used to calculate the overall cost of the movement.
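
The standard way to combine the two sources is inverse-variance weighting, as in a Kalman filter. Here's a minimal sketch with invented numbers; note how the estimate leans on feedback when the prediction is uncertain, which is exactly the regime this paper is interested in:

```python
# Minimum-variance fusion of a prediction with sensory feedback
# (textbook cue-combination; all numbers invented for illustration).
def combine(pred, pred_var, fb, fb_var):
    """Optimally combine two noisy estimates of limb position."""
    w = fb_var / (pred_var + fb_var)          # weight on the prediction
    est = w * pred + (1 - w) * fb
    est_var = (pred_var * fb_var) / (pred_var + fb_var)  # always below both inputs
    return est, est_var

# reliable prediction: the estimate stays close to it
print(combine(pred=10.0, pred_var=1.0, fb=14.0, fb_var=4.0))
# uncertain prediction (e.g. a brand-new task): the estimate leans on feedback
print(combine(pred=10.0, pred_var=9.0, fb=14.0, fb_var=4.0))
```

Notice that the fused variance is always smaller than either source alone – combining is never worse than picking one.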

In today’s paper the authors argue that at the start of a new task, a new internal model has to be learnt, or an old one modified, to deal with the new task demands. So far so uncontroversial. What’s new here is the claim that the cost function being optimized for actually changes when dealing with a new task – because there is higher uncertainty in the internal prediction so the system is temporarily more reliant on feedback. They have some nice data and models to back up their conclusion.

The task was simple: participants had to grip a block and move it up or down from a central position while their position and grip force were recorded. After they’d learnt the task at normal gravity, they had to perform it in microgravity during a parabolic flight, which essentially made their arm and the object weightless. Their grip force increased markedly even though the object was now weightless, and kinematic measures (e.g. position, velocity) changed too; movements took more time, and the peak acceleration was lower. Over the course of several trials the grip force decreased again as participants learnt the task. You can see some representative kinematic data in the figure below (Figure 4 in the paper):

Kinematic data from a single participant


Panels A-D show the average movement trace of one participant in normal (1 g) and microgravity (0 g) conditions, while panels E and F show the changes in acceleration and movement time respectively. The authors argue that the grip force changes at the beginning of the first few trials point towards uncertainty in the internal prediction, which results in the altered kinematics.

To test this idea, they ran a simulation based on a single-joint model of the limb using OFC and the optimal combination of information from the predictive system and sensory feedback. What they varied in this model was the noise, and thus the reliability, in the predictive system. The idea was that as the prediction became less reliable, the kinematics should change to reflect more dependence on the sensory feedback. But that's not quite what happened, as you can see from the figure below (Figure 8 in the paper):

Data and simulation results


Here the graphs show various kinematic parameters. In black and grey are the mean data points from all the participants for the upward and downward movements. The red squares show the parameters the simulation came up with when noise was injected into the prediction. As you can see, they're pretty far off! So what was the problem? Well, it seems that you need to change not only the uncertainty of the prediction but also the cost function that is being optimized. The blue diamonds show what happens when you manipulate the cost function (by increasing the parameter shown as alpha); suddenly the kinematics are much closer to the way people actually perform.

Thus, the conclusion is that when you have uncertainty in your predictive system, you actually change your cost function while you're learning a new internal model. I find this really interesting because it's a good piece of evidence that uncertainty in the predictive system feeds into the selection of a new cost function for a movement, rather than the motor system just sticking with the old cost function and continuing to bash away.

It's a nice paper, but I do wonder why the authors went to all the trouble of using parabolic flights to get the data here. If what they're saying is true, and any uncertainty in the internal model/predictive system is enough to make you change your cost function, this experiment could have been done much more simply – and for many more than the 30 trials they were able to do under microgravity – by just using a robotic system. Perhaps they didn't have access to one, but even so it seems a bit of overkill to spend money on parabolic flights, which are so limited in duration.

Overall though it's a really fun paper with some interesting and thought-provoking conclusions.

*To be precise there is some evidence that it's not the amount of energy used that gets minimized, but the size of the motor command itself (because a bigger command has more variability due to something called signal-dependent noise... I'm not going to go into that though!).

---

Crevecoeur, F., McIntyre, J., Thonnard, J., & Lefevre, P. (2010). Movement stability under uncertain internal models of dynamics. Journal of Neurophysiology. DOI: 10.1152/jn.00315.2010

Images copyright © 2010 The American Physiological Society

Tuesday, 8 June 2010

Mood, music and movement

We all know that music can have an effect on our mood (or, to use a mildly annoying linguistic contrivance, can affect our affect). And being in a better mood has been consistently shown to improve our performance on cognitive tasks, like verbal reasoning; the influence of serene music on such tasks is also known as the 'Mozart effect'. What's kind of interesting is that this Mozart effect has also been shown to be effective on motor tasks, like complex manual tracking.

In the last post I talked a bit about motor adaptation - recalibrating two sensory sources so that the overall percept matches up with the incoming information. Say you're reaching to a target under distorted vision, like wearing goggles with prisms in them that make it look like you're reaching further to the right than you actually are; this is known as a visual perturbation. When you reach forward, the sense of where you are in space (proprioception) sends signals to the brain telling you your arm's gone forward. However, the visual information you receive tells you you've gone right. Some recalibration is in order, and over the course of many reaches you gradually adapt your movements to match the two percepts up.

There are a couple of stages in motor adaptation. The first stage is very cognitive, when you realise something's wrong and you rapidly change your reaches to reduce the perceived error in your movement. The second stage is much less consciously directed, and involves learning to control your arm with the new signals you are receiving from vision and proprioception. When the prism goggles are removed, you experience what is known as a motor aftereffect: you will now be reaching leftwards, the opposite of what appeared to happen when you were originally given the prisms. Over the course of a few trials this aftereffect will decay as the brain shifts back to the old relationship between vision and proprioception.
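
The adaptation-then-aftereffect pattern can be sketched with a toy error-driven model. This is my own illustration, with assumed learning and retention rates, not anything fitted to data: compensation builds up while the prisms are on, and when they come off the residual compensation shows up as an error in the opposite direction.

```python
# Toy single-rate model of prism adaptation and its aftereffect (illustrative).
A, B = 0.98, 0.2   # retention and learning rates (assumed)
SHIFT = 10.0       # prism displacement (degrees)

def run(n_adapt, n_washout):
    x, errors = 0.0, []   # x is the internal compensation for the shift
    for trial in range(n_adapt + n_washout):
        perturb = SHIFT if trial < n_adapt else 0.0  # prisms on, then off
        error = perturb - x          # reach error experienced on this trial
        errors.append(error)
        x = A * x + B * error        # error-driven update with some forgetting
    return errors

errors = run(40, 20)
print(f"initial error {errors[0]:.1f}, end of adaptation {errors[39]:.2f}, "
      f"first washout trial (aftereffect) {errors[40]:.2f}")
```

The first washout trial shows a large error in the opposite direction to the original perturbation – the aftereffect – which then decays over the remaining trials as the compensation washes out.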

All this is very interesting (to me at least!) but what does it have to do with music? Well, today's paper by Otmar Bock looks more closely at how the Mozart effect affects motor systems by studying the influence of music on motor adaptation. The theory goes that if an improved mood can improve cognitive performance, then the first phase of motor adaptation should be facilitated. However, since motor aftereffects are not a conscious cognitive strategy but an unconscious motor recalibration, they should not be affected by the change in mood.

To test this idea, Bock split the participants into three groups and played each group either serene, neutral* or sad music at the beginning of and throughout the experiment. Before listening to the music, after listening for a while, and at the end of the study, participants indicated their mood by marking a sheet of paper. While listening to the music, they performed a motor adaptation task: they had to move a cursor to an on-screen target while the visual feedback of the cursor was rotated by 60°. They couldn't see their hand while they did this, so their visual and proprioceptive signals gave different information.

As expected, the music participants listened to affected their mood: the 'sad' group reported a lower emotional valence, i.e. more negative emotions, than the 'neutral' group, which in turn reported a lower valence than the 'serene' group. During the task, as generally happens in these adaptation tasks where the goal is visual (and of course vision is more reliable!), participants adapted their movements so as to reduce the visual error. The figure below (Figure 2 in the paper) shows this process for the three separate groups, where light grey shows the 'serene' group, mid grey the 'neutral' group and dark grey the 'sad' group:


Adaptation error by group

The first three episodes in the figure show the reaching error during normal unrotated trials (the baseline phase), then from episode 4 onwards the cursor is rotated, sending the error up high (the adaptation phase). The error then decreases for all three groups until episode 29, where the rotation is removed again - and now the error is reversed as participants reach the wrong way (the aftereffect phase). What's cool about this figure is that it shows no differences at all for the 'neutral' and 'sad' groups but there is an obvious difference in the 'serene' group: adaptation is faster for this group than the others. Also, when the rotation is removed, the aftereffects show no differences between the three groups.

So it does seem that being in a state of high emotional valence (a good mood) can improve performance on the cognitive stage of motor adaptation - and it seems that 'serene' music can get you there. And interestingly, mood appears to have no effect on the less cognitive aftereffect stage (though see below for my comments on this).

The two main, connected questions I have about these results from a neuroscience point of view are: 1. how does music affect mood? and 2. how does mood affect cognitive performance? A discussion of how music affects the brain is beyond the scope of this post (and my current understanding) but since the brain is a collection of neurons firing together in synchronous patterns it makes sense that this firing can be regulated by coordinated sensory input like music. Perhaps serene music makes the patterns fire more efficiently, and sad music depresses the coordination somewhat. I'm not sure, but if the answer is something like this then I'd like to know more.

There are still a couple of issues with the study though. Here are the data on emotional valence (Figure 1A in the paper):


Emotional valence by group at three different stages

What you can see here is that emotional valence was the same before (baseline) and after (final) the study, and it's only after listening to the music for a while (initial) that the changes in mood are apparent. Does this mean that as participants continued with the task their mood levelled out, perhaps as they concentrated on the task more, regardless of the background music? Could this be the reason for the lack of difference in the aftereffect phase? After all, when a perturbation is removed participants quickly notice that something has changed, and I would have thought the cognitive processes would swing into gear again, as at the beginning of the adaptation phase.

Also, it's worth noting from the above figure that valence is not actually improved by serene music, but appears to decrease for neutral and sad music. So perhaps it is not that serene music makes us better at adapting, but that neutral/sad music makes us worse? There are more questions than answers in these data I feel.

Hmm. This was meant to be a shorter post than the previous one, but I'm not sure it is! Need to work on being concise, I feel...

*I'm not exactly sure what the neutral sound effect was as there's no link, but Bock states in the paper that it is "movie trailer sound 'coffeeshop' from the digital collection Designer Sound FX®"

---

Bock, O. (2010). Sensorimotor adaptation is influenced by background music. Experimental Brain Research, 203(4), 737-741. DOI: 10.1007/s00221-010-2289-0

Images copyright © 2010 Springer-Verlag