Tuesday 27 July 2010

The noisy brain

Noise is a funny word. When we think of it in the context of everyday life, we tend to focus on distracting background sounds. Distracting from what? Usually whatever we’re doing at the time, whether it’s having a conversation or watching TV. In most cases, what we’re trying to do is interpret some signal – like speech – that’s corrupted by background noise. Neurons in the brain have also often been thought of as sending signals corrupted by noise, which seems to make intuitive sense. But that’s not quite the whole story.

The very basics: neurons ‘fire’ and send signals to one another in the form of action potentials, which can be recorded as ‘spikes’ in their voltage. So when a neuron fires, we call that a spike. The spiking activity of neurons has an inherent variability, i.e. neurons won’t always fire in the same way in identical situations, probably due to confounding influences from metabolic processes and external inputs (like sensory information and movement). In other words, the signal is transmitted with some background ‘noise’. What’s kind of interesting about this paper (and others) is that variability in the neural system is starting to be thought of as part of the signal itself, rather than an inherently corrupting influence on it.

Today we delve back into the depths of neural recording with a study that investigates trial-to-trial variability during motor learning. That is: how does the variability of neurons change as learning progresses, and what can this tell us about the neural mechanisms? This paper gets a bit technical, so hang on to your hats.

One important measure used in the paper is something called the Fano Factor. The variability in neuronal spiking is dependent on the underlying spiking rate, i.e. as the amount of spiking increases, so does the variability; this is known as signal-dependent noise. This effect means that we can’t just look at the raw variability in the spiking activity – we actually have to normalize it by the average spiking activity. The Fano Factor (FF) does precisely this: it’s just the variance of the spike count divided by its mean. It’s basically another way of saying ‘variability’ – I mention it only because it’s necessary to understand the results of the experiment!
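
To make that concrete, here’s a minimal sketch of the calculation in Python, using made-up spike counts rather than anything from the paper:

```python
import numpy as np

# Spike counts for one neuron over repeated trials of the same condition
# (hypothetical numbers, not data from the paper).
spike_counts = np.array([12, 15, 9, 14, 11, 13, 10, 16])

# Fano factor = variance of the spike count divided by its mean.
# A Poisson process gives FF = 1; FF > 1 means more variable than Poisson.
fano_factor = spike_counts.var(ddof=1) / spike_counts.mean()
print(f"Fano factor: {fano_factor:.2f}")
```

Because the variance of a Poisson process scales with its mean, dividing by the mean gives a variability measure that can be compared across neurons with very different firing rates.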

Ok, enough rambling. What did the researchers do? They trained a couple of monkeys on a reaching task where they had to learn a 90° visual rotation, i.e. they had to learn to reach to the right to hit a target in front of them. While learning, their brain activity was recorded and the variability was analysed in two time periods: before the movement, termed ‘preparatory activity’, and around movement onset, termed ‘movement-related activity’. Neurons were recorded from the primary motor cortex, which is responsible for sending motor commands to the muscles, and the supplementary motor area, which is a pre-motor area. In the figure below, you can see some results from motor cortex (Figure 2 A-C in the paper):

Neural variability and error over time

Panel B shows the learning rate of monkeys W (black) and X (grey) – as the task goes on, the error decreases, as expected. Note that monkey W is a faster learner than monkey X. Now look at panel A. You can see that in the preparatory time period (left) variability increases as the errors decrease for each monkey – it happens first in monkey W and then in monkey X. In the movement-related time period (right) there’s no increase in variability. Panel C just shows the overall difference in variability in motor cortex on the opposite (contralateral) side vs. the same (ipsilateral) side: the limb is controlled by the contralateral side, so it’s unsurprising that there’s more variability over there.

Another question the researchers asked was: in which kinds of cells was the variability greatest? In primary motor cortex, cells tend to have a preferred direction – i.e. they will fire more when the monkey reaches towards a target in that direction than in other directions. The figure below (Figure 5 in the paper) shows the results:

Variability with neural tuning

For both monkeys, it was only the directionally tuned cells that showed the increase in variability (panel A). You can see this even more clearly in panel B, where they aligned the monkeys’ learning phases to look at all the cells together. So it seems that it is primarily the cells that fire more in a particular direction that show the learning-related increase in variability. And panel C shows that it’s cells that have a preferred direction closest to the required movement direction that show the modulation.

(It’s worth noting that on the right of panels B and C is the spike count – the tuned cells have a higher spike count than the untuned cells, but the researchers show in further analyses that this isn’t the reason for the increased variability.)

I’ve only talked about primary motor cortex so far: what about the supplementary motor area? Briefly, the researchers found similar changes in variability, but even earlier in learning. In fact the supplementary motor area cells started showing the effect almost at the very beginning of learning.

Phew. What does this all mean? Well: the fact that there’s increased variability only in the pre-movement states, and only in the directionally tuned cells, suggests a ‘searching’ hypothesis – the system may be looking for the best possible network state before the movement, but only in the direction that’s important for the movement. So it appears to be a very local process that’s confined to cells interested in the direction the monkey has to move to complete the task. And further, this variability appears earlier in the supplementary motor area – consistent with the idea that this area precedes the motor cortex when it comes to changing its activity through learning.

This is really cool stuff. We’re starting to get an idea of how the inherent variability in the brain might actually be useful for learning rather than something that just gets in the way. The idea isn’t too much of a surprise to me; I suggest Read Montague’s excellent book for a primer on why the slow, noisy, imprecise brain is (paradoxically) very good at processing information.

--

Mandelblat-Cerf, Y., Paz, R., & Vaadia, E. (2009). Trial-to-Trial Variability of Single Cells in Motor Cortices Is Dynamically Modified during Visuomotor Adaptation. Journal of Neuroscience, 29(48), 15053-15062. DOI: 10.1523/JNEUROSCI.3011-09.2009

Images copyright © 2009 Society for Neuroscience

Wednesday 21 July 2010

Lazy beats sloppy

Today I give in to my inner lazy person (who is, in fact, quite similar to my outer lazy person) and talk about a paper after I’ve just been to a journal club, rather than before. The advantages are that I was reading the paper anyway and I’ve just had an hour of discussion about it so I don’t actually have to think of things to say about it myself. The disadvantages are that, um, it’s lazy? And that’s bad? Perhaps. But I still think it’s better, as we shall see, than sloppy.

The premise of the paper harks back to my earlier post on visual dominance and multisensory integration. It’s been well known in the literature for a while that if you flash a couple of lights while at the same time playing auditory beeps, an interesting little illusion occurs. If participants are asked to count the number of flashes, and the number of flashes matches the number of beeps, then they almost always get the answer right. But if there are two flashes and one beep, or one flash and two beeps, then they’re much more likely to say there was one flash or two flashes respectively. The figure below (Figure 1 in the paper) illustrates this:

Illusion when the hand is at rest

In the figure, you can see that the bars for one beep and one flash (far left black bar) and two beeps and two flashes (far right white bar) are at heights 1 and 2 respectively, which illustrates the number of perceived flashes. That is, the number of perceived flashes is just what you’d expect – one for one flash, two for two flashes. However, the middle bars, which show the one beep/two flash and two beep/one flash conditions, are at intermediate heights, showing the presence of the illusion. This figure actually demonstrates the first problem with the paper, which is that the figures are pretty difficult to interpret. I know I wasn’t alone in the lab in finding them confusing, anyway.

What the authors were interested in is whether a goal-directed movement could alter visual processing, and they used the illusion to probe this. Participants had to make point-to-point reaches from a start point to a target. During the reach their susceptibility to the illusion was tested at the target point – but the test began a variable time after the start of the movement, between 0 and 250 ms. That is: sometimes the flashes and beeps occurred at the start of the movement when the arm was moving slowly, sometimes when it was halfway through and thus moving faster, and sometimes at the end when it was moving slowly again.

The experimenters found that, when there were two flashes and one beep, participants were less likely to see an illusion during the middle part of their movement than during the beginning and end. That is, they were more likely to get it right when they were moving faster. The trouble starts when you look a bit closer at the effect they’ve got – it’s pretty weak. There seems to be a lot of noise in the data, and the impression that they’re grasping at straws a little isn’t helped by the aforementioned sloppy figures.

Having said that, the stats do hold up. What might be the explanation for this kind of effect? The multisensory integration argument is that the sensory modality (e.g. vision) with the least noise should be the one that is prioritized. So when the arm is moving quickly, there’s more noise in the motor system compared with the visual system and thus you’re better at determining how many flashes there are. I’m not sure I buy this; the illusion is about the visual and auditory systems, after all. I’m also not sure I see why you’d be less susceptible to the illusion when you’re moving than when you’re not moving, for example. The authors claim that the limb movement “requires extensive use of visual information” but again I’m not so sure. When we reach for objects we generally take note of where our arm is, look at the object and then move the arm to the object without looking at the arm again.
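
For what it’s worth, the standard formal version of that argument is maximum-likelihood cue combination, where each modality is weighted by its reliability (inverse variance). Here’s a toy sketch of the general idea – the numbers are invented and this is the generic model, not the authors’ analysis:

```python
def combine_cues(mu_v, var_v, mu_a, var_a):
    """Reliability-weighted (maximum-likelihood) combination of two cues.
    Each cue is weighted by its inverse variance, so the less noisy
    modality dominates the combined estimate."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
    combined = w_v * mu_v + (1 - w_v) * mu_a
    combined_var = 1 / (1 / var_v + 1 / var_a)
    return combined, combined_var

# Hypothetical case: vision says two flashes, audition says one beep.
# With reliable vision the percept stays near two; with noisy vision
# the single beep drags the perceived count towards one.
print(combine_cues(mu_v=2.0, var_v=0.1, mu_a=1.0, var_a=0.5))  # vision reliable
print(combine_cues(mu_v=2.0, var_v=0.5, mu_a=1.0, var_a=0.1))  # audition reliable
```

On this account the illusion occurs because audition is usually more reliable than vision for counting brief events in time, so the beeps get a large weight.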

So, a weak effect that isn’t well explained. That wouldn’t be so bad, but the clarity of the paper is also lacking. There’s also the question of why, if they had such a weak effect, they didn’t do another experiment or two to tease out what was really going on. I do think the slightly larger problem here is the review process at PLoS. The journal is open access so anyone can read the paper free online, which I am very much in favour of, but its review process is biased towards assessing only the methods and results of a paper rather than the introduction/discussion. I go back and forth over whether this is a good thing. Some journals reject papers based on novelty (a.k.a. coolness) whereas it appears that PLoS strives to accept well-performed science regardless of how ‘interesting’ (and I use the term in quotes advisedly) the result is.

In this case I think that, while the science is good, it would be a much better paper if it went a bit more into depth with a couple of extra experiments exploring these effects more carefully – and if it had figures that were perhaps a bit easier to comprehend.

--

Tremblay, L., & Nguyen, T. (2010). Real-time decreased sensitivity to an audio-visual illusion during goal-directed reaching. PLoS ONE, 5(1). PMID: 20126451

Image copyright © 2010 Tremblay & Nguyen

The name defines the thing

So, astute readers may notice a name change here – I've decided to go back to my old WordPress blog title (which never had more than five posts over its year-long lifespan, a perfect example of my habit of enthusiastically starting projects and never following through). I used to own the domain motorchauvinist.com but no longer. Oh well. Blogger will do for the moment.

Why motor chauvinism? I'd like to disassociate myself from the idea that I am in any way interested a) in cars and b) in denigrating women! I first came across the term in a paper written in 2001:
"From the motor chauvinist's point of view the entire purpose of the human brain is to produce movement." -- Wolpert, Ghahramani & Flanagan (2001)
The authors go on to explain how movement underlies everything we do, our every interaction with the world, all communication (speech, sign language, gestures and writing), and so on and so forth. While I rather like the idea, I want to make clear that this blog isn't going to specifically advocate for the notion. Rather, I just thought that, since the blog is pretty specifically about movement neuroscience and not just about reading random papers, it might be fun to redefine it a little more sharply.

I hope to have some guest posters who will be able to talk more about things that I don't know much about, but that's a plan for later. Right now, welcome again to my blog, which will be doing the same kinds of things it has been doing for a couple of months, just under a different name.

Oh – I apologise to those who have linked to the blog under the previous name, as those links are now unlikely to work. That's why I've changed the name now rather than after a couple more months. Assuming, as I note above, that I stick with it...

The paper that the quote is from is very good, by the way, and you should definitely read it if you can. It also contains Calvin & Hobbes cartoons. What's not to like?

--

Wolpert, D. M., Ghahramani, Z., & Flanagan, J. R. (2001). Perspectives and problems in motor learning. Trends in Cognitive Sciences, 5(11), 487-494. PMID: 11684481

Monday 19 July 2010

Far out is not as far out as you think

Proprioception is the sense of where your body is in space. It is one of several sources of sensory information the brain uses to figure out where your limbs and the rest of you are, along with vision and the semicircular canals of the vestibular system (though these are more important for balance). Proprioceptive information comes from receptors that signal the lengths of muscles, the positions of joints and how much the skin has stretched.

How, if at all, do the accuracy and precision of this information vary across different tasks and limb configurations? To test this, the authors of today’s study got their participants to perform three experimental tasks that involved matching perceived limb position without being able to see the arm. In the first task, participants used a joystick to rotate a virtual line on a screen positioned over their limb until they decided that it was in the same direction as their forearm. In the second task, they used a joystick to move a dot around until they decided that it was over their index finger. In the third task, they again saw a virtual line on the screen, but this time they had to actively move their forearm until they decided their forearm was in line with it.

The results were kind of interesting: in all three cases, participants tended to overestimate how extreme their limb position was; i.e. when the arm was quite flexed they judged it to be even more flexed, and when it was quite extended they judged it to be even more extended. This is quite confusing to explain, but the figure below (Figure 4A in the paper) should help:

Estimates of arm position from one participant

The black lines are the actual position of the arm of a representative participant in task 1, with flexion on the left and extension on the right. Blue lines are the participant’s estimates of arm position, and the red line is the average of the estimates. You can see that when the arm is flexed the participant guesses that it’s more flexed than it actually is, with the corresponding result for when the arm is extended. The researchers found no differences in accuracy between the three tasks, but they did find differences in precision – participants were much more precise, i.e. the spread of their responses was lower, in the passive fingertip task and the active elbow movement task (tasks 2 and 3).
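
As an aside, the accuracy/precision distinction is easy to make concrete: accuracy is the systematic bias of the estimates, precision is their spread. A toy calculation with invented angles (not the study’s data):

```python
import numpy as np

true_angle = 150.0  # a fairly extended elbow angle in degrees (invented)
estimates = np.array([155.2, 158.1, 153.7, 157.4, 156.0])  # repeated judgements

bias = estimates.mean() - true_angle  # accuracy: systematic over/underestimation
spread = estimates.std(ddof=1)        # precision: trial-to-trial scatter

print(f"bias: {bias:+.1f} deg")     # positive here, i.e. overestimating extension
print(f"spread: {spread:.1f} deg")  # smaller spread = higher precision
```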

So what? Well, these results give us an insight into how proprioception works. The authors argue that the bias towards thinking you’re more flexed/extended than you really are comes from the overactivity of joint and skin receptors as the limb reaches its extreme positions. Why might these receptors become overactive at extreme positions? Possibly because it allows us to sense ahead of time when we’re getting to a point of movement that is mechanically impossible for the limb to perform, either because we’re trying to flex it too much or we’re trying to straighten it too much. Push too hard at either extreme – muscles are quite strong – and you could damage the limb. Better for the system to make you stop pushing earlier by giving you a signal that you’re further along than you thought. I think it’s a nice hypothesis.

I quite like this study, as it’s another one of those not-wildly-exciting-but-useful-to-know kinds of papers. While the wildly exciting stuff is great, I think that too often the worthy, low-key stuff like this is unfairly overshadowed. Science is about huge leaps and paradigm shifts much less than it’s about the slow grind of data making possible incremental progress on various questions. And I’m not just saying that because that’s what all my papers are like!

---

Fuentes, C., & Bastian, A. (2009). Where Is Your Arm? Variations in Proprioception Across Space and Tasks. Journal of Neurophysiology, 103(1), 164-171. DOI: 10.1152/jn.00494.2009

Image copyright © 2010 The American Physiological Society

Monday 12 July 2010

It's better to keep what works than to try something new

It seems I just can’t leave this topic alone. Last week I blogged about a paper on use-dependent learning, which discussed how it’s not only the errors you make that contribute to your learning of a motor task, but also the fact that your movements become more similar to movements you’ve already made. Today’s paper deals with something similar, but from a different perspective: that of optimal feedback control.

I discussed OFC in a previous post, but a quick recap of the theory: to make a movement, the brain needs to optimize the motor commands it sends out to minimize both effort (or noise in the system) and error (i.e. how far off the target you are). So an optimal solution to reaching for a pint in the pub should involve the minimization of both error and effort to acquire the target in a timely manner.
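
In its simplest form you can write that trade-off down as a cost function with an error term and an effort term. Here’s a minimal sketch – the quadratic form and the weights are illustrative assumptions, not the specific model from the paper:

```python
import numpy as np

def movement_cost(error, commands, w_error=1.0, w_effort=0.05):
    """Toy optimal-control cost: penalize squared endpoint error plus
    the summed squared motor commands (effort). Weights are arbitrary."""
    commands = np.asarray(commands, dtype=float)
    return w_error * error**2 + w_effort * np.sum(commands**2)

# Two hypothetical ways of reaching for the pint: a forceful reach that
# slightly overshoots vs a gentler reach that lands closer to the target.
print(movement_cost(error=1.5, commands=[8, 8, 8]))  # forceful, less accurate
print(movement_cost(error=0.2, commands=[4, 4, 4]))  # gentler, more accurate
```

An optimal controller would pick the commands that minimize this total cost, balancing missing the pint against knocking it over.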

In the study I’ll discuss today, the authors make the claim that if this optimization happens at all it is local, not global. That is, people tend not to optimize to find the best possible solution, but rather they optimize until they find one that works well enough and then stick to it – even when there’s a better solution overall. To investigate this, the experimenters attached participants to a robotic wrist device that pushed their wrist back and forth at a certain frequency. Participants saw a visual target on the screen and a cursor representing their wrist amplitude; they had to keep the amplitude below a certain level to keep the cursor in the target.

The task was rather cunningly set up so that the participants could perform it in one of two ways: either by co-contracting their wrist muscles strongly against the perturbation, or by relaxing the muscles, which obviously requires less effort. (For an analogy, imagine riding a bike down a cobbled hill; you can either hold the handlebars really stiffly or relax and let the jolting push you around a bit, but if you do something in the middle the jolting will make you fall over.) Participants were either given ‘free’ trials where they could choose which strategy to use, or ‘forced’ trials where they were pushed into a certain strategy at the start of the task by visual feedback.

After being given three ‘free’ trials they were then given three ‘forced’ trials in the strategy they didn’t pursue the first time, so if they had freely chosen the ‘relaxed’ strategy, they were pushed into the ‘co-contract’ strategy. Then they were given three more ‘free’ trials and then three more ‘forced’ trials in the other strategy, and finally three more ‘free’ trials. You can see a representative participant in the figure below (part of Figure 2A in the paper):


Co-activation in one representative participant across time

Here the dark areas are areas of low movement amplitude at certain levels of maximum voluntary co-activation – i.e. they’re the areas you want to stay in to perform the task correctly. If you co-contract too much or too little, you’ll end up in the white area in the middle and you’ll fail the task. The traces show the five sets of trials: the first ‘free’ set is white, then the first ‘forced’ set is blue, then the next ‘free’ set is green, then the next ‘forced’ set is yellow, and the final ‘free’ set is red. What you can see clearly from this graph is that in the ‘free’ trials participants tended to follow the strategy they’d been pushed into in the previous set of ‘forced’ trials, regardless of whether it was actually a lower-effort solution. That is, subjects tended to do what they’d done before, whether or not it was a better solution.

Sound familiar? As in use-dependent learning, participants tended to do things they’d already done rather than search for a new solution. And again, it makes sense to me that this would happen. The authors of this paper argue that the brain is forming ‘motor memories’ that are also used in the optimization process, and that the optimization itself is thus local and not global. I guess I can buy that, but only in the sense that these ‘motor memories’ are patterns of activation that have been learnt by the network. It takes metabolic energy to create new connections and learn a new pattern, so any optimization process would have to take this into account along with error and effort.
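
One way to see why a local optimizer sticks with what it starts near is to run gradient descent on a cost landscape with two valleys: where you end up depends entirely on where you begin. This is purely illustrative – the cost function below is made up, not taken from the paper:

```python
# Made-up 'effort' landscape over a strategy variable x, with two valleys:
# a shallow one near x = +2 (think 'co-contract') and a deeper one near
# x = -2 (think 'relax'). The global minimum is the deeper valley.
def cost(x):
    return (x**2 - 4)**2 + 2 * x

def grad(x):
    return 4 * x * (x**2 - 4) + 2

def descend(x, lr=0.01, steps=2000):
    # plain gradient descent: follow the local slope downhill
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(descend(1.5), cost(descend(1.5)))    # settles near +2: local minimum
print(descend(-1.5), cost(descend(-1.5)))  # settles near -2: global minimum
```

A purely local optimizer never crosses the hill between the valleys, which is exactly the kind of behaviour the ‘forced’ trials seem to expose.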

It might even explain the existence of straight-line movements in non-optimal situations: if you’ve moved in straight lines all your life because it’s an efficient and effective way to move, and you’re suddenly placed in an environment where moving in a straight line is more effortful and therefore non-optimal, it’s going to be very difficult to unlearn the deep network optimization you’ve been building your whole life.

There’s more to the paper too; I think it’s great.

---

Ganesh, G., Haruno, M., Kawato, M., & Burdet, E. (2010). Motor memory and local minimization of error and effort, not global optimization, determine motor behavior. Journal of Neurophysiology. DOI: 10.1152/jn.01058.2009

Image copyright © 2010 The American Physiological Society

Thursday 8 July 2010

Motor learning changes where you think you are

I’ve covered both sensory and motor learning topics on this blog so far, and here’s one that very much mashes the two together. In earlier posts I have written about how we form a percept of the world around us, and about our sense of ownership of our limbs. In today’s paper the authors investigate the effect of learning a motor task on sensory perception itself.

They performed a couple of experiments, in slightly different ways, which essentially showed the same result – so I’ll just talk about the first one here. Participants held a robotic device and made point-to-point reaches in three phases (null, force field and aftereffect), separated by perceptual tests designed to assess where they felt their arm to be. The figure below (Figure 1A in the paper) shows the protocol and the reaching error results:

Motor learning across trials

In the null phase, as usual, participants reached without being exposed to a perturbation. In the force field phase, the robot pushed their arm to the right or to the left (blue or red dots respectively), and you can see from the graph that they made highly curved movements to begin with and then learnt to correct them. In the aftereffect phase, the force was removed, but you can still see the motor aftereffects from the graph. So motor learning definitely took place.
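
Trial-by-trial error curves like this are often summarized with a simple state-space model of adaptation, in which an internal estimate of the perturbation is updated by a fraction of each trial’s error. A sketch with invented parameters (this is the generic model, not the authors’ analysis):

```python
import numpy as np

A, B = 0.98, 0.2  # retention and learning rates (illustrative values)
# Trial schedule: null phase, force field on, then aftereffect phase.
force = np.concatenate([np.zeros(50), np.ones(100), np.zeros(50)])

x = 0.0        # internal estimate of the perturbation
errors = []
for f in force:
    error = f - x          # movement error on this trial
    errors.append(error)
    x = A * x + B * error  # update the estimate from the error

# errors jump when the field comes on, decay with learning, and flip
# sign when the field is removed – the aftereffect seen in the figure.
```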

But what about the perceptual tests? It turns out that participants’ estimation of where their arm was changed after learning the motor task. In the figure below (Figure 2B and 2C in the paper) you can see in the left graph that after the force field (FF) trials, hand perception shifted in the opposite direction to the force direction. [EDIT: actually it's in the same direction; see the comments section!] This effect persisted even after the aftereffects (AE) block.


Perceptual shifts as learning occurs

What I think is even more interesting is the graph on the right. It shows not only the right and left (blue and red) hand perceptions, but also the hand perception after 24 hours (yellow) – and, crucially, the hand perception when participants didn’t make the movements themselves but allowed the robot to move them (grey). As you can see, there’s no perceptual shift in that passive condition. The shift only appears when participants make active movements through the force field, which means that the change in sensory perception is closely linked to learning a motor task.

In some ways this isn’t too surprising, to me at least. In some of my work with Adrian Haith (happily cited by the authors!), we developed and tested a model of motor learning that requires changes to both sensory and motor systems, and showed that force field learning causes perceptual shifts in locating both visual and proprioceptive targets; you can read it free online here. The work in this paper seems to shore up our thesis that the motor system takes into account both motor and sensory errors during learning.
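
A crude way to express that thesis in the same state-space style as the sketch above: let each trial’s error drive both a motor correction and a slower realignment of perceived hand position. The learning rates and the 90/10 split are entirely hypothetical:

```python
import numpy as np

force = np.concatenate([np.zeros(50), np.ones(100)])  # null, then force field
motor, percept_shift = 0.0, 0.0
for f in force:
    error = f - (motor + percept_shift)  # residual movement error
    motor += 0.2 * 0.9 * error           # most correction adjusts the motor command
    percept_shift += 0.2 * 0.1 * error   # a little realigns perceived hand position

print(f"perceptual shift after learning: {percept_shift:.2f}")  # nonzero
```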

Some of the work I’m dabbling with at the moment involves neuronal network models of motor learning and optimization. This kind of paper, showing the need for changes in sensory perception during motor learning, throws a bit of a spanner into the works of some of that. As they stand, the models tend to treat sensory input as static and merely change motor output as learning progresses. Perhaps we need to think a bit more carefully about that.

---

Ostry, D. J., Darainy, M., Mattar, A. A., Wong, J., & Gribble, P. L. (2010). Somatosensory plasticity and motor learning. The Journal of Neuroscience, 30(15), 5384-5393. PMID: 20392960

Images copyright © 2010 Ostry, Darainy, Mattar, Wong & Gribble

Monday 5 July 2010

Baby (not quite) steps

Many non-scientists misunderstand the basic way science works. While there are indeed huge discoveries that fundamentally change the way we think about things, the vast majority of published papers are a steady plod onwards, adding very modest amounts to the staggering array of human knowledge. Often seismic shifts in scientific opinion come not from great discoveries but from many scientists reading the literature, arguing among themselves and drawing different conclusions from the slow burn of new thoughts and experiments. Such is the case with this paper: it is no Nobel prize-winner, but a small and useful addition to the literature.

Also, it is about babies. Yay babies!

Babies: hard to test but fun

Babies are hard to test. This is true for several reasons: they can’t give informed consent to studies, they can’t follow instructions and they can’t give verbal feedback. But that doesn’t stop people trying. Parents can give consent for their children; behaviours can be elicited by non-verbal means and recorded in lieu of verbal feedback. And of course it’s interesting to study babies in the first place to look at the development of the motor system.

In this paper, the authors look at clinical observation of four motor behaviours: abdominal progression (i.e. crawling), sitting motility, reaching and grasping motility. The authors identify two distinct stages in infant motor development after birth: primary variability and secondary variability. Primary variability is characterized by general movements of the whole body that don’t appear to be geared towards accomplishing a task. Secondary variability is much more task-specific and can be adapted to specific situations. It’s the transitions from primary to secondary variability in these motor behaviours that the authors are interested in.

To test when their infant participants began to make adaptive movements, they tested children at intervals ranging from 3 months to 18 months of age. Different types of movements were induced – for example, by trying to get children to reach for toys or crawl towards them. The movements were recorded on video and two of the study’s authors scored the videos for whether the movements showed ‘no selection’ or ‘adaptive selection’. Since I am interested mainly in reaching, here are the results from the reaching scores (Figure 4 in the paper):

Selection in infant reaching movements across development

You can see that as the age of the baby increases in months, more ‘no selection’ movements occur (hatched bars). ‘Adaptive selection’ movements (black bars) start appearing at around 6 months, and they increase significantly in frequency between 6 and 8 months and again between 12 and 15 months.

When rating videos like this, the reliability of the rating is very important. The authors tested inter-rater reliability by having two raters, and also intra-rater reliability by having the same rater score the videos once and then again after a month. Mostly they found that the reliability was very high, though it seems to me that they should perhaps have had a couple more raters in there just in case. To their credit, they do admit this as a limitation of their study.
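
For two raters assigning categorical scores like these, agreement is typically quantified with something like Cohen’s kappa, which corrects raw agreement for chance. A sketch with invented ratings (the paper may well have used a different statistic):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # chance agreement if both raters picked categories independently
    expected = sum(c1[k] * c2[k] for k in c1) / n**2
    return (observed - expected) / (1 - expected)

# Invented scores: N = 'no selection', A = 'adaptive selection'.
rater1 = list("NNANAANNAA")
rater2 = list("NNANAANNAN")
print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")  # 1 = perfect, 0 = chance
```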

So, assuming that the rating is reliable, what do we now know? Well, it’s kind of interesting that for the four behaviours observed, the onset of adaptive selection in the video ratings is in all cases a few months later than what you find with neurophysiological testing (as people have done before). That is, if you measure brain activity (see the first picture in this post!) or muscle activity, you can observe patterns of motor activity becoming noticeably more synchronized well before you can observe these changes by eye.

It’s useful to know this because you can’t hook every baby that comes into your busy clinic up to a set of wires to record their brain and muscle activity, nor spend hours analyzing the results of these investigations. What you can do as a busy clinician is take note of the types of movements and when the transitions appear – as the authors note at the end, it would be interesting to do this kind of study on the ages of transition in infants with a high probability of developing motor disorders (such as cerebral palsy).

Overall verdict: a nice short study with some possible clinical impact.

---

Heineman, K., Middelburg, K., & Hadders-Algra, M. (2010). Development of adaptive motor behaviour in typically developing infants. Acta Paediatrica, 99(4), 618-624. DOI: 10.1111/j.1651-2227.2009.01652.x

Baby EEG image copyright © 2010 Apple Inc.

Image from paper copyright © 2009 Heineman, Middelburg & Hadders-Algra