
Friday, 27 August 2010

Walking sub-optimally is the way forward

Today we’re going to do something a little different. I’ve been posting a lot about reaching movements, because that’s what I’m most interested in, but it may surprise you to learn that humans do actually have the capacity to move other parts of their bodies as well. I know, I’m as shocked as you are… so! The paper I’m going to cover is about the regulation of step variability in walking. It’s a little longer and more complex than normal, so strap yourselves in.

Walking is a hard problem, and we’re not really sure how we do it. Like reaching, there are many muscles to coordinate in order to make a step forward. Unlike in arm reaching, these coordinated steps need to follow one another cyclically in such a way as to keep the body stable and upright while simultaneously moving it over terrain that might well be rough and uneven. Just think for a moment about how difficult that is, and what different processes might be involved in the control of such movements.

One question that remains unanswered is how we control variability in walking. It’s a simple matter to control average position or velocity, but the variation in these parameters between steps is still unexplained. It is pretty well established that over the long-term people tend to try to minimize energy costs while walking – hence the gait we learn to adopt over the first few years of life. But there’s evidence that such a seemingly “optimal” strategy is not the whole story.

Consider walking on a treadmill. What’s the primary goal of continuous treadmill walking? Well, it’s to not fall off. The researchers in the article took that idea and reasoned that because the treadmill is moving at a constant speed, the best way not to fall off is to move at a constant speed yourself. That’s not the only strategy of course – you could also do something a little more complicated like make some short, quick steps followed by some long, slow ones in sequence, which would also keep you on the treadmill.

To test how the parameters varied, the researchers used five different walking speeds. You can see this in the figure below (Figure 3 in the paper):

Human treadmill walking data with speed as percentage of preferred walking speed (PWS)

L is stride length, T is stride time and S is stride speed. Panels A-C in the figure show how these values change across the five treadmill speeds – stride length increases, stride time decreases and stride speed increases. Panels D-F show the variability (σ) in these parameters. Panels G-I show something slightly more complex: a value called α, a measure of persistence, i.e. how much or how little deviations in each parameter were corrected on subsequent strides. Values of α > ½ mean there was little correction (deviations were allowed to persist), whereas values of α < ½ mean there was strong correction. So panels G-I show that fluctuations in stride length and stride time were generally left uncorrected, whereas fluctuations in stride speed were corrected rapidly from one stride to the next.

Read that last paragraph through again to make sure you get it. It will be important shortly!
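
(If you want a feel for where a number like α comes from: persistence exponents like this are typically estimated with detrended fluctuation analysis. Below is a minimal DFA sketch in Python – my own simplified version, not the authors’ exact analysis pipeline – in which white noise, i.e. fully corrected deviations, comes out near α ≈ ½, and an uncorrected random walk comes out well above it.)

```python
import numpy as np

def dfa_alpha(x, scales=None):
    """Estimate a persistence exponent alpha from a 1-D series using
    simple detrended fluctuation analysis (linear detrending)."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                # integrated 'profile' of the series
    n = len(y)
    if scales is None:
        scales = np.unique(np.logspace(np.log10(4), np.log10(n // 4), 15).astype(int))
    flucts = []
    for s in scales:
        rms = []
        for i in range(n // s):                # non-overlapping windows of length s
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear fit per window
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)  # slope of the log-log plot
    return alpha

rng = np.random.default_rng(0)
print(dfa_alpha(rng.standard_normal(2000)))             # ~0.5: deviations fully corrected
print(dfa_alpha(np.cumsum(rng.standard_normal(2000))))  # ~1.5: deviations never corrected
```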

So: now we have a measure of human walking parameters. The question is, how are these parameters produced by the motor control system? That is, what does the system care about when it initiates and monitors walking? Well, one thing we can get from the data here is that the system seems to care about stride speed, but doesn’t care about stride time and stride length individually. And if that’s the case, then as long as the coupled length and time lie on a line that defines the speed, the system should be happy. A line a bit like this (figure 2B in the paper):

Human stride parameters lie along line of constant speed

The figure shows the GEM (which stands for Goal Equivalent Manifold, essentially the line of constant speed) plotted in the plane of stride time and stride length. The red dots show some data. Right away you can see that the dots generally lie along the line. Ignore the green arrows, but do take note of the blue ones – they show deviations tangent to (δT) and perpendicular to (δP) the line. Why is δT so much bigger than δP? Because perpendicular deviations push you off the line and thus interfere with the goal, whereas tangential deviations don’t. So the system is either not stepping off the line much in the first place or correcting heavily when it does.
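
To make the δT/δP idea concrete, here’s a little sketch of how you could decompose stride-by-stride (time, length) data into components along and perpendicular to a constant-speed line. It’s a bare-bones illustration with made-up numbers (the paper works in normalized units and does quite a bit more), so treat it as a cartoon of the geometry rather than the authors’ analysis.

```python
import numpy as np

def gem_decompose(T, L, v_goal):
    """Split stride (time, length) deviations into components tangent and
    perpendicular to the constant-speed line L = v_goal * T."""
    T, L = np.asarray(T, float), np.asarray(L, float)
    T0 = T.mean()                      # operating point on the GEM
    L0 = v_goal * T0
    tangent = np.array([1.0, v_goal])  # direction along the line
    tangent /= np.linalg.norm(tangent)
    normal = np.array([-tangent[1], tangent[0]])   # perpendicular direction
    dev = np.stack([T - T0, L - L0], axis=1)
    delta_T = dev @ tangent            # goal-equivalent: speed unchanged
    delta_P = dev @ normal             # goal-relevant: speed changes
    return delta_T, delta_P

# Synthetic strides scattered mostly along a 1.3 m/s GEM
rng = np.random.default_rng(1)
T = 1.1 + 0.05 * rng.standard_normal(500)
L = 1.3 * T + 0.01 * rng.standard_normal(500)
dT, dP = gem_decompose(T, L, v_goal=1.3)
print(dT.std(), dP.std())              # spread along the line >> spread off the line
```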

Here’s one more figure (Figure 5C and D in the paper) showing the variability (σ) and persistence (α) for δT and δP :

Variability and persistence of deviations

You can see that δT is much more variable than δP, as you might expect from the shape of the data shown in the second figure. You can also see something else, however: the persistence for δP is less than ½, whereas the persistence for δT is greater than ½. Thus, the system cares very much about correcting not just stride speed but the combination of stride time and stride length that take the stride speed away from the goal speed.

Great, you may think, a lot of funny numbers to tell us that the system cares about maintaining a constant speed when it’s trying to maintain a constant speed! What do you scientists get paid for anyway? The cool thing about this paper is that the researchers are trying to figure out precisely how the brain produces these numbers. It turns out that if you just use an ‘optimal’ model that corrects for δP while ignoring δT, you don’t get the same numbers. So that can’t be it. How about if you specify in your model that you have to keep to a certain speed – say the same average speed as in the human data? That doesn’t work either. The numbers are better, but they’re not right.

The solution that seems to work best is one in which the deviations off the GEM line (i.e. δP) are overcorrected. This controller is sub-optimal: efficiency is being sacrificed for tight control over this one parameter. Thus, humans don’t appear to simply minimize energy loss – they also perform more complex corrections depending on the task goal.
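
Here’s a toy illustration of what ‘overcorrection’ does to stride-to-stride statistics. The controller below is not the one in the paper – it’s just a one-parameter update where each stride cancels a fraction (the gain) of the previous goal-relevant deviation before new noise is added. A gain above 1 overcorrects, and you can see the signature in the negative stride-to-stride correlation (the α < ½ regime).

```python
import numpy as np

def simulate_deviations(gain, n_strides=2000, noise_sd=1.0, seed=0):
    """Toy stride-to-stride controller: each stride cancels a fraction `gain`
    of the previous goal-relevant deviation, then new motor noise is added."""
    rng = np.random.default_rng(seed)
    d = np.zeros(n_strides)
    for n in range(1, n_strides):
        d[n] = (1.0 - gain) * d[n - 1] + noise_sd * rng.standard_normal()
    return d

for g in (0.2, 1.0, 1.4):
    d = simulate_deviations(g)
    # lag-1 autocorrelation as a crude stand-in for persistence:
    # positive ~ alpha > 1/2 (sluggish correction), negative ~ alpha < 1/2 (overcorrection)
    r1 = np.corrcoef(d[:-1], d[1:])[0, 1]
    print(f"gain {g}: lag-1 autocorrelation {r1:+.2f}")
```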

I’ve covered in a previous post the inkling that this might be the case; while we do tend to minimize energy over the long term, in the short term the optimization process is much more centred around the particular goal, and people are very good at exploiting the inherent variability in the motor system to perform the task more easily. This paper does a great job of testing these hypotheses and providing models to explain how this might happen. What I’d be interested to see in the future is an explanation of why the system is set up to overcorrect like that in the first place – is it overall a more efficient way of producing movement than just a standard optimization over all parameters? Time, perhaps, will tell.

--

Dingwell JB, John J, & Cusumano JP (2010). Do humans optimally exploit redundancy to control step variability in walking? PLoS Computational Biology, 6(7). PMID: 20657664

Images copyright © 2010 Dingwell, John & Cusumano

Monday, 23 August 2010

Learning without thinking

Scratching around on the internet this afternoon on my first day back from holiday, I was kind of reluctant to dive straight back into taking papers apart. After all, I have spent the majority of the last three weeks drinking beer and eating pies in the UK, and the increase in my waistline has most likely been mirrored by the decrease in my critical faculties (as happens when you spend time away from the cutting edge). However, I ran across a really cool little article that reminded me just why I enjoy all this motor control stuff. So here goes nothing!

There’s been some work in recent years on the differences between implicit and explicit motor learning – that is, the kind of learning the brain does by itself, relying on cues from the environment, vs. using a well-defined strategy to perform a task. For example, learning to carry a full glass of water without spilling by just doing it and getting it wrong a lot until you implicitly work out how, or by explicitly telling yourself, “Ok, I’m going to try to keep the water as level as possible.” A fun little study on this was performed by Mazzoni and Krakauer (2006), in which they showed that giving their participants an explicit strategy in a visuomotor rotation task (reaching to a target while the visual feedback of the reach is rotated) actually hurt their performance. Essentially they started off being able to perform the task well using the explicit strategy, which was something like ‘aim for the target to the left of the one you need to hit’. However, as the task went on the implicit system doggedly learned the rotation – and conflicted with the explicit strategy – so that the participants were making more errors at the end than at the beginning.
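
That drift is easy to reproduce with a toy state-space model of implicit adaptation – this is my own simplified sketch, not Mazzoni and Krakauer’s analysis, and the retention/learning-rate numbers are invented. The point is just that if the implicit system keeps learning from the mismatch between the cursor and the aimed location, target error grows even though the explicit strategy was spot-on on trial one.

```python
# Toy sketch of drift under an explicit strategy (invented parameters).
rotation = 45.0           # visual rotation applied to the cursor (degrees)
aim = -rotation           # explicit strategy: aim at the neighbouring target
A, B = 0.99, 0.1          # retention and learning rate of the implicit system

x = 0.0                   # implicit adaptation state
errors = []
for trial in range(80):
    hand = aim + x                    # where the hand actually goes
    cursor = hand + rotation          # what the participant sees
    errors.append(cursor - 0.0)       # error relative to the true target at 0 deg
    # the implicit system learns from the mismatch between the cursor and the
    # aimed location, even though the strategy already hits the target
    prediction_error = cursor - aim
    x = A * x - B * prediction_error

print(f"target error, trial 1:  {errors[0]:+.1f} deg")
print(f"target error, trial 80: {errors[-1]:+.1f} deg (drift despite the strategy)")
```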

The paper I’m looking at today follows up on this result. Implicit error-based learning is thought to be the province of the cerebellum, the primitive, walnut-shaped bit at the back of the brain. The researchers hit upon the idea that if the cerebellum is important for implicit learning, then perhaps patients with cerebellar impairments would actually find it easier to perform the task relative to healthy control participants. To test this, they told both sets of participants to use an explicit strategy in a visuomotor rotation task, just like in the previous study, and measured their ‘drift’ from the ideal reaching movement.

Below you can see the results (Figure 2A in the paper):

Target error across movements

Open circles are all control participants, whereas filled circles are all patients. The black circles at the start show baseline performance – both groups performed pretty well and similarly. Red circles show the first couple of movements after the rotation was applied, and before participants were told to use the strategy. You can see that the participants are reaching completely the wrong way. The blue section shows reaching while using the strategy. Here’s the nice bit: the cerebellar patients are doing better than the controls, as their error is closer to zero, whereas the controls are steadily drifting away from the intended target. Magenta shows when the participants are asked to stop using the strategy and the final cyan markers show the ‘washout’ phase as both groups get back to baseline without an imposed rotation – though the patients manage much more quickly than the controls.

So it looks very much like the cerebellar patients, because their cerebellums are impaired at implicit learning, are able to perform this task better than healthy people. What’s kind of interesting is that other research has shown that cerebellar patients aren’t very good at forming explicit strategies on their own, which is something that healthy people do without even thinking about it. The tentative conclusion of the researchers is that it’s not so much that the implicit and explicit systems are completely separate, but that the implicit system can inform the development of explicit strategies – which is impaired if the cerebellum isn’t working properly.

I didn’t like everything in this paper. I was particularly frustrated with the methods section: it wasn’t clear whether the images shown to participants were on a screen in front of them or on a screen placed over the workspace in a virtual-reality setup. There was also a sentence claiming that the cerebellar patients’ performance was ‘less’ than the controls’, when in fact it was better. Other than these minor niggles, though, it’s a really nice paper showing a very cool effect.

--

Taylor JA, Klemfuss NM, & Ivry RB (2010). An Explicit Strategy Prevails When the Cerebellum Fails to Compute Movement Errors. Cerebellum. PMID: 20697860

Images copyright © 2010 Taylor, Klemfuss & Ivry

Wednesday, 21 July 2010

Lazy beats sloppy

ResearchBlogging.orgToday I give in to my inner lazy person (who is, in fact, quite similar to my outer lazy person) and talk about a paper after I’ve just been to a journal club, rather than before. The advantages are that I was reading the paper anyway and I’ve just had an hour of discussion about it so I don’t actually have to think of things to say about it myself. The disadvantages are that, um, it’s lazy? And that’s bad? Perhaps. But I still think it’s better, as we shall see, than sloppy.

The premise of the paper harks back to my earlier post on visual dominance and multisensory integration. It’s been well known in the literature for a while that if you flash a couple of lights while at the same time playing auditory beeps, an interesting little illusion occurs. If participants are asked to count the number of flashes, and they’re the same as the number of beeps, then they almost always get the answer right. But if there are two flashes and one beep, or one flash and two beeps, then they’re much more likely to say there was one or two flashes respectively. The figure below (Figure 1 in the paper) illustrates this:

Illusion when the hand is at rest

In the figure, you can see that the bars for one beep and one flash (far left black bar) and two beeps and two flashes (far right white bar) are at heights 1 and 2 respectively – that is, the number of perceived flashes is just what you’d expect: one for one flash, two for two flashes. However, the middle bars, which show the one beep/two flash and two beep/one flash conditions, are at intermediate heights, showing the presence of the illusion. This figure actually demonstrates the first problem with the paper, which is that the figures are pretty difficult to interpret. I know I wasn’t alone in the lab in finding them confusing.

What the authors were interested in is whether a goal-directed movement could alter visual processing, and they used the illusion to probe this. Participants had to make point-to-point reaches from a start point to a target. During the reach their susceptibility to the illusion was tested at the target point – but the test began a variable time away from the start of the movement, between 0 and 250 ms. That is: sometimes the flashes and beeps occurred at the start of the movement when the arm was moving slowly, sometimes when it was half way through and thus moving faster, and sometimes at the end when it was moving slowly again.

The experimenters found that, when there were two flashes and one beep, participants were less likely to see an illusion during the middle part of their movement than during the beginning and end. That is, they were more likely to get it right when they were moving faster. The trouble starts when you look a bit closer at the effect they’ve got – it’s pretty weak. There seems to be a lot of noise in the data, and the impression that they’re grasping at straws a little isn’t helped by the aforementioned sloppy figures.

Having said that, the stats do hold up. What might be the explanation for this kind of effect? The multisensory integration argument is that the sensory modality (e.g. vision) with the least noise should be the one that is prioritized. So the claim is that when the arm is moving quickly, there’s more noise in the motor system compared with the visual system and thus you’re better at determining how many flashes there are. I’m not sure I buy this; the illusion is about the visual and auditory systems, after all. I’m not sure I get why you’d be less susceptible to the illusion when you’re moving than when you’re not moving, for example. The authors claim that the limb movement “requires extensive use of visual information” but again I’m not so sure. When we reach for objects we generally take note of where our arm is, look at the object and then move the arm to the object without looking at the arm again.
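
For reference, the standard ‘least noisy modality wins’ claim comes from reliability-weighted cue combination, which looks like this (textbook inverse-variance weighting, nothing specific to this paper; the numbers are made up):

```python
def combine_cues(mu_a, var_a, mu_b, var_b):
    """Reliability-weighted cue combination: the less noisy estimate gets
    the larger weight, and the combined variance is always smaller."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return mu, var

# e.g. a visual estimate of 2 flashes vs an auditory estimate of 1 event:
print(combine_cues(mu_a=2.0, var_a=0.1, mu_b=1.0, var_b=0.5))  # vision reliable -> near 2
print(combine_cues(mu_a=2.0, var_a=0.5, mu_b=1.0, var_b=0.1))  # vision noisy -> near 1
```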

So, a weak effect that isn’t well explained. That wouldn’t be so bad, but the clarity of the paper is also lacking. There’s also the question of why, if they had such a weak effect, they didn’t do another experiment or two to tease out what was really going on. I do think the slightly larger problem here is the review process at PLoS ONE. It’s open access so anyone can read it free online, which I am very much in favour of, but the review is geared towards assessing the methods and results of a paper rather than the introduction and discussion. I go back and forth over whether this is a good thing. Some journals reject papers based on novelty (a.k.a. coolness), whereas PLoS ONE strives to accept well-performed science regardless of how ‘interesting’ (and I use the term in quotes advisedly) the result is.

In this case I think that, while the science is good, it would be a much better paper if it went a bit more into depth with a couple of extra experiments exploring these effects more carefully – and if it had figures that were perhaps a bit easier to comprehend.

--

Tremblay L, & Nguyen T (2010). Real-time decreased sensitivity to an audio-visual illusion during goal-directed reaching. PloS one, 5 (1) PMID: 20126451

Image copyright © 2010 Tremblay & Nguyen

Monday, 19 July 2010

Far out is not as far out as you think

Proprioception is the sense of where your body is in space. It is one of several senses the brain uses to figure out where your limbs and the rest of you are, along with vision and the vestibular system of the inner ear (though the latter is more important for balance). The proprioceptive signal comes from receptors that sense the lengths of muscles, the positions of joints, and how much the skin has been stretched.

How, if at all, does the accuracy and precision of this information vary across different tasks and limb configurations? To test this, the authors of today’s study got their participants to perform three experimental tasks that involved matching perceived limb position without being able to see their arm. In the first task, participants used a joystick to rotate a virtual line on a screen positioned over their limb until they decided that it was in the same direction as their forearm. In the second task, they used a joystick to move a dot around until they decided that it was over their index finger. In the third task, they again saw a virtual line on the screen, but this time they had to actively move their forearm until they decided they were in line with it.

The results were kind of interesting: in all three cases, participants tended to overestimate the position of their limbs when they were at extremes; i.e. when they were more flexed they assumed they were even more flexed, and when they were more extended they assumed they were even more extended. This is quite confusing to explain, but the figure below (Figure 4A in the paper) should help:

Estimates of arm position from one participant

The black lines are the actual position of the arm of a representative participant in task 1, with flexion on the left and extension on the right. Blue lines are the participant’s estimates of arm position, and the red line is the average of the estimates. You can see that when the arm is flexed the participant guesses that it’s more flexed than it actually is, with the corresponding result for when the arm is extended. The researchers found no differences in accuracy between the three tasks, but they did find differences in precision – participants were much more precise, i.e. the spread of their responses was lower, in the passive fingertip task and the active elbow movement task (tasks 2 and 3).

So what? Well, these results give us an insight into how proprioception works. The authors argue that the bias towards thinking you’re more flexed/extended than you really are comes from the overactivity of joint and skin receptors as the limb reaches its extreme positions. Why might these receptors become overactive at extreme positions? Possibly because it allows us to sense ahead of time when we’re getting to a point of movement that is mechanically impossible for the limb to perform, either because we’re trying to flex it too much or we’re trying to straighten it too much. Push too hard at either extreme – muscles are quite strong – and you could damage the limb. Better for the system to make you stop pushing earlier by giving you a signal that you’re further along than you thought. I think it’s a nice hypothesis.

I quite like this study, as it’s another one of those not-wildly-exciting-but-useful-to-know kinds of papers. While the wildly exciting stuff is great, I think that too often the worthy, low-key stuff like this is unfairly overshadowed. Science is about huge leaps and paradigm shifts much less than it’s about the slow grind of data making possible incremental progress on various questions. And I’m not just saying that because that’s what all my papers are like!

---

Fuentes, C., & Bastian, A. (2009). Where Is Your Arm? Variations in Proprioception Across Space and Tasks Journal of Neurophysiology, 103 (1), 164-171 DOI: 10.1152/jn.00494.2009

Image copyright © 2010 The American Physiological Society

Monday, 12 July 2010

It's better to keep what works than to try something new

It seems I just can’t leave this topic alone. Last week I blogged about a paper on use-dependent learning, which discussed how it’s not only the errors you make that contribute to your learning of a motor task, but that your movements become more similar to movements you’ve already made. Today’s paper deals with something similar, but from a different perspective: that of optimal feedback control.

I discussed OFC in another previous post, but a quick recap of the theory is that to make a movement the brain needs to optimize the motor commands it sends out to control both effort (or noise in the system) and error (i.e. how far off the target you are). So an optimal solution to reaching for a pint in the pub should involve the minimization of both error and effort to acquire the target in a timely manner.
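
In code, a cost function of this kind is nothing mysterious – something like the schematic below, which just trades off terminal error against summed squared motor commands. Real OFC models are stochastic and time-resolved, and the weights and numbers here are invented, so this only shows the shape of the trade-off.

```python
import numpy as np

def movement_cost(positions, commands, target, w_error=1.0, w_effort=1e-3):
    """Schematic OFC-style cost: terminal error plus accumulated effort
    (sum of squared motor commands), each with its own weight."""
    error_cost = w_error * float((positions[-1] - target) ** 2)
    effort_cost = w_effort * float(np.sum(np.asarray(commands, float) ** 2))
    return error_cost + effort_cost

# Two candidate reaches to a pint 0.3 m away: one accurate but forceful,
# one lazy but short of the target; the cost function arbitrates between them.
print(movement_cost(positions=[0.0, 0.15, 0.30], commands=[8, 6, 4], target=0.30))
print(movement_cost(positions=[0.0, 0.10, 0.25], commands=[3, 2, 1], target=0.30))
```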

In the study I’ll discuss today, the authors make the claim that if this optimization happens at all it is local, not global. That is, people tend not to optimize to find the best possible solution, but rather they optimize until they find one that works well enough and then stick to it – even when there’s a better solution overall. To investigate this, the experimenters attached participants to a robotic wrist device that pushed their wrist back and forth at a certain frequency. Participants saw a visual target on the screen and a cursor representing their wrist amplitude; they had to keep the amplitude below a certain level to keep the cursor in the target.

The task was rather cunningly set up so that the participants could perform it in one of two ways: either by co-contracting their wrist muscles strongly against the perturbation, or by relaxing the muscles, which obviously requires less effort. (For an analogy, imagine riding a bike down a cobbled hill; you can either make the handlebars really stiff or relax and let the jolting push you around a bit, but if you do something in the middle the jolting will make you fall over.) Participants were either given ‘free’ trials where they could choose which strategy to use, or ‘forced’ trials where they were pushed into a certain strategy at the start of the task by visual feedback.

After being given three ‘free’ trials they were then given three ‘forced’ trials in the strategy they didn’t pursue the first time, so if they had freely chosen the ‘relaxed’ strategy, they were pushed into the ‘co-contract’ strategy. Then they were given three more ‘free’ trials and then three more ‘forced’ trials in the other strategy, and finally three more ‘free’ trials. You can see a representative participant in the figure below (part of Figure 2A in the paper):


Co-activation in one representative participant across time

Here the dark areas are areas of low movement amplitude at certain levels of maximum voluntary co-activation – i.e. they’re the areas you want to stay in to perform the task correctly. If you co-contract too much or too little, you’ll end up in the white area in the middle and you’ll fail the task. The traces show the five sets of trials: the first ‘free’ set is white, then the first ‘forced’ set is blue, then the next ‘free’ set is green, then the next ‘forced’ set is yellow, and the final ‘free’ set is red. What you can see clearly from this graph is that in each ‘free’ set participants tended to stick with whichever strategy they had been pushed into during the previous set of ‘forced’ trials, regardless of whether it was actually the lower-effort solution. That is, subjects tended to do what they’d done before, whether or not it was a better solution.

Sound familiar? Like in use-dependent learning, participants tended to do things they’d already done rather than make a new solution. And again, it makes sense to me that this would happen. The authors in this paper argue that the brain is forming ‘motor memories’ that are also used in the optimization process, and that the optimization itself is thus local and not global. I guess I can buy that, but only in the sense that these ‘motor memories’ are patterns of activation that have been learnt by the network. It takes metabolic energy to create new connections and learn a new pattern, so any optimization process would have to take this into account along with error and effort.
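
The local-versus-global point is easy to visualize with a toy effort landscape. The one below has two valleys (a cheap ‘relaxed’ one and a costlier ‘co-contracted’ one) and a purely local, gradient-following optimizer: wherever you start – say, wherever the ‘forced’ trials left you – you settle into the nearest valley, not the cheapest one. The landscape and numbers are entirely made up for illustration.

```python
import numpy as np

def effort(c):
    """Toy effort landscape over co-contraction level c in [0, 1]: two valleys
    (relaxed near c ~ 0.1, co-contracted near c ~ 0.7), the relaxed one cheaper."""
    return 10.0 * (c - 0.15) ** 2 * (c - 0.75) ** 2 + 0.3 * c

def local_descent(c0, lr=0.02, steps=1000):
    """Purely local optimization: follow the numerical gradient downhill from c0."""
    c = c0
    for _ in range(steps):
        grad = (effort(c + 1e-5) - effort(c - 1e-5)) / 2e-5
        c = float(np.clip(c - lr * grad, 0.0, 1.0))
    return c

for start in (0.2, 0.8):                 # where the 'forced' trials left you
    c = local_descent(start)
    print(f"start {start}: settle at c = {c:.2f}, effort = {effort(c):.3f}")
```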

It might even explain why we keep making straight-line movements in situations where they’re not optimal: if you’ve moved in straight lines all your life because it’s an efficient and effective way to move, and you’re suddenly placed in an environment where moving in a straight line is more effortful and therefore non-optimal, it’s going to be very difficult to unlearn that deep network optimization you’ve been building your whole life.

There’s more to the paper than I’ve covered here; I think it’s great.

---

Ganesh, G., Haruno, M., Kawato, M., & Burdet, E. (2010). Motor memory and local minimization of error and effort, not global optimization, determine motor behavior Journal of Neurophysiology DOI: 10.1152/jn.01058.2009

Image copyright © 2010 The American Physiological Society

Thursday, 8 July 2010

Motor learning changes where you think you are

I’ve covered both sensory and motor learning topics on this blog so far, and here’s one that very much mashes the two together. In earlier posts I have written about how we form a percept of the world around us, and about our sense of ownership of our limbs. In today’s paper the authors investigate the effect of learning a motor task on sensory perception itself.

They performed a couple of experiments, in slightly different ways, which essentially showed the same result – so I’ll just talk about the first one here. Participants held a robotic device and made point-to-point reaches in three phases (null, force field and aftereffect), separated by perceptual tests designed to assess where they felt their arm to be. The figure below (Figure 1A in the paper) shows the protocol and the reaching error results:

Motor learning across trials

In the null phase, as usual, participants reached without being exposed to a perturbation. In the force field phase, the robot pushed their arm to the right or to the left (blue or red dots respectively), and you can see from the graph that they made highly curved movements to begin with and then learnt to correct them. In the aftereffect phase, the force was removed, but you can still see the motor aftereffects from the graph. So motor learning definitely took place.

But what about the perceptual tests? It turns out that participants’ estimation of where their arm was changed after learning the motor task. In the figure below (Figure 2B and 2C in the paper) you can see in the left graph that after the force field (FF) trials, hand perception shifted in the opposite direction to the force direction. [EDIT: actually it's in the same direction; see the comments section!] This effect persisted even after the aftereffects (AE) block.


Perceptual shifts as learning occurs

What I think is even more interesting is the graph on the right. It shows not only the right and left (blue and red) hand perceptions, but also the hand perception after 24 hours (yellow) – and, crucially, the hand perception when participants didn’t make the movements themselves but allowed the robot to move them (grey). As you can see, in that passive condition there’s no perceptual shift. The shift only appears when participants make active movements through the force field, which means that the change in sensory perception is closely linked to learning a motor task.

In some ways this isn’t too surprising, to me at least. In some of my work with Adrian Haith (happily cited by the authors!), we developed and tested a model of motor learning that requires changes to both sensory and motor systems, and showed that force field learning causes perceptual shifts in locating both visual and proprioceptive targets; you can read it free online here. The work in this paper seems to shore up our thesis that the motor system takes into account both motor and sensory errors during learning.

Some of the work I’m dabbling with at the moment involves neuronal network models of motor learning and optimization. This kind of paper, showing the need for changes in sensory perception during motor learning, throws a bit of a spanner into the works of some of that. As they stand, the models tend to treat sensory input as static and merely change motor output as learning progresses. Perhaps we need to think a bit more carefully about that.

---

Ostry DJ, Darainy M, Mattar AA, Wong J, & Gribble PL (2010). Somatosensory plasticity and motor learning. The Journal of Neuroscience, 30 (15), 5384-93 PMID: 20392960

Images copyright © 2010 Ostry, Darainy, Mattar, Wong & Gribble

Monday, 5 July 2010

Baby (not quite) steps

Many non-scientists misunderstand the basic way science works. While there are indeed huge discoveries that fundamentally change the way we think about things, the vast majority of published papers are a steady plod onwards, adding very modest amounts to the staggering array of human knowledge. Often seismic shifts in scientific opinion don’t come from single great discoveries but from many scientists reading the literature, arguing among themselves and coming to different conclusions through the slow burn of new thoughts and experiments. Such is the case with this paper: it is no Nobel prize-winner but a small and useful addition to the literature.

Also, it is about babies. Yay babies!

Babies: hard to test but fun

Babies are hard to test. This is true for several reasons: they can’t give informed consent to studies, they can’t follow instructions and they can’t give verbal feedback. But that doesn’t stop people trying. Parents can give consent for their children; behaviours can be elicited by non-verbal means and recorded in lieu of verbal feedback. And of course it’s interesting to study babies in the first place to look at the development of the motor system.

In this paper, the authors look at clinical observation of four motor behaviours: abdominal progression (i.e. crawling), sitting motility, reaching and grasping motility. The authors identify two distinct stages in infant motor development after birth: primary variability and secondary variability. Primary variability is characterized by general movements of the whole body that don’t appear to be geared towards accomplishing a task. Secondary variability is much more task-specific and can be adapted to specific situations. It’s the transitions from primary to secondary variability in these motor behaviours that the authors are interested in.

To test when their infant participants began to make adaptive movements, they tested various children at various intervals ranging from 3 months to 18 months. Different types of movements were induced – for example, trying to get children to reach for toys or crawl towards them. The movements were recorded on video and two of the study’s authors scored the videos for whether the movements showed ‘no selection’ or ‘adaptive selection’. Since I am interested mainly in reaching, here are the results from the reaching scores (Figure 4 in the paper):

Selection in infant reaching movements across development

You can see that as the age of the baby increases in months, more ‘no selection’ movements occur (hatched bars). Then between 6-8 months you start getting ‘adaptive selection’ movements (black bars), which increase significantly in frequency between 6 and 8 months and between 12 and 15 months.

When rating videos like this, the reliability of the rating is very important. The authors tested inter-rater reliability by having two raters, but also intra-rater reliability by having the same rater rate the video once and then again after a month. Mostly they found that the reliability was very high, though it seems to me that they should perhaps have had a couple more raters in there just in case. To their credit, they do admit this as a limitation of their study.

So assuming that the rating is reliable, what do we now know? Well, it’s kind of interesting that for the four behaviours observed, the onset from the video ratings is a few months later in all cases than when you do neurophysiological testing (as people have done before). That is, if you measure brain activity (see the first picture in this post!) or muscle activity, you can observe patterns of motor activity that become noticeably more synchronized way before you can observe these changes by eye.

It’s useful to know this because you can’t hook every baby that comes into your busy clinic up to a set of wires to record their brain and muscle activity, nor spend hours analyzing the results from these investigations. What you can do as a busy clinician is take note of the types of movements and when the transitions appear – as the authors note at the end, it would be interesting to do this kind of study on the ages of transition in infants with a high probability of developing motor disorders (such as cerebral palsy).

Overall verdict: a nice short study with some possible clinical impact.

---

Heineman, K., Middelburg, K., & Hadders-Algra, M. (2010). Development of adaptive motor behaviour in typically developing infants Acta Paediatrica, 99 (4), 618-624 DOI: 10.1111/j.1651-2227.2009.01652.x

Baby EEG image copyright © 2010 Apple Inc.

Image from paper copyright © 2009 Heineman, Middleburg & Hadders-Algra

Wednesday, 30 June 2010

Errors and use both contribute to learning

Learning how to make a reaching movement is, as I’ve said before, a very hard problem. There are so many muscles in the arm and so many ways we can get from one point to another that there is, for all intents and purposes, an infinite set of ways the brain could choose to send motor commands to achieve the same goal. And yet what we see consistently from people is a very stereotyped kind of movement.

How do we learn to make reaching movements in the presence of destabilizing perturbations? The standard way of thinking about this assumes that if you misreach, your motor system will notice the error and do better next time, whether through recalibration of the sensory system or through a new cognitive strategy to better achieve the goal. But this paper from Diedrichsen et al. (2010) postulates a learning mechanism in addition to error-based learning: something they call use-dependent learning.

The basic idea is that if you’re performing a task, like reaching to an object straight ahead, and you’re constantly getting pushed off to the side, you’ll correct for these sideways perturbations using error-based learning. But you’re also learning from sheer repetition: the more often you make a particular movement, the more each new movement comes to resemble the ones you’ve already made.
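
A minimal sketch of that idea, for a task-redundant dimension where there is no error signal at all: the planned direction is simply attracted towards whatever movement was just produced, so a block of passively tilted movements leaves a lasting shift. This is my own simplified rendering, not the authors’ fitted model; the pull strength and tilt are invented, and there’s no forgetting term, so the aftereffect never decays here as it would in real data.

```python
import numpy as np

rng = np.random.default_rng(2)
B_use = 0.2          # strength of the pull towards the last movement (invented)
plan = 0.0           # planned angle in the task-redundant dimension (degrees)
angles = []

for trial in range(60):
    if 10 <= trial < 30:
        movement = 20.0 + rng.normal(0.0, 1.0)   # passively tilted trajectory
    else:
        movement = plan + rng.normal(0.0, 1.0)   # free reach from the current plan
    # no error-based term: this dimension never affects task success, so the
    # plan is only attracted towards the movement just experienced
    plan = plan + B_use * (movement - plan)
    angles.append(movement)

print(f"free reaches before the tilt: {np.mean(angles[:10]):+.1f} deg")
print(f"free reaches after the tilt:  {np.mean(angles[40:]):+.1f} deg (shift persists)")
```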

The authors demonstrate this with some nice experiments using a redundant movement task – rather than moving a cursor to a target as in standard motor control tasks, participants had to move a horizontal bar up the screen to a horizontal bar target. The key thing is that it was only the vertical movement that made the bar move; horizontal movements had no effect. In the first experiment, participants initially reached to the bar before being passively moved by a robotic system in one of two directional tilts (left or right) and were then allowed to move by themselves again. The results are below (Figure 1 in the paper):


Redundant reaching task

You can see that after the passive movement was applied, the overall angle changed depending on whether it was to the left (blue) or right (red). Remember that the tilt was across the task-redundant (horizontal) dimension, so it didn’t cause errors in the task at all! Despite this, participants continued to reach in the way that they’d been forced to do after the passive movement was finished – demonstrating use-dependent learning.

To follow this up, the authors did two more experiments. The first showed that error-based and use-dependent learning are separate processes and occur at the same time. They used a similar task but this time rather than a passive movement participants made active reaches in a left- or right-tilting ‘force channel’. This time the initial angle results showed motor aftereffects that reflected error-based learning, while the overall angle showed similar use-dependent effects as in the first experiment.

Finally they investigated use-dependent learning in a perturbation study. As participants moved the bar toward the target they had to fight against a horizontal force that was proportional to their velocity (i.e. it got bigger as they went faster). Unlike in a ‘standard’ perturbation study (a reach to a point target, where participants can see their horizontal error), the horizontal deviations in the redundant task weren’t corrected after learning. However, the initial movement directions in the redundant task were in the direction of the force field – meaning that as participants learnt the task, the planned movement direction changed through use-dependent learning.

I think this is a really cool idea. Most studies focus on error as the sole basis for driving motor learning, but thinking about use-dependent learning makes sense because of what we know about how the brain makes connections through something called Hebbian learning. Basically, though an oversimplification: ‘what fires together, wires together’, which means that connections tend to strengthen if they are used a lot and weaken if they are not. So it seems reasonable (to me at least!) that if you make a movement, you’re more likely to make another one like it than come up with a new solution.
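
For the curious, the classic Hebbian rule really is that simple – the sketch below repeatedly presents one (noisy) activity pattern and strengthens connections between co-active units, with a decay term so the weights don’t blow up. It’s the textbook rule with made-up parameters, not a model from this paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_units = 5
w = np.zeros((n_units, n_units))     # connection strengths between units
eta, decay = 0.1, 0.01               # learning rate and weight decay (invented)

pattern = np.array([1.0, 1.0, 0.0, 0.0, 1.0])   # a frequently repeated activity pattern

for _ in range(200):
    x = pattern + 0.1 * rng.standard_normal(n_units)   # noisy repetition of the pattern
    w += eta * np.outer(x, x) - decay * w               # 'fires together, wires together'

np.fill_diagonal(w, 0.0)             # ignore self-connections for display
print(np.round(w, 1))                # strongest weights link the co-active units
```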

It also might explain something about optimal feedback control that I’ve been thinking about for a while since seeing some work from Paul Gribble’s lab: we often talk about the motor system minimizing the energy required to perform a reach, but their work has shown pretty conclusively that the motor system prefers straight reaches even if the minimum energy path is decidedly not straight. There must therefore be some top-down mechanism that prioritises ‘straightness’ in the motor system, even if it’s not the most ‘optimal’ strategy for the task at hand.

Lots to chew over and think about here. I haven’t even covered the modelling work the authors did, but it’s pretty nice.

---

Diedrichsen J, White O, Newman D, & Lally N (2010). Use-dependent and error-based learning of motor behaviors. Journal of Neuroscience, 30 (15), 5159-66 PMID: 20392938

Image copyright © 2010 Diedrichsen, White, Newman & Lally

Friday, 25 June 2010

You're only allowed one left hand

In previous posts I’ve asked how we know where our hands are and how we combine information from our senses. Today’s paper covers both of these topics, and investigates the deeper question of how we incorporate this information into our representation of the body.

Body representation essentially splits into two parts: body image and body schema. Body image is how we think about our body, how we see ourselves; disorders in body image can lead to anorexia or myriad other problems. Body schema, on the other hand, is how our brain keeps track of the body, below the conscious level, so that when we reach for a glass of water we know where we are and how far to go. There’s some fascinating work on body ownership and embodiment but you can read about that in the paper, as it’s open access!

The study is based on a manipulation of the rubber hand illusion, a very cool perceptual trick that’s simple to perform. First, find a rubber hand (newspaper inside a rubber glove works well). Second, get a toothbrush, paintbrush, or anything else that can be used to produce a stroking sensation. Third, sit your experimental participant down and stroke a finger on the rubber hand while simultaneously stroking the equivalent finger on the participant’s actual hand (make sure they can’t see it!). These strokes MUST be synchronous, i.e. applied with the same rhythm. The result, after a little while, is that the participant starts to feel like the rubber hand is actually their hand! It’s a really fun effect.

There are of course limitations to the rubber hand illusion – a fake static hand isn’t the best thing for eliciting illusions of body representation, as it’s obviously fake, no matter how strongly you feel the hand is yours. Plus it’s hard to do movement studies with static hands. The researchers got around this problem by using a camera/projection system to record an image of each participant’s hand and play it back in real time. They had their participants actively stroke a toothbrush rather than having the stroking passively applied to them, and then showed two images of the hand, to the left and right of the actual (unseen) hand position.

The left, right or both hands were shown synchronously stroking; the other hand in the first two conditions was shown asynchronously stroking by delaying the feedback from the camera. The researchers asked through questionnaires whether participants felt they ‘owned’ each hand. You can see these results in the figure below (Figure 3B in the paper):

Ownership rating by hand stroke condition

For the left-stroke (LS) and right-stroke (RS) conditions, only the left or right image respectively was felt to be ‘owned’ whereas in the both-stroke (BS) condition, both hands were felt to be ‘owned’. This result isn’t too surprising; it’s a nice strong replication of the rubber hand results other researchers have found. Where it gets interesting is that when participants were asked to make reaches to a target in front of them they tended to reach in the right-stroke and left-stroke conditions as if the image of the hand they felt they ‘owned’ was actually theirs. That is, they made pointing errors consistent with what you would see if their real hand had been in the location of the image.

In a final test, participants in the both-stroke condition were asked to reach to a target in the presence of distractors to its left and right. Usually people will attempt to avoid distractors, even when it’s just an image or a dot that they are moving around a screen, and the distractors are just lights. However in this case participants had no qualms about moving one of the images through the distractors to reach the target with the other, even though they claimed ‘ownership’ of both.

This last point leads to an interesting idea the authors explore in the discussion section. While it seems to be possible to incorporate two hands simultaneously into the body image, this doesn’t appear to translate to the body schema. So you might be able to imagine yourself with extra limbs, but when it comes to actively moving them the motor system seems to pick one and go with it, ignoring the other (even when it hits an obstacle).

To my mind this is probably a consequence of the brain learning over many years how many limbs it has and how to move them efficiently, and any extra limbs it may appear to have at the moment can be effectively discounted. It is interesting to see how quickly the schema can adapt to apparent changes in a single limb however, as shown by the pointing errors in the RS and LS movement tasks.

I wonder if we were born with more limbs, would we learn gradually how to control them all over time? After all, octopuses manage it. Would we still see a hand dominance effect? (I’m not sure if octopuses show arm dominance!) And would we, when a limb was lost in an accident, still experience the ‘phantoms’ that amputees report? I haven’t touched on phantoms this post, but I’m sure I’ll return to them at some point.

Altogether a simple but interesting piece of work, which raises lots of interesting questions, like good science should. (Disclaimer: I know the first and third authors of this study from my time in Nottingham. That wouldn't stop me saying their work was rubbish if it was though!)

---

Newport, R., Pearce, R., & Preston, C. (2009). Fake hands in action: embodiment and control of supernumerary limbs Experimental Brain Research DOI: 10.1007/s00221-009-2104-y

Image copyright © 2009 Newport, Pearce & Preston

Wednesday, 23 June 2010

The cost of uncertainty

Back from my girlfriend-induced hiatus and onto a really interesting paper published ahead of print in the Journal of Neurophysiology. This work asks some questions, and postulates some answers, very similar to the line of thinking I’ve been going down recently – which is, of course, the main reason I find it interesting! (The other reason is that they used parabolic flights. Very cool.)

One theory of how the brain performs complex movements in a dynamical environment – like, say, lifting objects – is known as optimal feedback control (OFC). The basic idea is that the brain makes movements that are optimized to the task constraints. For example, to lift an object, the control system might want to minimize the amount of energy used* and at the same time lift the object to a particular position. In OFC we combine these constraints into something called a cost function: how much the action ‘costs’ the system to perform. To optimize the movement, the system simply works to reduce the total cost.

But where does the system get information about the limb and the task from in the first place so as to optimize its control? There are two sources for knowledge about limb dynamics. The most obvious is reactive: feedback from the senses, from both vision and proprioception (the sense of where the arm is in space). But feedback takes a while to travel to the brain and so another source is needed: a predictive source of knowledge, an internal model of the task and limb dynamics. The predictive and reactive components can be combined in an optimal fashion to form an estimate of the state of the limb (i.e. where it is and how fast it’s going). This ‘state estimate’ can then be used to calculate the overall cost of the movement.
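
The ‘optimal combination’ step is essentially a Kalman-style weighting of prediction against feedback. Here is a one-step scalar version (schematic only; real models do this over full state vectors through time, and the variances below are invented) – note how an uncertain prediction, as you would have in a brand-new task, shifts the weight onto sensory feedback.

```python
def estimate_state(prediction, pred_var, feedback, fb_var):
    """One-step, scalar Kalman-style update: combine an internal-model
    prediction with sensory feedback, each weighted by its reliability."""
    gain = pred_var / (pred_var + fb_var)          # how much to trust the feedback
    estimate = prediction + gain * (feedback - prediction)
    est_var = (1.0 - gain) * pred_var
    return estimate, est_var

# Well-practised task: reliable prediction, feedback barely shifts the estimate.
print(estimate_state(prediction=0.30, pred_var=0.01, feedback=0.35, fb_var=0.04))
# Brand-new task (e.g. first microgravity trials): uncertain prediction, so the
# estimate leans much more heavily on the sensory feedback.
print(estimate_state(prediction=0.30, pred_var=0.16, feedback=0.35, fb_var=0.04))
```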

In today’s paper the authors argue that at the start of a new task, a new internal model has to be learnt, or an old one modified, to deal with the new task demands. So far so uncontroversial. What’s new here is the claim that the cost function being optimized for actually changes when dealing with a new task – because there is higher uncertainty in the internal prediction so the system is temporarily more reliant on feedback. They have some nice data and models to back up their conclusion.

The task was simple: participants had to grip a block and move it up or down from a central position while their position and grip force were recorded. After they’d learnt the task at normal gravity, they had to perform it in microgravity during a parabolic flight, which essentially made their arm and the object weightless. Their grip force increased markedly even though the object was now weightless, and kinematic (e.g. position, velocity) measures changed too; movements took more time, and the peak acceleration was lower. Over the course of several trials the grip force decreased again as participants learnt the task. You can see some representative kinematic data in the figure below (Figure 4 in the paper):

Kinematic data from a single participant


Panels A-D show the average movement trace of one participant in normal (1 g) and microgravity (0 g) conditions, while panels E and F show the changes in acceleration and movement time respectively. The authors argue that the grip force changes at the beginning of the first few trials point towards uncertainty in the internal prediction, which results in the altered kinematics.

To test this idea, they ran a simulation based on a single-joint model of the limb using OFC and the optimal combination of information from the predictive system and sensory feedback. What they varied in this model was the noise, and thus the reliability, in the predictive system. The idea was that as the prediction became less reliable, the kinematics should change to reflect more dependence on the sensory feedback. But that's not quite what happened, as you can see from the figure below (Figure 8 in the paper):

Data and simulation results


Here the graphs show various kinematic parameters. In black and grey are the mean data points from all the participants for the upward and downward movements. The red squares show the parameters the simulation came up with when noise was injected into the prediction. As you can see, they're pretty far off! So what was the problem? Well, it seems that you need to change not only the uncertainty of the prediction but also the cost function that is being optimized. The blue diamonds show what happens when you manipulate the cost function (by increasing the parameter shown as alpha); suddenly the kinematics are much closer to the way people actually perform.

Thus, the conclusion is that when you have uncertainty in your predictive system, you actually change your cost function while you're learning a new internal model. I find this really interesting because it's a good piece of evidence that uncertainty in the predictive system feeds into the selection of a new cost function for a movement, rather than the motor system just sticking with the old cost function and continuing to bash away.

It's a nice paper, but I do wonder why the authors went to all the trouble of using parabolic flights to get the data here. If what they're saying is true and any uncertainty in the internal model/predictive system is enough to make you change your cost function, this experiment could have been done much more simply – and for many more than the 30 trials they were able to do under microgravity – by just using a robotic system. Perhaps they didn't have access to one, but even so it seems a bit of overkill to spend money on parabolic flights, which are so limited in duration.

Overall though it's a really fun paper with some interesting and thought-provoking conclusions.

*To be precise there is some evidence that it's not the amount of energy used that gets minimized, but the size of the motor command itself (because a bigger command has more variability due to something called signal-dependent noise... I'm not going to go into that though!).

---

Crevecoeur, F., McIntyre, J., Thonnard, J., & Lefevre, P. (2010). Movement Stability under Uncertain Internal Models of Dynamics Journal of Neurophysiology DOI: 10.1152/jn.00315.2010

Images copyright © 2010 The American Physiological Society

Wednesday, 16 June 2010

Where you look affects your judgement

Our ability to successfully interact with the environment is key to our survival. Much of my work involves figuring out how the brain sends the correct commands to the upper limb that allow us to control it and reach for objects around us. Considering how complex the musculature of the arm is, and how ever-changing the world around us is, this is a non-trivial task. One fundamental question that needs to be solved by the brain’s control system is: How do you know where something is relative to your hand?

It’s no good sending a complex set of commands to reach for an object if you don’t know how to relate where your hand is right now to where the object is. There are several theories as to how the brain might perform this task. In one, the object’s location on the retina is translated into body-centred coordinates (i.e. where it is relative to the body) by sequentially adding the eye position and the head position. In another, the object is stored in a gaze-centred reference frame that has to be recalculated after every eye movement.
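
In pseudocode-ish Python, the two accounts differ in what gets stored and when it gets updated – a one-dimensional toy version of my own, with made-up angles, just to make the contrast concrete:

```python
def body_centred(target_on_retina, eye_in_head, head_on_body):
    """Account 1: translate the retinal location into body-centred coordinates
    once, by adding eye and head position; later eye movements don't matter."""
    return target_on_retina + eye_in_head + head_on_body

def remap_gaze_centred(target_re_gaze, eye_movement):
    """Account 2: store the target relative to gaze and remap the stored value
    every time the eyes move."""
    return target_re_gaze - eye_movement

# A target 10 deg right of fixation, eyes 15 deg left of the head, head straight:
stored_body = body_centred(10.0, -15.0, 0.0)                 # -5 deg, fixed from now on
stored_gaze = remap_gaze_centred(10.0, eye_movement=20.0)    # after a 20 deg rightward
print(stored_body, stored_gaze)                              # saccade: -10 deg re gaze
```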

There’s already some evidence for the second account – we tend to overestimate how far in our peripheral vision a target sits, and so we actually make pointing errors when asked to reach to where we thought it was. So it seems as if we dynamically update our estimate of where a target is when we are asked to make active movements towards them. In this paper the researchers were interested in whether this was also true for perceptual estimates. That is, when you are simply asked to state the position of a remembered target, does that also depend on gaze shift?

To answer this question, the authors performed an experiment with two different kinds of targets: visual and proprioceptive. (If you’ve been paying attention, you’ll know that proprioception is the sense of where your body is in space.) The visual target was just an LED set out in front of the participant; the proprioceptive target was the participant’s own unseen hand moved through space by a robot. Before the target appeared, participants were asked to look at an LED either straight in front of them, or 15˚ to the left or right. The targets would then appear (or the hand would be moved to the target location), disappear (or the hand would be moved back), and then the participant’s hand would be moved out again to a comparison location. They then had to judge whether their current hand location was to the left or right of the remembered target.

Here’s where it gets interesting. Participants were placed into one of two conditions: static or dynamic. In the static condition, participants kept their gaze fixed on an LED to the left, to the right or straight ahead of their body midline. In the dynamic condition, they gazed straight ahead and were asked to move their eyes to the left or right LED after the target had disappeared. In a gaze-dependent system, this should introduce errors as the target location relative to the hand would be updated relative to gaze after the eye movement. In a gaze-independent system, no errors should be evident as the target position was already calculated before the eye movement.

Bias in judgements of visual and proprioceptive targets

The figure above (Figure 4a and 4b in the paper) shows the basic results. Grey is the right fixation while black is the left fixation; circles show the static condition while crosses show the dynamic condition. You can immediately see that in both conditions, for both targets, participants made estimation errors in the opposite direction to their gaze: errors to the left for right gaze, and errors to the right for left gaze. So it does look like perceptual judgements are coded and updated in a gaze-centred reference frame. To hammer home their point, the next figure (Figure 5 in the paper) shows the similarity between the judgements in the static and dynamic conditions:

Static vs. dynamic bias

As you can see, the individual judgements match up very closely indeed, which gives even more weight to the gaze-centred account.

So what does this mean? Well: it means that whenever you move your eyes, whether you are planning an action or not, your brain’s estimation of where objects are in space relative to your limbs is remapped. The reason that the errors this generates don’t affect your everyday life is that usually when you want to reach for an object you will look directly at it anyway, which eliminates the problems of estimating the position of objects on the periphery of your vision.

I enjoyed reading this paper – and there is much more in there about how the findings relate to other work in the literature – but it was a bit wordy and hard to get through at times. One of the most difficult things about writing, I’ve found, is to try and maintain the balance between being concise and containing enough information so that the result isn’t distorted. Time will tell how I manage that on this blog!

---

Fiehler, K., Rösler, F., & Henriques, D. (2010). Interaction between gaze and visual and proprioceptive position judgements Experimental Brain Research, 203 (3), 485-498 DOI: 10.1007/s00221-010-2251-1

Images copyright © 2010 Springer-Verlag

Friday, 11 June 2010

Moving generally onward

ResearchBlogging.orgThink of a pianist learning how to play a sequence of chords on the piano in one position, and then playing the same sequence of chords three octaves higher. Her arms and hands will be in different positions relative to her trunk, but she’ll still be able to play the same notes. We call this ability to transfer learnt motor skills from one part of the workspace to another generalization.

In today’s paper, the authors investigated how generalization works when you are learning two things at the same time, in different areas of space. The manipulation they chose was amplitude gain: participants reached to a target in a particular direction while the visual feedback of the reach was scaled up or down. So, for example, with a gain of 1.5 participants would have to reach 1.5 times further than normal to hit the target, and with a gain of 0.5 they would have to reach half as far as normal.
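
Using that convention (required reach scales with the gain), here’s a quick illustration; the distances are mine, not the paper’s:

```python
# Illustration of the gain convention described above: the hand movement
# needed to hit the target scales with the imposed gain. The 10 cm
# nominal distance is invented for illustration.

def required_reach(normal_distance_cm, gain):
    """Hand distance needed to hit the target under a given amplitude gain."""
    return gain * normal_distance_cm

for gain in (0.5, 0.8, 1.0, 1.5):
    print(f"gain {gain}: reach {required_reach(10.0, gain):.1f} cm for a nominal 10 cm target")
```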

The researchers trained their participants on two gains (1.5 and 0.8) simultaneously for two different targets, and then tested how the reaches generalized to some untrained targets:


Trained and untrained targets


The thick circles in the figure show the trained targets and the thin circles show the untrained targets. How the participants reached to the untrained targets after training on the trained targets can be used as a measure of how well they generalized their movements.

One obvious worry about generalization when learning two things at once is that the two generalization patterns might conflict and prevent you from learning one of the gains at all. But that’s not what happened. The participants quite happily learnt both gains, and their generalization varied smoothly with distance from the training directions. The result is illustrated by this rather complex-looking graph:


Generalization based on target direction


Don’t be put off though. Just look at the thick black trace, which is the average of all the other black traces. Along the x-axis of the graph is direction in degrees, and along the y-axis is the observed gain, i.e. how far participants reached to the target at that particular position. You can see that at the trained targets at 60˚ (gain 0.8) and 210˚ (gain 1.5) the observed gain is close to the training gain, and as I said above, it varies smoothly between the two as you look at the different untrained targets.

So it’s possible to learn two gains at once, and the amount you generalize varies across the workspace in a smooth way. But scientists aren’t scientists if they’re satisfied with a simple answer. They wanted to know: why’s that? What’s the best model that explains the data, and that is consistent with what we know about the brain? The authors proposed five possible models, but the one they found fit the data best was a relative spatial weighting model.

The idea behind this model is fairly simple. We can quite easily find a generalization pattern for a single gain, and this model combines the two single-gain patterns, weighting them according to the relative distance of the movement direction from the two training directions.
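
Here’s a rough sketch of how such a weighting could work – this is my reading of the idea, not the authors’ fitted model, and the simple linear blend is an assumption:

```python
# Rough sketch of a relative spatial weighting scheme (my reading of the
# idea, not the authors' fitted model). The predicted gain at a test
# direction blends the two trained gains according to how close that
# direction is to each training direction.

def ang_dist(a_deg, b_deg):
    """Smallest angular distance between two directions, in degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

def predicted_gain(test_dir, train_dirs=(60.0, 210.0), gains=(0.8, 1.5)):
    """Weight each trained gain by the proximity of the test direction to it."""
    d0 = ang_dist(test_dir, train_dirs[0])
    d1 = ang_dist(test_dir, train_dirs[1])
    w0 = d1 / (d0 + d1)          # closer to training direction 0 -> more weight on gain 0
    return w0 * gains[0] + (1.0 - w0) * gains[1]

for direction in (60, 100, 135, 170, 210):
    print(f"{direction:3d} deg -> predicted gain {predicted_gain(direction):.2f}")
# The prediction equals the trained gain at the trained directions and
# varies smoothly in between, matching the qualitative shape of the
# generalization curve above.
```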

What does this mean? Well: it gives credence to the idea that the motor system adapts to differing visuomotor gains using something called a ‘mixture-of-experts’ system. Each ‘expert’ module learns one of the gains, and their outputs are then weighted and combined according to an easily-assessed property of the workspace (in this case, where a movement direction lies relative to the two training directions). This modular idea of how the brain works has grown in popularity over the last decade, and this paper is the latest to suggest that there are distinct systems that each learn to be extremely good at one thing and are then weighted together to deal with complex tasks.

That’s it for this week! Today’s post was under 700 words, which beats the first (~950) and the second (~1150!). I’m going to try to keep them shorter rather than longer, but I could do with some feedback on my writing. Comments very welcome.

---

Pearson, T., Krakauer, J., & Mazzoni, P. (2010). Learning Not to Generalize: Modular Adaptation of Visuomotor Gain. Journal of Neurophysiology, 103(6), 2938-2952. DOI: 10.1152/jn.01089.2009

Images copyright © 2010 The American Physiological Society

Tuesday, 8 June 2010

Mood, music and movement

ResearchBlogging.orgWe all know that music can have an effect on our mood (or, to use a mildly annoying linguistic contrivance, can affect our affect). And being in a better mood has been consistently shown to improve our performance on cognitive tasks, like verbal reasoning; the influence of serene music on such tasks is also known as the 'Mozart effect'. What's kind of interesting is that this Mozart effect has also been shown to be effective on motor tasks, like complex manual tracking.

In the last post I talked a bit about adaptation - recalibrating two sensory sources so that they agree with each other again. Say you're reaching to a target under distorted vision, like wearing goggles with prisms in them that make it look like you're reaching further to the right than you actually are; this is known as a visual perturbation. When you reach forward, your sense of where your body is in space (proprioception) tells the brain that your arm has gone straight ahead, but the visual information tells you it has veered to the right. Some recalibration is in order, and over the course of many reaches you gradually adapt your movements so that the two percepts match up again.

There are a couple of stages in motor adaptation. The first stage is very cognitive, when you realise something's wrong and you rapidly change your reaches to reduce the perceived error in your movement. The second stage is much less consciously directed, and involves learning to control your arm with the new signals you are receiving from vision and proprioception. When the prism goggles are removed, you experience what is known as a motor aftereffect: you will now be reaching leftwards, the opposite of what appeared to happen when you were originally given the prisms. Over the course of a few trials this aftereffect will decay as the brain shifts back to the old relationship between vision and proprioception.
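
As a toy illustration of how error-driven adaptation produces an aftereffect, here's a textbook single-rate update rule. This isn't the model in the paper (or any fitted model); the learning rate and perturbation size are made up:

```python
# Toy single-rate, error-driven adaptation: each trial the system
# corrects a fixed fraction of the error it just saw. All numbers are
# illustrative, not from the paper.

learning_rate = 0.2
perturbation = 10.0      # degrees of prism-induced shift
internal_shift = 0.0     # what the motor system has learnt so far

print("adaptation phase (prisms on):")
for trial in range(1, 11):
    error = perturbation - internal_shift       # how far the reach misses
    internal_shift += learning_rate * error     # correct a fraction of it
    print(f"  trial {trial:2d}: error {error:6.2f} deg")

print("aftereffect phase (prisms removed):")
for trial in range(1, 6):
    error = 0.0 - internal_shift                 # now you miss the other way
    internal_shift += learning_rate * error      # and the aftereffect decays
    print(f"  trial {trial:2d}: error {error:6.2f} deg")
```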

All this is very interesting (to me at least!) but what does it have to do with music? Well, today's paper by Otmar Bock looks more closely at how the Mozart effect acts on motor systems by studying the influence of music on motor adaptation. The theory goes that if an improved mood can boost cognitive performance, then the first, cognitive phase of motor adaptation should be facilitated. However, since motor aftereffects are not a conscious cognitive strategy but an unconscious motor recalibration, they should not be affected by the change in mood.

To test this idea, Bock split the participants into three groups and played each group either serene, neutral* or sad music at the beginning of and throughout the experiment. Before listening to the music, after listening for a while and at the end of the study, participants indicated their mood by marking a sheet of paper. While listening to the music, they performed a motor adaptation task: they had to move a cursor to an on-screen target while the visual feedback of the cursor was rotated by 60°. They couldn't see their hand while they did this, so their visual and proprioceptive signals gave different information.

As expected, the music participants listened to affected their mood: the 'sad' group reported a lower emotional valence, i.e. more negative emotions, than the 'neutral' group, which in turn reported a lower valence than the 'serene' group. During the task, as generally happens in these adaptation tasks where the goal is visual (and of course vision is more reliable!), participants adapted their movements so as to reduce the visual error. The figure below (Figure 2 in the paper) shows this process for the three separate groups, where light grey shows the 'serene' group, mid grey shows the 'neutral' group and dark grey shows the 'sad' group:


Adaptation error by group

The first three episodes in the figure show the reaching error during normal unrotated trials (the baseline phase); from episode 4 onwards the cursor is rotated, sending the error up high (the adaptation phase). The error then decreases for all three groups until episode 29, where the rotation is removed - and now the error is reversed as participants reach the wrong way (the aftereffect phase). What's cool about this figure is that it shows no difference at all between the 'neutral' and 'sad' groups, but an obvious difference for the 'serene' group: adaptation is faster for this group than for the others. Also, when the rotation is removed, the aftereffects show no differences between the three groups.

So it does seem that being in a state of high emotional valence (a good mood) can improve performance on the cognitive stage of motor adaptation - and it seems that 'serene' music can get you there. And interestingly, mood appears to have no effect on the less cognitive aftereffect stage (though see below for my comments on this).

The two main, connected questions I have about these results from a neuroscience point of view are: 1. how does music affect mood? and 2. how does mood affect cognitive performance? A discussion of how music affects the brain is beyond the scope of this post (and my current understanding) but since the brain is a collection of neurons firing together in synchronous patterns it makes sense that this firing can be regulated by coordinated sensory input like music. Perhaps serene music makes the patterns fire more efficiently, and sad music depresses the coordination somewhat. I'm not sure, but if the answer is something like this then I'd like to know more.

There are still a couple of issues with the study though. Here are the data on emotional valence (Figure 1A in the paper):


Emotional valence by group at three different stages

What you can see here is that the emotional valence was the same before (baseline) and after (final) the study, and it's only after listening to the music for a while (initial) that the changes in mood are apparent. Does this mean that, as participants continued with the task, their mood levelled out, perhaps as they concentrated more on the task, regardless of the background music? Could this be the reason for the lack of difference in the aftereffect phase? After all, when a perturbation is removed participants quickly notice that something has changed, and I would have thought the cognitive processes would swing into gear again, as at the beginning of the adaptation phase.

Also, it's worth noting from the above figure that valence is not actually improved by serene music, but appears to decrease for neutral and sad music. So perhaps it is not that serene music makes us better at adapting, but that neutral/sad music makes us worse? There are more questions than answers in these data I feel.

Hmm. This was meant to be a shorter post than the previous one, but I'm not sure it is! Need to work on being concise, I feel...

*I'm not exactly sure what the neutral sound effect was as there's no link, but Bock states in the paper that it is "movie trailer sound 'coffeeshop' from the digital collection Designer Sound FX®"

---

Bock, O. (2010). Sensorimotor adaptation is influenced by background music. Experimental Brain Research, 203(4), 737-741. DOI: 10.1007/s00221-010-2289-0

Images copyright © 2010 Springer-Verlag

Friday, 4 June 2010

Visual dominance is an unreliable hypothesis

ResearchBlogging.orgHow do we integrate our disparate senses into a coherent view of the world? We obtain information from many different sensory modalities simultaneously - sight, hearing, touch, etc. - and we use these cues to form a percept of the world around us. But what isn't well known yet is exactly how the brain accomplishes this non-trivial task.

For example, what happens if two senses give conflicting information? How do you adapt and calibrate your senses so that the information you get from one (say, the visual slant of a surface) matches up with the information from the other (the felt slant of that same surface)? In this paper, the investigators set out to answer this question by examining something called the visual dominance hypothesis.

The basic idea is that since we rely so heavily on vision, it takes priority whenever another sense conflicts with it. That is, if you get visual information alongside tactile (touch) information, you will tend to adapt your tactile sense rather than your vision to make the two match up. But here the authors present data and argue in favour of a different hypothesis: reliability-based adaptation, in which the sensory modality with the lowest reliability adapts the most. Thus in low-visibility situations you would become more reliant on touch, and vice versa.

Two experiments are described in this paper: a cue-combination experiment and a cue-calibration experiment. The combination experiment measured the reliability of the sensory estimators, i.e. vision and touch. The calibration experiment was designed using the estimates from the combination study to test whether the visual dominance or reliability hypotheses best explained how the sensory system adapts.

In the combination experiment, participants had to reach out and touch a virtual slanted block in front of them, and then say whether they thought it was slanted towards or away from them. They received visual feedback, haptic feedback, or both (i.e. they could see the object, touch it, or both). The cool thing about the setup is that visual reliability could be varied independently of haptic reliability, which enabled the experimenters to find a suitable visual-haptic reliability ratio for each participant for use in the calibration experiment. They settled on parameters that set the visual-to-haptic reliability ratio at either 3:1 or 1:3, so vision was either three times as reliable as touch or the other way round.
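
For reference, the standard reliability-weighted (maximum-likelihood) combination rule looks like this. It's the generic textbook formulation rather than necessarily the paper's exact analysis, and the slant values and variances are invented:

```python
# Minimal sketch of reliability-weighted cue combination: each cue is
# weighted by its reliability (1 / variance). Slants and variances are
# invented for illustration, not taken from the paper.

def combine(visual_slant, haptic_slant, visual_var, haptic_var):
    """Combine two slant estimates, weighting each by its reliability."""
    r_v, r_h = 1.0 / visual_var, 1.0 / haptic_var
    return (r_v * visual_slant + r_h * haptic_slant) / (r_v + r_h)

# Vision three times as reliable as touch (the 3:1 condition):
print(combine(visual_slant=0.0, haptic_slant=6.0, visual_var=1.0, haptic_var=3.0))  # 1.5, close to vision
# Touch three times as reliable as vision (the 1:3 condition):
print(combine(visual_slant=0.0, haptic_slant=6.0, visual_var=3.0, haptic_var=1.0))  # 4.5, close to touch
```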

Following this they tested their participants in the calibration study, which involved changing the discrepancy between the visual and haptic slants over a series of trials, using either high (3:1) or low (1:3) visual reliability. You can see the results in the figure below (Figure 4A in the paper):


Reliability-based vs. visual-dominance hypothesis

The magenta circles show the adaptation in the 3:1 case, while the purple squares show adaptation in the 1:3 case. The magenta and purple dotted lines show the prediction of the reliability-based adaptation hypothesis (i.e. that the least reliable estimator will adapt), while the black dotted line shows the prediction of the visual dominance hypothesis (i.e. that vision will never adapt). It's a nice demonstration: the data follow the reliability-based prediction closely, while the visual dominance hypothesis simply isn't supported.
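
Here's a small sketch of what the two hypotheses predict about who does the adapting. This is my paraphrase of the logic, one natural way to cash it out, with illustrative numbers rather than anything from the paper:

```python
# Two competing predictions for resolving a visual-haptic discrepancy:
# visual dominance (vision never shifts) versus reliability-based
# adaptation (the less reliable cue shifts more, here in proportion to
# its share of the total variance). Numbers are illustrative.

def split_adaptation(discrepancy, visual_var, haptic_var, visual_dominance=False):
    """Return (visual shift, haptic shift) needed to close the discrepancy."""
    if visual_dominance:
        # Vision never budges: touch absorbs the whole conflict.
        return 0.0, discrepancy
    # Reliability-based: the noisier cue does proportionally more of the adapting.
    total = visual_var + haptic_var
    return discrepancy * visual_var / total, discrepancy * haptic_var / total

# High visual reliability (3:1): touch should do most of the adapting.
print(split_adaptation(6.0, visual_var=1.0, haptic_var=3.0))   # (1.5, 4.5)
# Low visual reliability (1:3): vision should do most of the adapting.
print(split_adaptation(6.0, visual_var=3.0, haptic_var=1.0))   # (4.5, 1.5)
```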

For me, it's actually not too surprising to read this result. There have been several papers showing reliability-based adaptation in vision and in other modalities, but the authors do a good job of explaining why their paper is different: partly because purely sensory responses are used to avoid contamination by motor adaptation, and partly because this is the first time that reliabilities have been explicitly measured and used to investigate sensory recalibration.

One thing I wonder about though is the variability in the graph above. For the 3:1 ratio (high visual reliability) the variability of responses is much lower than for the 1:3 ratio (low visual reliability). Since the entire point of the combination experiment was to determine the relative reliabilities of the different modalities for the calibration experiment, I would have expected the variability to be the same in both cases. As it stands, it looks a bit like vision is inherently more reliable than touch, even when the differences in reliability are supposedly taken into account. Maybe I'm wrong about this though, in which case I'd appreciate someone putting me right!

The authors also model the recalibration process but I'm not going to go into that in detail; suffice it to say that they found the reliability-based prediction is very good indeed as long as the estimators don't drift too much relative to the measurement noise (i.e. the reliability of the estimator). If the drift is very large, the prediction tends to follow the drift instead of the reliability. I think a nice empirical follow-up would be a similar study that takes drift into account - proprioceptive drift is a well-known phenomenon that occurs, for example, when you don't move your hand for a while and your perception of its location gradually 'drifts' over time.

Anyway, generally speaking this is a cool paper and I quite enjoyed reading it. That's my first of three posts this week - I'll have another one up in a day or two. I know this one was a bit long, and I'll try to make subsequent posts a bit shorter! Questions, comments etc. are very welcome, especially on topics like readability. I want this blog to look at the science in depth but also to be fairly accessible to the interested lay audience. That way I can improve my writing and communication skills while also keeping up with the literature. Win-win.

---

Burge, J., Girshick, A., & Banks, M. (2010). Visual-Haptic Adaptation Is Determined by Relative Reliability. Journal of Neuroscience, 30(22), 7714-7721. DOI: 10.1523/JNEUROSCI.6427-09.2010

Image copyright © 2010 by the Society for Neuroscience