They ran a couple of experiments that differed slightly in design but showed essentially the same result, so I’ll just talk about the first one here. Participants made point-to-point reaches while holding a robotic device, across three phases (null, force field and aftereffect), separated by perceptual tests designed to assess where they felt their arm to be. The figure below (Figure 1A in the paper) shows the protocol and the reaching error results:
Motor learning across trials
In the null phase, as usual, participants reached without being exposed to a perturbation. In the force field phase, the robot pushed their arm to the right or to the left (blue or red dots respectively), and you can see from the graph that they made highly curved movements to begin with and then learnt to correct them. In the aftereffect phase, the force was removed, but you can still see the motor aftereffects from the graph. So motor learning definitely took place.
But what about the perceptual tests? It turns out that participants’ estimate of their arm’s position changed after they learnt the motor task. In the figure below (Figure 2B and 2C in the paper), the left graph shows that after the force field (FF) trials, hand perception shifted in the opposite direction to the force direction. [EDIT: actually it's in the same direction; see the comments section!] This effect persisted even after the aftereffects (AE) block.
Perceptual shifts as learning occurs
What I think is even more interesting is the graph on the right. It shows not only the right and left (blue and red) hand perceptions, but also the hand perception after 24 hours (yellow) and, crucially, the hand perception when participants didn’t make the movements themselves but allowed the robot to move them (grey). As you can see, in that passive condition there’s no perceptual shift. The shift only appears when participants make active movements through the force field, which suggests that the change in sensory perception is closely linked to learning a motor task.
In some ways this isn’t too surprising, to me at least. In some of my work with Adrian Haith (happily cited by the authors!), we developed and tested a model of motor learning that requires changes to both sensory and motor systems, and showed that force field learning causes perceptual shifts in locating both visual and proprioceptive targets; you can read it free online here. The work in this paper seems to shore up our thesis that the motor system takes into account both motor and sensory errors during learning.
Some of the work I’m dabbling with at the moment involves neuronal network models of motor learning and optimization. This kind of paper, showing the need for changes in sensory perception during motor learning, throws a bit of a spanner into the works of some of that. As they stand, the models tend to treat sensory input as static and merely change motor output as learning progresses. Perhaps we need to think a bit more carefully about that.
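To make the distinction concrete, here’s a minimal toy sketch of the idea (my own illustration, not the authors’ model or mine with Adrian): a trial-by-trial learner in which the reach error can either be absorbed entirely by a motor correction, or shared between a motor correction and a shift in the sensed hand position. The variable names and learning rates are made up purely for illustration.

```python
def simulate(trials=100, force=1.0, lr_motor=0.2, lr_sensory=0.05,
             adapt_sensory=True):
    """Toy trial-by-trial adaptation (illustrative only).

    The lateral reach error on each trial is the sum of the external
    force, the learned motor correction, and any shift in the sensed
    hand position. Both components update in proportion to the error.
    """
    motor, shift = 0.0, 0.0
    errors = []
    for _ in range(trials):
        error = force + motor + shift    # lateral deviation this trial
        motor -= lr_motor * error        # motor command corrects
        if adapt_sensory:
            shift -= lr_sensory * error  # sensed hand position recalibrates
        errors.append(error)
    return errors, motor, shift

# With sensory adaptation on, the compensation for the force is split
# between the motor and sensory systems in proportion to their learning
# rates, leaving a residual perceptual shift even after the error is gone.
errs, motor, shift = simulate()
```

In this toy version, turning `adapt_sensory` off still cancels the error (pure motor learning), but only the adaptive-sensory version ends up with a nonzero perceptual shift, which is the qualitative pattern the paper reports.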
Ostry DJ, Darainy M, Mattar AA, Wong J, & Gribble PL (2010). Somatosensory plasticity and motor learning. The Journal of Neuroscience, 30(15), 5384-5393. PMID: 20392960
Images copyright © 2010 Ostry, Darainy, Mattar, Wong & Gribble