Wednesday 23 June 2010

The cost of uncertainty

Back from my girlfriend-induced hiatus and onto a really interesting paper published ahead of print in the Journal of Neurophysiology. This work asks some questions, and postulates some answers, very similar to the line of thinking I’ve been going down recently – which is, of course, the main reason I find it interesting! (The other reason is that they used parabolic flights. Very cool.)

One theory of how the brain performs complex movements in a dynamical environment – like, say, lifting objects – is known as optimal feedback control (OFC). The basic idea is that the brain makes movements that are optimized to the task constraints. For example, to lift an object, the control system might want to minimize the amount of energy used* and at the same time lift the object to a particular position. In OFC we combine these constraints into something called a cost function: how much the action ‘costs’ the system to perform. To optimize the movement, the system simply works to reduce the total cost.
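To make the cost-function idea concrete, here is a minimal sketch (my own, not taken from the paper) of the kind of quadratic cost OFC models typically use: an accuracy term penalizing ending up away from the target, plus an effort term penalizing large motor commands. The function name and weights are illustrative assumptions.

```python
import numpy as np

def movement_cost(positions, commands, target, w_accuracy=1.0, w_effort=0.01):
    """Toy quadratic cost of the kind used in optimal feedback control:
    penalize (1) ending up away from the target and (2) the size of the
    motor commands along the way. Names and weights are illustrative only."""
    accuracy_cost = w_accuracy * np.sum((np.asarray(positions)[-1] - target) ** 2)
    effort_cost = w_effort * np.sum(np.asarray(commands) ** 2)
    return accuracy_cost + effort_cost

# An optimal controller picks the command sequence that minimizes this total cost.
```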

But where does the system get information about the limb and the task in the first place, so that it can optimize its control? There are two sources of knowledge about limb dynamics. The most obvious is reactive: feedback from the senses, from both vision and proprioception (the sense of where the arm is in space). But feedback takes a while to travel to the brain, so another source is needed: a predictive source of knowledge, an internal model of the task and limb dynamics. The predictive and reactive components can be combined in an optimal fashion to form an estimate of the state of the limb (i.e. where it is and how fast it’s going). This ‘state estimate’ can then be used to calculate the overall cost of the movement.
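As a rough sketch of what combining the two sources ‘in an optimal fashion’ means, here is a toy one-dimensional example (mine, not the paper's model): each source is weighted by its reliability, i.e. its inverse variance. The same arithmetic also previews the point the paper makes later: inflate the prediction's variance and the weight on sensory feedback grows.

```python
def fuse_estimates(predicted, pred_var, sensed, sens_var):
    """Minimum-variance (Kalman-style) fusion of a predicted limb state and a
    sensory measurement of it; the less reliable source gets the smaller weight."""
    k = pred_var / (pred_var + sens_var)        # weight given to sensory feedback
    estimate = predicted + k * (sensed - predicted)
    estimate_var = (1.0 - k) * pred_var
    return estimate, estimate_var

# A confident prediction dominates noisy feedback...
print(fuse_estimates(0.50, 0.1, 0.65, 1.0))     # stays close to the predicted 0.50
# ...but make the prediction uncertain and feedback takes over.
print(fuse_estimates(0.50, 5.0, 0.65, 1.0))     # pulled most of the way to 0.65
```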

In today’s paper the authors argue that at the start of a new task, a new internal model has to be learnt, or an old one modified, to deal with the new task demands. So far, so uncontroversial. What’s new here is the claim that the cost function being optimized actually changes when dealing with a new task, because there is higher uncertainty in the internal prediction and the system is temporarily more reliant on feedback. They have some nice data and models to back up their conclusion.

The task was simple: participants had to grip a block and move it up or down from a central position while their position and grip force were recorded. After they’d learnt the task at normal gravity, they had to perform it in microgravity during a parabolic flight, which essentially made both their arm and the object weightless. Their grip force increased markedly even though the object was now weightless, and kinematic measures (e.g. position, velocity) changed too; movements took more time, and peak acceleration was lower. Over the course of several trials the grip force decreased again as participants learnt the task. You can see some representative kinematic data in the figure below (Figure 4 in the paper):

[Figure 4: Kinematic data from a single participant]

Panels A-D show the average movement trace of one participant in normal (1 g) and microgravity (0 g) conditions, while panels E and F show the changes in acceleration and movement time respectively. The authors argue that the grip force changes in the first few trials point towards uncertainty in the internal prediction, which results in the altered kinematics.

To test this idea, they ran a simulation based on a single-joint model of the limb using OFC and the optimal combination of information from the predictive system and sensory feedback. What they varied in this model was the noise, and thus the reliability, in the predictive system. The idea was that as the prediction became less reliable, the kinematics should change to reflect more dependence on the sensory feedback. But that's not quite what happened, as you can see from the figure below (Figure 8 in the paper):

[Figure 8: Data and simulation results]

Here the graphs show various kinematic parameters. In black and grey are the mean data points from all the participants for the upward and downward movements. The red squares show the parameters the simulation came up with when noise was injected into the prediction. As you can see, they're pretty far off! So what was the problem? Well, it seems that you need to change not only the uncertainty of the prediction but also the cost function that is being optimized. The blue diamonds show what happens when you manipulate the cost function (by increasing the parameter shown as alpha); suddenly the kinematics are much closer to the way people actually perform.
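I don't know the paper's exact cost function, so the following is a purely hypothetical sketch of what ‘increasing alpha’ could look like in the toy cost from earlier: alpha scales an extra penalty (here, on arriving at the target quickly), and cranking it up makes slower, more cautious movements the cheaper option, which is the qualitative shift seen in the 0 g kinematics.

```python
import numpy as np

def reweighted_cost(positions, velocities, commands, target,
                    alpha=1.0, w_effort=0.01):
    """Hypothetical illustration only (not the paper's actual cost function):
    alpha scales an extra 'caution' term, taken here to be the velocity near
    the end of the movement. A larger alpha favours slower, more conservative
    movements: lower peak acceleration and longer movement times."""
    accuracy = np.sum((np.asarray(positions)[-1] - target) ** 2)
    effort = w_effort * np.sum(np.asarray(commands) ** 2)
    caution = alpha * np.sum(np.asarray(velocities)[-5:] ** 2)
    return accuracy + effort + caution
```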

Thus, the conclusion is that when you have uncertainty in your predictive system, you actually change your cost function while you're learning a new internal model. I find this really interesting because it's a good piece of evidence that uncertainty in the predictive system feeds into the selection of a new cost function for a movement, rather than the motor system just sticking with the old cost function and continuing to bash away.

It's a nice paper, but I do wonder: why did the authors go to all the trouble of using parabolic flights to get the data here? If what they're saying is true and any uncertainty in the internal model/predictive system is enough to make you change your cost function, this experiment could have been done much more simply – and for much longer than the 30 trials they were able to do under microgravity – by just using a robotic system. Perhaps they didn't have access to one, but even so it seems a bit of overkill to spend money on parabolic flights, which are so limited in duration.

Overall though it's a really fun paper with some interesting and thought-provoking conclusions.

*To be precise there is some evidence that it's not the amount of energy used that gets minimized, but the size of the motor command itself (because a bigger command has more variability due to something called signal-dependent noise... I'm not going to go into that though!).
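(For the curious, here is a toy illustration of signal-dependent noise; this is my own sketch rather than anything from the paper. The noise on an executed command grows in proportion to the command's size, so bigger commands give more variable outcomes, which is why minimizing the size of the command can stand in for minimizing variability.)

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_command(u, noise_scale=0.2, n=10_000):
    """Signal-dependent noise: the standard deviation of the executed command
    is proportional to its magnitude (0.2 is an arbitrary illustrative constant)."""
    return u + rng.normal(0.0, noise_scale * abs(u), size=n)

for u in (1.0, 5.0, 10.0):
    print(f"command {u:4.1f} -> outcome s.d. {noisy_command(u).std():.2f}")
# Larger commands come with proportionally larger variability, so a controller
# that cares about precision has a reason to keep its commands small.
```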

---

Crevecoeur, F., McIntyre, J., Thonnard, J., & Lefevre, P. (2010). Movement Stability under Uncertain Internal Models of Dynamics. Journal of Neurophysiology. DOI: 10.1152/jn.00315.2010

Images copyright © 2010 The American Physiological Society

2 comments:

  1. I wondered why they'd gone with parabolic flights too: does that mean I'd be a good scientist?

  2. Critical evaluation of people's work is an important step, yes! It's one I'm trying to do more of, as I often feel I give people a free pass, assuming they know what they're doing.

    Of course, the whole point of science is to occasionally say to people: "Why are you doing that? That's a bad way to do it! Do it THIS way!" and then argue with them for a while. Hopefully that produces more accurate results...
