
Wednesday, 21 July 2010

Lazy beats sloppy

Today I give in to my inner lazy person (who is, in fact, quite similar to my outer lazy person) and talk about a paper after I’ve just been to a journal club, rather than before. The advantages are that I was reading the paper anyway and I’ve just had an hour of discussion about it so I don’t actually have to think of things to say about it myself. The disadvantages are that, um, it’s lazy? And that’s bad? Perhaps. But I still think it’s better, as we shall see, than sloppy.

The premise of the paper harks back to my earlier post on visual dominance and multisensory integration. It’s been well known in the literature for a while that if you flash a couple of lights while simultaneously playing auditory beeps, an interesting little illusion occurs. If participants are asked to count the number of flashes, and the number of flashes matches the number of beeps, they almost always get the answer right. But if there are two flashes and one beep, or one flash and two beeps, they’re much more likely to report one flash or two flashes respectively. The figure below (Figure 1 in the paper) illustrates this:

Illusion when the hand is at rest

In the figure, you can see that the bars for one beep and one flash (far left black bar) and two beeps and two flashes (far right white bar) sit at heights 1 and 2 respectively, which illustrates the number of perceived flashes: just what you’d expect, one for one flash and two for two flashes. However, the middle bars, which show the one beep/two flash and two beep/one flash conditions, sit at intermediate heights, showing the presence of the illusion. This figure actually demonstrates the first problem with the paper, which is that the figures are pretty difficult to interpret. I know I wasn’t alone in the lab in finding them confusing.
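Those intermediate bar heights are roughly what you would expect if the brain simply averaged the two counts, weighting each by its reliability. Here is a minimal sketch of that idea in Python; the noise values are illustrative assumptions on my part (not taken from the paper), with audition assumed to be the more reliable sense for counting brief events:

```python
def fused_count(n_flashes, n_beeps, sigma_v=0.6, sigma_a=0.3):
    """Reliability-weighted average of visual and auditory event counts.

    The sigma values are made up for illustration; audition is assumed
    to be the more reliable modality for counting brief events.
    """
    w_v = 1 / sigma_v**2  # visual reliability (inverse variance)
    w_a = 1 / sigma_a**2  # auditory reliability
    return (w_v * n_flashes + w_a * n_beeps) / (w_v + w_a)

# Matched counts come out veridical; mismatched counts are pulled
# towards the number of beeps, as in the middle bars of the figure.
print(fused_count(1, 1))  # 1.0
print(fused_count(2, 1))  # ≈ 1.2: fewer than two flashes perceived
print(fused_count(1, 2))  # ≈ 1.8: more than one flash perceived
```

With these made-up numbers, two flashes plus one beep come out at about 1.2 perceived flashes, pulled towards the beep count.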

What the authors were interested in is whether a goal-directed movement could alter visual processing, and they used the illusion to probe this. Participants had to make point-to-point reaches from a start point to a target. During the reach their susceptibility to the illusion was tested at the target point – but the test began a variable time away from the start of the movement, between 0 and 250 ms. That is: sometimes the flashes and beeps occurred at the start of the movement when the arm was moving slowly, sometimes when it was half way through and thus moving faster, and sometimes at the end when it was moving slowly again.

The experimenters found that, when there were two flashes and one beep, participants were less likely to see an illusion during the middle part of their movement than during the beginning and end. That is, they were more likely to get it right when they were moving faster. The trouble starts when you look a bit closer at the effect they’ve got – it’s pretty weak. There seems to be a lot of noise in the data, and the impression that they’re grasping at straws a little isn’t helped by the aforementioned sloppy figures.

Having said that, the stats do hold up. What might be the explanation for this kind of effect? The multisensory integration argument is that the sensory modality (e.g. vision) with the least noise should be the one that is prioritized. So when the arm is moving quickly, there’s more noise in the motor system compared with the visual system and thus you’re better at determining how many flashes there are. I’m not sure I buy this; the illusion is about the visual and auditory systems, after all. I’m not sure I get why you’d be better at resisting the illusion when you’re moving than when you’re not, for example. The authors claim that the limb movement “requires extensive use of visual information” but again I’m not so sure. When we reach for objects we generally take note of where our arm is, look at the object and then move the arm to the object without looking at the arm again.

So, a weak effect that isn’t well explained. That wouldn’t be so bad, but the clarity of the paper is also lacking. There’s also the question of why, if they had such a weak effect, they didn’t do another experiment or two to tease out what was really going on. I do think the slightly larger problem here is the review process at PLoS. It’s open access so anyone can read it free online, which I am very much in favour of, but it’s biased towards only reviewing the methods and results of a paper rather than the introduction/discussion. I go back and forth over whether this is a good thing. Some journals reject papers based on novelty (a.k.a. coolness) whereas it appears that PLoS strives to accept well-performed science regardless of how ‘interesting’ (and I use the term in quotes advisedly) the result is.

In this case I think that, while the science is good, it would be a much better paper if it went a bit more into depth with a couple of extra experiments exploring these effects more carefully – and if it had figures that were perhaps a bit easier to comprehend.

--

Tremblay, L., & Nguyen, T. (2010). Real-time decreased sensitivity to an audio-visual illusion during goal-directed reaching. PLoS ONE, 5(1). PMID: 20126451

Image copyright © 2010 Tremblay & Nguyen

Monday, 19 July 2010

Far out is not as far out as you think

Proprioception is the sense of where your body is in space. It is one of several sources of sensory information the brain uses to figure out where your limbs and the rest of you are, along with vision and the semicircular canals of the vestibular system in the inner ear (though these are more important for balance). Proprioceptive information comes from receptors that sense the lengths of muscles, the positions of joints and the stretch of the skin.

How, if at all, does the accuracy and precision of this information vary across different tasks and limb configurations? To test this, the authors of today’s study got their participants to perform three experimental tasks that involved matching perceived limb position without being able to see their arm. In the first task, participants used a joystick to rotate a virtual line on a screen positioned over their limb until they decided that it was in the same direction as their forearm. In the second task, they used a joystick to move a dot around until they decided that it was over their index finger. In the third task, they again saw a virtual line on the screen, but this time they had to actively move their forearm until they decided it was in line with the displayed line.

The results were kind of interesting: in all three cases, participants tended to overestimate the position of their limbs when they were at extremes; i.e. when they were more flexed they assumed they were even more flexed, and when they were more extended they assumed they were even more extended. This is quite confusing to explain, but the figure below (Figure 4A in the paper) should help:

Estimates of arm position from one participant

The black lines are the actual position of the arm of a representative participant in task 1, with flexion on the left and extension on the right. Blue lines are the participant’s estimates of arm position, and the red line is the average of the estimates. You can see that when the arm is flexed the participant guesses that it’s more flexed than it actually is, with the corresponding result for when the arm is extended. The researchers found no differences in accuracy between the three tasks, but they did find differences in precision – participants were much more precise, i.e. the spread of their responses was lower, in the passive fingertip task and the active elbow movement task (tasks 2 and 3).

So what? Well, these results give us an insight into how proprioception works. The authors argue that the bias towards thinking you’re more flexed/extended than you really are comes from the overactivity of joint and skin receptors as the limb reaches its extreme positions. Why might these receptors become overactive at extreme positions? Possibly because it allows us to sense ahead of time when we’re getting to a point of movement that is mechanically impossible for the limb to perform, either because we’re trying to flex it too much or we’re trying to straighten it too much. Push too hard at either extreme – muscles are quite strong – and you could damage the limb. Better for the system to make you stop pushing earlier by giving you a signal that you’re further along than you thought. I think it’s a nice hypothesis.
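One way to picture this hypothesis is as an expansion of perceived joint angle around the mid-range, so that positions near either extreme feel further along than they really are. A minimal sketch, with a made-up midpoint and gain (the real bias needn't be linear):

```python
def perceived_elbow_angle(actual_deg, mid_deg=90.0, gain=1.15):
    # Expansion of perceived position around the mid-range. The midpoint,
    # the linear form and the gain value are illustrative assumptions,
    # not values from the paper. Smaller angles = more flexed here.
    return mid_deg + gain * (actual_deg - mid_deg)

# A flexed arm feels more flexed, an extended arm more extended:
print(perceived_elbow_angle(50.0))   # ≈ 44 degrees (more flexed)
print(perceived_elbow_angle(130.0))  # ≈ 136 degrees (more extended)
```

With this toy gain the bias vanishes at mid-range and grows towards both extremes, reproducing the pattern in Figure 4A.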

I quite like this study, as it’s another one of those not-wildly-exciting-but-useful-to-know kinds of papers. While the wildly exciting stuff is great, I think that too often the worthy, low-key stuff like this is unfairly overshadowed. Science is about huge leaps and paradigm shifts much less than it’s about the slow grind of data making possible incremental progress on various questions. And I’m not just saying that because that’s what all my papers are like!

---

Fuentes, C., & Bastian, A. (2009). Where Is Your Arm? Variations in Proprioception Across Space and Tasks. Journal of Neurophysiology, 103(1), 164-171. DOI: 10.1152/jn.00494.2009

Image copyright © 2010 The American Physiological Society

Thursday, 8 July 2010

Motor learning changes where you think you are

I’ve covered both sensory and motor learning topics on this blog so far, and here’s one that very much mashes the two together. In earlier posts I have written about how we form a percept of the world around us, and about our sense of ownership of our limbs. In today’s paper the authors investigate the effect of learning a motor task on sensory perception itself.

They performed a couple of experiments, in slightly different ways, which essentially showed the same result – so I’ll just talk about the first one here. Participants made point-to-point reaches while holding a robotic device, in three phases (null, force field and aftereffect) separated by perceptual tests designed to assess where they felt their arm to be. The figure below (Figure 1A in the paper) shows the protocol and the reaching error results:

Motor learning across trials

In the null phase, as usual, participants reached without being exposed to a perturbation. In the force field phase, the robot pushed their arm to the right or to the left (blue or red dots respectively), and you can see from the graph that they made highly curved movements to begin with and then learnt to correct them. In the aftereffect phase, the force was removed, but you can still see the motor aftereffects from the graph. So motor learning definitely took place.

But what about the perceptual tests? It turns out that participants’ estimation of where their arm was changed after learning the motor task. In the figure below (Figure 2B and 2C in the paper) you can see in the left graph that after the force field (FF) trials, hand perception shifted in the opposite direction to the force direction. [EDIT: actually it's in the same direction; see the comments section!] This effect persisted even after the aftereffects (AE) block.


Perceptual shifts as learning occurs

What I think is even more interesting is the graph on the right. It shows not only the right and left (blue and red) hand perceptions, but also the hand perception after 24 hours (yellow) – and, crucially, the hand perception when participants didn’t make the movements themselves but allowed the robot to move them (grey). As you can see, there’s no perceptual shift. It only appears to happen when participants make active movements through the force field, which means that the change in sensory perception is closely linked to learning a motor task.

In some ways this isn’t too surprising, to me at least. In some of my work with Adrian Haith (happily cited by the authors!), we developed and tested a model of motor learning that requires changes to both sensory and motor systems, and showed that force field learning causes perceptual shifts in locating both visual and proprioceptive targets; you can read it free online here. The work in this paper seems to shore up our thesis that the motor system takes into account both motor and sensory errors during learning.

Some of the work I’m dabbling with at the moment involves neuronal network models of motor learning and optimization. This kind of paper, showing the need for changes in sensory perception during motor learning, throws a bit of a spanner into the works of some of that. As they stand, the models tend to assume that sensory input is static and merely change motor output as learning progresses. Perhaps we need to think a bit more carefully about that.

---

Ostry, D. J., Darainy, M., Mattar, A. A., Wong, J., & Gribble, P. L. (2010). Somatosensory plasticity and motor learning. The Journal of Neuroscience, 30(15), 5384-5393. PMID: 20392960

Images copyright © 2010 Ostry, Darainy, Mattar, Wong & Gribble

Friday, 25 June 2010

You're only allowed one left hand

In previous posts I’ve asked how we know where our hands are and how we combine information from our senses. Today’s paper covers both of these topics, and investigates the deeper question of how we incorporate this information into our representation of the body.

Body representation essentially splits into two parts: body image and body schema. Body image is how we think about our body, how we see ourselves; disorders in body image can lead to anorexia or myriad other problems. Body schema, on the other hand, is how our brain keeps track of the body, below the conscious level, so that when we reach for a glass of water we know where we are and how far to go. There’s some fascinating work on body ownership and embodiment but you can read about that in the paper, as it’s open access!

The study is based on a manipulation of the rubber hand illusion, a very cool perceptual trick that’s simple to perform. First, find a rubber hand (newspaper inside a rubber glove works well). Second, get a toothbrush, paintbrush, or anything else that can be used to produce a stroking sensation. Third, sit your experimental participant down and stroke a finger on the rubber hand while simultaneously stroking the equivalent finger on the participant’s actual hand (make sure they can’t see their real hand!). These strokes MUST be synchronous, i.e. applied with the same rhythm. The result, after a little while, is that the participant starts to feel like the rubber hand is actually their hand! It’s a really fun effect.

There are of course limitations to the rubber hand illusion – a fake static hand isn’t the best tool for eliciting illusions of body representation, as it’s obviously fake no matter how strongly you feel the hand is yours. Plus it’s hard to do movement studies with static hands. The researchers got around this problem by using a camera/projection system to record an image of each participant’s hand and play it back in real time. They had their participants actively stroke a toothbrush rather than having the stroking passively applied to them, and then showed two images of the hand to the left and right of its actual (unseen) position.

In each condition the left image, the right image or both were shown stroking in synchrony with the participant’s movements; in the first two conditions the other image was shown stroking asynchronously, by delaying the feedback from the camera. The researchers asked through questionnaires whether participants felt they ‘owned’ each hand. You can see these results in the figure below (Figure 3B in the paper):

Ownership rating by hand stroke condition

For the left-stroke (LS) and right-stroke (RS) conditions, only the left or right image respectively was felt to be ‘owned’ whereas in the both-stroke (BS) condition, both hands were felt to be ‘owned’. This result isn’t too surprising; it’s a nice strong replication of the rubber hand results other researchers have found. Where it gets interesting is that when participants were asked to make reaches to a target in front of them they tended to reach in the right-stroke and left-stroke conditions as if the image of the hand they felt they ‘owned’ was actually theirs. That is, they made pointing errors consistent with what you would see if their real hand had been in the location of the image.

In a final test, participants in the both-stroke condition were asked to reach to a target in the presence of distractors to its left and right. Usually people will attempt to avoid distractors, even when it’s just an image or a dot that they are moving around a screen, and the distractors are just lights. However in this case participants had no qualms about moving one of the images through the distractors to reach the target with the other, even though they claimed ‘ownership’ of both.

This last point leads to an interesting idea the authors explore in the discussion section. While it seems to be possible to incorporate two hands simultaneously into the body image, this doesn’t appear to translate to the body schema. So you might be able to imagine yourself with extra limbs, but when it comes to actively moving them the motor system seems to pick one and go with it, ignoring the other (even when it hits an obstacle).

To my mind this is probably a consequence of the brain learning over many years how many limbs it has and how to move them efficiently, so that any extra limbs it appears to have in the moment can be effectively discounted. It is interesting, however, to see how quickly the schema can adapt to apparent changes in a single limb, as shown by the pointing errors in the RS and LS movement tasks.

I wonder if we were born with more limbs, would we learn gradually how to control them all over time? After all, octopuses manage it. Would we still see a hand dominance effect? (I’m not sure if octopuses show arm dominance!) And would we, when a limb was lost in an accident, still experience the ‘phantoms’ that amputees report? I haven’t touched on phantoms this post, but I’m sure I’ll return to them at some point.

Altogether a simple but interesting piece of work, which raises lots of interesting questions, like good science should. (Disclaimer: I know the first and third authors of this study from my time in Nottingham. That wouldn't stop me saying their work was rubbish if it was though!)

---

Newport, R., Pearce, R., & Preston, C. (2009). Fake hands in action: embodiment and control of supernumerary limbs. Experimental Brain Research. DOI: 10.1007/s00221-009-2104-y

Image copyright © 2009 Newport, Pearce & Preston

Wednesday, 16 June 2010

Where you look affects your judgement

Our ability to successfully interact with the environment is key to our survival. Much of my work involves figuring out how the brain sends the correct commands to the upper limb, allowing us to control it and reach for objects around us. Considering how complex the musculature of the arm is, and how ever-changing the world around us is, this is a non-trivial task. One fundamental question that needs to be solved by the brain’s control system is: how do you know where something is relative to your hand?

It’s no good sending a complex set of commands to reach for an object if you don’t know how to relate where your hand is right now to where the object is. There are several theories as to how the brain might perform this task. In one theory, the object’s location on the retina is translated into body-centred coordinates (i.e. where it is relative to the centre of the body) by adding the eye position and the head position sequentially. In another, the object is stored in a gaze-centred reference frame that has to be recalculated after every eye movement.

There’s already some evidence for the second account – we tend to overestimate how far into our peripheral vision a target sits, and so we make pointing errors when asked to reach to where we thought it was. So it seems we dynamically update our estimate of where a target is when we make active movements towards it. In this paper the researchers were interested in whether the same holds for perceptual estimates. That is, when you are simply asked to state the position of a remembered target, does that judgement also depend on gaze shifts?

To answer this question, the authors performed an experiment with two different kinds of targets: visual and proprioceptive. (If you’ve been paying attention, you’ll know that proprioception is the sense of where your body is in space.) The visual target was just an LED set out in front of the participant; the proprioceptive target was the participant’s own unseen hand moved through space by a robot. Before the target appeared, participants were asked to look at an LED either straight in front of them, or 15˚ to the left or right. The targets would then appear (or the hand would be moved to the target location), disappear (or the hand would be moved back), and then the participant’s hand would be moved out again to a comparison location. They then had to judge whether their current hand location was to the left or right of the remembered target.

Here’s where it gets interesting. Participants were placed into one of two conditions: static or dynamic. In the static condition, participants kept their gaze fixed on an LED to the left, to the right or straight ahead of their body midline. In the dynamic condition, they gazed straight ahead and were asked to move their eyes to the left or right LED after the target had disappeared. In a gaze-dependent system, this should introduce errors as the target location relative to the hand would be updated relative to gaze after the eye movement. In a gaze-independent system, no errors should be evident as the target position was already calculated before the eye movement.
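The two predictions can be sketched in a few lines of Python. Everything here is a simplification I’ve made up for illustration: positions are angles along one axis, and peripheral eccentricity is overestimated by a fixed gain.

```python
def gaze_dependent_judgement(target, gaze, gain=1.1):
    # Target re-coded relative to the current gaze direction after every
    # eye movement, with peripheral eccentricity overestimated
    # (gain > 1; the exact value here is an illustrative assumption).
    return gaze + gain * (target - gaze)

def gaze_independent_judgement(target, gaze):
    # Target converted to body-centred coordinates once, at encoding,
    # so later gaze shifts leave the stored estimate untouched.
    return target

# After a 15-degree rightward gaze shift away from a central target,
# only the gaze-dependent account predicts a leftward (negative) error:
print(gaze_dependent_judgement(0.0, 15.0))    # ≈ -1.5 (error to the left)
print(gaze_independent_judgement(0.0, 15.0))  # 0.0 (no error)
```

The sign of that gaze-dependent error, opposite to the gaze direction, is exactly what the static and dynamic conditions are designed to detect.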

Bias in judgements of visual and proprioceptive targets

The figure above (Figure 4a and 4b in the paper) shows the basic results. Grey is the right fixation while black is the left fixation; circles show the static condition while crosses show the dynamic condition. You can immediately see that in both conditions, for both targets, participants made estimation errors in the opposite direction to their gaze: errors to the left for right gaze, and errors to the right for left gaze. So it does look like perceptual judgements are coded and updated in a gaze-centred reference frame. To hammer home their point, the next figure (Figure 5 in the paper) shows the similarity between the judgements in the static and dynamic conditions:

Static vs. dynamic bias

As you can see, the individual judgements match up very closely indeed, which gives even more weight to the gaze-centred account.

So what does this mean? Well: it means that whenever you move your eyes, whether you are planning an action or not, your brain’s estimation of where objects are in space relative to your limbs is remapped. The reason that the errors this generates don’t affect your everyday life is that usually when you want to reach for an object you will look directly at it anyway, which eliminates the problems of estimating the position of objects on the periphery of your vision.

I enjoyed reading this paper – and there is much more in there about how the findings relate to other work in the literature – but it was a bit wordy and hard to get through at times. One of the most difficult things about writing, I’ve found, is to try and maintain the balance between being concise and containing enough information so that the result isn’t distorted. Time will tell how I manage that on this blog!

---

Fiehler, K., Rösler, F., & Henriques, D. (2010). Interaction between gaze and visual and proprioceptive position judgements. Experimental Brain Research, 203(3), 485-498. DOI: 10.1007/s00221-010-2251-1

Images copyright © 2010 Springer-Verlag

Tuesday, 8 June 2010

Mood, music and movement

We all know that music can have an effect on our mood (or, to use a mildly annoying linguistic contrivance, can affect our affect). And being in a better mood has been consistently shown to improve our performance on cognitive tasks, like verbal reasoning; the influence of serene music on such tasks is also known as the 'Mozart effect'. What's kind of interesting is that this Mozart effect has also been shown to be effective on motor tasks, like complex manual tracking.

In the last post I talked a bit about motor adaptation - recalibrating two sensory sources so that the overall percept matches up with the incoming information. Say you're reaching to a target under distorted vision, like wearing goggles with prisms in them that make it look like you're reaching further to the right than you actually are; this is known as a visual perturbation. When you reach forward, the sense of where you are in space (proprioception) sends signals to the brain telling you your arm's gone forward. However, the visual information you receive tells you you've gone right. Some recalibration is in order, and over the course of many reaches you gradually adapt your movements to match the two percepts up.

There are a couple of stages in motor adaptation. The first stage is very cognitive, when you realise something's wrong and you rapidly change your reaches to reduce the perceived error in your movement. The second stage is much less consciously directed, and involves learning to control your arm with the new signals you are receiving from vision and proprioception. When the prism goggles are removed, you experience what is known as a motor aftereffect: you will now be reaching leftwards, the opposite of what appeared to happen when you were originally given the prisms. Over the course of a few trials this aftereffect will decay as the brain shifts back to the old relationship between vision and proprioception.
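This adapt-then-aftereffect pattern falls out of even the simplest trial-by-trial models of adaptation, in which a single internal state learns from each movement's error but also partially decays. The sketch below is a generic toy model, not the analysis from any particular paper, and the retention and learning rates are made-up values:

```python
def simulate_adaptation(n_baseline=10, n_perturbed=60, n_washout=20,
                        retention=0.95, learning=0.2, perturbation=1.0):
    """Single-state trial-by-trial model of motor adaptation.

    x is the learned compensation; the reach error on each trial is the
    perturbation minus the compensation. All parameter values here are
    illustrative assumptions.
    """
    errors, x = [], 0.0
    for trial in range(n_baseline + n_perturbed + n_washout):
        p = perturbation if n_baseline <= trial < n_baseline + n_perturbed else 0.0
        error = p - x
        errors.append(error)
        x = retention * x + learning * error  # partial forgetting + learning
    return errors

errors = simulate_adaptation()
# Errors are flat at baseline, jump when the perturbation comes on,
# decay as adaptation proceeds, flip sign when the perturbation is
# removed (the aftereffect), and then wash out over further trials.
```

The reversed-sign errors after the perturbation is switched off are the aftereffect described above, and they decay back to zero without any explicit instruction, just as in the prism goggle case.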

All this is very interesting (to me at least!) but what does it have to do with music? Well, today's paper by Otmar Bock looks more closely at how the Mozart effect affects motor systems by studying the influence of music on motor adaptation. The theory goes that if an improved mood can improve cognitive performance, then the first phase of motor adaptation should be facilitated. However, since motor aftereffects are not a conscious cognitive strategy but an unconscious motor recalibration, they should not be affected by the change in mood.

To test this idea, Bock split the participants into three groups and played each group either serene, neutral* or sad music at the beginning of and throughout the experiment. Before listening to the music, after listening for a while and at the end of the study, participants indicated their mood by marking a sheet of paper. While listening to the music, they performed a motor adaptation task: they had to move a cursor to an on-screen target while the visual feedback of the cursor was rotated by 60º. They couldn't see their hand while they did this, so their visual and proprioceptive signals gave different information.

As expected, the music participants listened to affected their mood: the 'sad' group reported a lower emotional valence, i.e. more negative emotions, than the 'neutral' group, which in turn reported a lower emotional valence than the 'serene' group. During the task, as generally happens in adaptation tasks where the goal is visual (and of course vision is more reliable!), participants adapted their movements so as to reduce the visual error. The figure below (Figure 2 in the paper) shows this process for the three separate groups, where light grey shows the 'serene' group, mid grey shows the 'neutral' group and dark grey shows the 'sad' group:


Adaptation error by group

The first three episodes in the figure show the reaching error during normal unrotated trials (the baseline phase); from episode 4 onwards the cursor is rotated, sending the error up high (the adaptation phase). The error then decreases for all three groups until episode 29, where the rotation is removed again - and now the error is reversed as participants reach the wrong way (the aftereffect phase). What's cool about this figure is that it shows no difference at all between the 'neutral' and 'sad' groups but an obvious difference for the 'serene' group: adaptation is faster for this group than for the others. Also, when the rotation is removed, the aftereffects show no differences between the three groups.

So it does seem that being in a state of high emotional valence (a good mood) can improve performance on the cognitive stage of motor adaptation - and it seems that 'serene' music can get you there. And interestingly, mood appears to have no effect on the less cognitive aftereffect stage (though see below for my comments on this).

The two main, connected questions I have about these results from a neuroscience point of view are: 1. how does music affect mood? and 2. how does mood affect cognitive performance? A discussion of how music affects the brain is beyond the scope of this post (and my current understanding) but since the brain is a collection of neurons firing together in synchronous patterns it makes sense that this firing can be regulated by coordinated sensory input like music. Perhaps serene music makes the patterns fire more efficiently, and sad music depresses the coordination somewhat. I'm not sure, but if the answer is something like this then I'd like to know more.

There are still a couple of issues with the study though. Here are the data on emotional valence (Figure 1A in the paper):


Emotional valence by group at three different stages

What you can see here is that emotional valence was the same before (baseline) and after (final) the study; the changes in mood are only apparent after listening to the music for a while (initial). Does this mean that, as participants continued with the task, their mood levelled out regardless of the background music, perhaps as they concentrated more on the task? Could this be the reason for the lack of difference in the aftereffect phase? After all, when a perturbation is removed participants quickly notice something has changed, and I would have thought the cognitive processes would swing into gear again, as at the beginning of the adaptation phase.

Also, it's worth noting from the above figure that valence is not actually improved by serene music, but appears to decrease for neutral and sad music. So perhaps it is not that serene music makes us better at adapting, but that neutral/sad music makes us worse? There are more questions than answers in these data I feel.

Hmm. This was meant to be a shorter post than the previous one, but I'm not sure it is! Need to work on being concise, I feel...

*I'm not exactly sure what the neutral sound effect was as there's no link, but Bock states in the paper that it is "movie trailer sound 'coffeeshop' from the digital collection Designer Sound FX®"

---

Bock, O. (2010). Sensorimotor adaptation is influenced by background music. Experimental Brain Research, 203(4), 737-741. DOI: 10.1007/s00221-010-2289-0

Images copyright © 2010 Springer-Verlag

Friday, 4 June 2010

Visual dominance is an unreliable hypothesis

How do we integrate our disparate senses into a coherent view of the world? We obtain information from many different sensory modalities simultaneously - sight, hearing, touch, etc. - and we use these cues to form a percept of the world around us. But what isn't well known yet is exactly how the brain accomplishes this non-trivial task.

For example, what happens if two senses give differing results? How do you adapt and calibrate your senses so that the information you get from one (say, the visual slant of a surface) matches up with the other (the felt slant of the same surface)? In this paper, the investigators set out to answer this question by examining something called the visual dominance hypothesis.

The basic idea is that since we rely so heavily on vision, it takes priority whenever another sense conflicts with it. That is, if you get visual information alongside tactile (touch) information, you will tend to adapt your tactile sense rather than your vision to make the two match up. But here the authors present data and argue in favour of a different hypothesis: reliability-based adaptation, in which the sensory modality with the lowest reliability adapts the most. Thus in low-visibility situations you become more reliant on touch, and vice versa.
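As an aside, the standard statistically optimal account of cue combination makes "reliability" precise: it's the inverse of an estimate's variance, and the combined percept weights each cue by its relative reliability. Here's a minimal sketch of that idea in Python - the slant values and variances are illustrative numbers of my own, not anything from the paper:

```python
# Sketch of reliability-weighted cue combination. Reliability is defined
# as inverse variance; the combined estimate weights each cue by its
# share of the total reliability. All numbers below are illustrative.

def combine(estimates, variances):
    """Reliability-weighted average of independent sensory estimates."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * e for w, e in zip(weights, estimates))
    combined_variance = 1.0 / total  # combined cue beats either cue alone
    return combined, combined_variance

# Example: the visual slant reads 10 deg, the haptic slant reads 4 deg.
# If vision is three times as reliable (variance 1 vs 3, i.e. 3:1), the
# combined percept sits three quarters of the way towards vision.
slant, var = combine([10.0, 4.0], [1.0, 3.0])
```

Note that the combined variance is always smaller than either cue's variance on its own, which is why integrating cues is worth the trouble in the first place.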

Two experiments are described in this paper: a cue-combination experiment and a cue-calibration experiment. The combination experiment measured the reliability of the sensory estimators, i.e. vision and touch. The calibration experiment was designed using the estimates from the combination study to test whether the visual dominance or reliability hypotheses best explained how the sensory system adapts.

In the combination experiment, participants had to reach out and touch a virtual slanted block in front of them, and then say whether they thought it was slanted towards or away from them. They received visual feedback, haptic feedback, or both (i.e. they could see the object, touch it, or both). The cool thing about the setup is that visual reliability could be varied independently of haptic reliability, which let the experimenters find a suitable visual-haptic reliability ratio for each participant for use in the calibration experiment. They settled on parameters that set the visual-to-haptic reliability ratio at 3:1 or 1:3, so that either vision was three times as reliable as touch or the other way round.

Following this they tested their participants in the calibration study, which involved changing the discrepancy between the visual and haptic slants over a series of trials, using either high (3:1) or low (1:3) visual reliability. You can see the results in the figure below (Figure 4A in the paper):


Reliability-based vs. visual-dominance hypothesis

The magenta circles show adaptation in the 3:1 case, while the purple squares show adaptation in the 1:3 case. The magenta and purple dotted lines show the prediction of the reliability-based adaptation hypothesis (i.e. that the least reliable estimator will adapt the most), while the black dotted line shows the prediction of the visual dominance hypothesis (i.e. that vision will never adapt). It's a nice demonstration: the data robustly support reliability-based adaptation, while the visual dominance hypothesis clearly fails to fit.
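To make the two competing predictions concrete: under reliability-based calibration, each modality should absorb a share of the visual-haptic conflict proportional to the *other* modality's reliability (so the less reliable cue shifts more), whereas visual dominance says vision absorbs none of it. A quick sketch, using made-up numbers rather than the paper's parameters:

```python
# Sketch of the two competing predictions for resolving a visual-haptic
# conflict. Under reliability-based adaptation each cue shifts in
# proportion to its relative UNreliability; under visual dominance only
# touch shifts. The conflict size and reliabilities are illustrative.

def predicted_shifts(conflict, rel_v, rel_h):
    """Split a conflict between vision and touch by relative reliability."""
    visual_shift = conflict * rel_h / (rel_v + rel_h)  # reliable vision -> small shift
    haptic_shift = conflict * rel_v / (rel_v + rel_h)
    return visual_shift, haptic_shift

# 3:1 visual reliability: vision absorbs only a quarter of an 8-degree conflict.
v_hi, h_hi = predicted_shifts(8.0, 3.0, 1.0)
# 1:3 visual reliability: vision absorbs three quarters of it.
v_lo, h_lo = predicted_shifts(8.0, 1.0, 3.0)
# Visual dominance, by contrast, predicts visual_shift == 0 in both cases.
```

The crossover between the two conditions is exactly what the figure shows: which cue recalibrates depends on which one is currently less reliable, not on which modality it is.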

For me, this result is actually not too surprising. Several papers have shown reliability-based adaptation in vision and in other modalities, but the authors do a good job of showing why their paper is different: partly because purely sensory responses are used, avoiding contamination from motor adaptation, and partly because this is the first time that reliabilities have been explicitly measured and then used to investigate sensory recalibration.

One thing I wonder about, though, is the variability in the graph above. For the 3:1 ratio (high visual reliability) the variability of responses is much lower than for the 1:3 ratio (low visual reliability). Since the entire point of the combination experiment was to determine the relative reliabilities of the different modalities for the calibration experiment, I would have expected the variability to be the same in both cases. As it is, it looks a bit as though vision is inherently more reliable than touch, even when the differences in reliability are supposedly taken into account. Maybe I'm wrong about this, though, in which case I'd appreciate someone putting me right!

The authors also model the recalibration process, but I'm not going to go into that in detail; suffice it to say that they found the reliability-based prediction holds very well as long as the estimators don't drift too much relative to the measurement noise (i.e. the reliability of the estimator). If the drift is very large, the prediction tends to follow the drift instead of the reliability. I think a nice empirical follow-up would be a similar study that takes drift into account - proprioceptive drift is a well-known phenomenon that occurs, for example, when you don't move your hand for a while and your perception of its location therefore 'drifts' over time.
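The drift point can be illustrated with a toy simulation - this is my own construction, not the model in the paper: an estimator is pulled towards a recalibration target on every trial, but also takes a random step. With small drift the trajectory settles near the target; with large drift the random walk swamps the correction and recalibration becomes hard to detect.

```python
import random

# Toy illustration (my own construction, not the paper's model):
# an estimate is corrected towards a target each trial, but also
# drifts randomly. The gain, drift sizes and target are arbitrary.

def simulate(trials, gain, drift_sd, target=5.0, seed=0):
    rng = random.Random(seed)  # seeded so the run is reproducible
    estimate, path = 0.0, []
    for _ in range(trials):
        estimate += gain * (target - estimate)  # reliability-based correction
        estimate += rng.gauss(0.0, drift_sd)    # proprioceptive drift
        path.append(estimate)
    return path

small_drift = simulate(100, gain=0.1, drift_sd=0.01)  # settles near the target
large_drift = simulate(100, gain=0.1, drift_sd=5.0)   # drift dominates the trace
```

In the small-drift run the correction term dominates and the estimate converges; in the large-drift run the same correction is still applied, but you'd struggle to recover it from the noisy trajectory, which is essentially the regime where the authors say the prediction starts tracking drift rather than reliability.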

Anyway, generally speaking this is a cool paper and I quite enjoyed reading it. That's my first of three posts this week - I'll have another one up in a day or two. I know this one was a bit long, and I'll try to make subsequent posts a bit shorter! Questions, comments etc. are very welcome, especially on topics like readability. I want this blog to look at the science in depth but also to be fairly accessible to the interested lay audience. That way I can improve my writing and communication skills while also keeping up with the literature. Win-win.

---

Burge, J., Girshick, A., & Banks, M. (2010). Visual-Haptic Adaptation Is Determined by Relative Reliability. Journal of Neuroscience, 30(22), 7714-7721. DOI: 10.1523/JNEUROSCI.6427-09.2010

Image copyright © 2010 by the Society for Neuroscience