Perceptual binding

Our sensory systems are constantly inundated with information from the outside world. The human nervous system has evolved to convert that sensory information into neural representations that support our everyday cognitive functions and interactions with the world. One critical step in this process is perceptual binding: the merging of individual bits of sensory information into coherent representations.

For example, when you look at a friend’s face, the information that arrives at and is processed by the early visual areas of your brain includes individual contours, contrasts, and colours. But this is not what you perceive; instead, you simply perceive her face. To arrive at this unified perception, your brain has merged those contrasts and contours into facial features such as the eyes, nose, and mouth, and then combined these features into a single, perceptually bound percept of your friend’s face.

A similar process of perceptual binding occurs during audition. When your friend speaks, your brain automatically binds the individual sounds she utters into coherent, meaningful segments that you perceive as words. This process of perceptual binding also operates across the senses. Continuing the example, when you see and hear your friend talking to you, your brain automatically puts her mouth movements together with the sounds coming out of her mouth, and you simply perceive your friend talking.

While this is an effortless process for you, the underlying neural computations needed to actually perform this task are quite complex. Furthermore, your brain must determine what to perceptually bind and what not to bind. In the middle of a noisy, crowded room, how do your sensory systems know to integrate your friend’s voice with her mouth movements rather than with the mouth movements of the man who is talking next to her?
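
One cue emphasized in this literature is temporal coincidence: signals that arrive close together in time are more likely to be bound, and the width of this tolerance is often described as a temporal binding window (see Stevenson, Zemtsov, & Wallace, 2012, below). The following Python snippet is a minimal, illustrative sketch of such a rule; the 200 ms window width is a placeholder for illustration, not a measured value.

    # Toy temporal binding window (TBW) rule: treat an audiovisual pair as
    # "bound" when the onset asynchrony falls inside the window.
    # The 200 ms half-width is illustrative, not an empirical estimate.
    def is_bound(audio_onset_ms: float, visual_onset_ms: float,
                 window_ms: float = 200.0) -> bool:
        """Return True if the audio and visual onsets are close enough
        in time to count as a single audiovisual event under this rule."""
        return abs(audio_onset_ms - visual_onset_ms) <= window_ms

    # Your friend's voice and mouth movements are nearly synchronous;
    # the other speaker's mouth movements are offset from her voice.
    print(is_bound(0.0, 30.0))   # True: small asynchrony, likely bound
    print(is_bound(0.0, 450.0))  # False: large asynchrony, likely kept separate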

Our research aims to understand how the brain takes sensory inputs and puts them together into a coherent, meaningful perception of the world that can then be used to interact successfully with that world. This line of research raises a number of questions that we are attempting to answer, including, but not limited to:

  1. What sensory information is used to determine which inputs are perceptually bound and which are not?
  2. What are the underlying neural mechanisms that support these processes? (One standard formal model of the combination step is sketched below.)
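
For the combination step itself, one standard formal model from the broader multisensory literature (not specific to the papers below) is maximum-likelihood cue combination, in which each sense’s estimate of a shared quantity is weighted by its reliability, that is, the inverse of its variance. A minimal sketch, assuming auditory and visual estimates of, say, a speaker’s location:

    # Maximum-likelihood (reliability-weighted) cue combination.
    # Each estimate is weighted by its inverse variance; the combined
    # variance is never worse than that of either cue alone.
    def ml_combine(est_a: float, var_a: float,
                   est_v: float, var_v: float) -> tuple[float, float]:
        """Combine auditory and visual estimates of the same quantity."""
        w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
        w_v = 1.0 - w_a
        combined = w_a * est_a + w_v * est_v
        combined_var = 1.0 / (1.0 / var_a + 1.0 / var_v)
        return combined, combined_var

    # Vision localizes a speaker more precisely than audition here, so the
    # combined estimate is pulled toward the visual one.
    est, var = ml_combine(est_a=10.0, var_a=4.0, est_v=2.0, var_v=1.0)
    print(est, var)  # -> 3.6 0.8

The key property of this rule is that combining compatible signals reduces uncertainty, which is one reason binding them is useful in the first place.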

For peer-reviewed publications on this line of work, see:

Stevenson, R. A., Sarko, D. K., Nidiffer, A. R., Ghose, D., Krueger Fister, J., Altieri, N. A., Siemann, J. K., James, T. W., & Wallace, M. T. (2014). Identifying and quantifying multisensory integration: A tutorial review. Brain Topography, 27(6), 707-730.

Stevenson, R. A., Wallace, M. T., & Altieri, N. A. (2014). The interaction between stimulus factors and cognitive factors during multisensory integration of audiovisual speech. Frontiers in Psychology, 5(352), 1-3.

Stevenson, R. A., & Wallace, M. T. (2013). Multisensory temporal integration: Stimulus and task dependencies. Experimental Brain Research, 227(2), 249-261.

Stevenson, R. A., Zemtsov, R. K., & Wallace, M. T. (2012). Individual differences in the multisensory temporal binding window predict susceptibility to audiovisual illusions. Journal of Experimental Psychology: Human Perception and Performance, 38(6), 1517-1529.

Stevenson, R. A., Krueger Fister, J., Barnett, Z. P., Nidiffer, A. R., & Wallace, M. T. (2012). Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance. Experimental Brain Research, 219(1), 121-137.

Stevenson, R. A., Bushmakin, M., Kim, S., Puce, A., & James, T. W. (2012). Stimulus effectiveness and multisensory interactions in audiovisual speech event-related potentials. Brain Topography, 25(3), 308-326.

Kim, S., Stevenson, R. A., & James, T. W. (2012). Visuo-haptic neuronal convergence demonstrated by an inversely effective pattern of BOLD activation. Journal of Cognitive Neuroscience, 24(4), 830-842.

James, T. W., Stevenson, R. A., & Kim, S. (2012). Inverse effectiveness in multisensory processing. In B. Stein (Ed.), The New Handbook of Multisensory Processes (Chapter 12, pp. 207-222). Cambridge, MA: MIT Press.

James, T. W., & Stevenson, R. A. (2012). The use of fMRI to assess multisensory integration. In M. Wallace & M. Murray (Eds.), Frontiers in the Neural Bases of Multisensory Processes (Chapter 8, pp. 131-146). London: Taylor & Francis Group.

Stevenson, R. A., VanDerKlok, R. M., Pisoni, D. B., & James, T. W. (2011). Discrete neural substrates underlie complementary audiovisual speech integration processes. Neuroimage, 55, 1339-1345.

James, T. W., VanDerKlok, R. M., Stevenson, R. A., & James, K. H. (2011). Multisensory perception of action in posterior temporal cortex. Neuropsychologia, 49(1), 108-114.

Stevenson, R. A., Altieri, N. A., Kim, S., & James, T. W. (2010). Neural processing of asynchronous audiovisual speech perception. Neuroimage, 49, 3308-3318.

Stevenson, R. A., Kim, S., & James, T. W. (2009). An additive-factors design to disambiguate neuronal and areal convergence: Measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI. Experimental Brain Research, 198, 183-194.

Stevenson, R. A., & James, T. W. (2009). Audiovisual integration in the human superior temporal sulcus: Inverse effectiveness and the neural processing of speech and object recognition. Neuroimage, 44, 1210-1223.

Stevenson, R. A., Geoghegan, M. L., & James, T. W. (2007). Superadditive BOLD activation in superior temporal sulcus with threshold non-speech objects. Experimental Brain Research, 179, 85-95.
