
Overview


The overall aim of our research is to understand how the human brain combines expectations with sensory information. The ability to communicate successfully with other people is an essential everyday skill. Unravelling how the brain derives meaning from acoustic speech signals and recognizes a communication partner from their face is therefore an important scientific endeavour.

Speech recognition depends both on the clarity of the acoustic input and on what we expect to hear. In noisy listening conditions, for example, listeners presented with identical speech input can differ in what they perceive. Similarly, in face recognition, brain responses to faces depend on expectations and do not simply reflect the presented facial features.

These findings on speech and face recognition are consistent with the more general view that perception is an active process in which incoming sensory information is interpreted in light of expectations. The neural mechanisms supporting this integration of sensory signals and expectations, however, remain to be identified, and competing theoretical and computational models have been proposed for how, when, and where expectations and new sensory input are combined.
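
One widely discussed family of such models casts perception as Bayesian inference, in which a prior expectation and noisy sensory evidence are weighted by their reliability. The sketch below is purely illustrative of that general idea and is not our model; the Gaussian assumptions and the example numbers are our own.

import math

def integrate(prior_mean, prior_sd, sensory_mean, sensory_sd):
    """Precision-weighted fusion of a Gaussian prior with a Gaussian likelihood."""
    prior_precision = 1.0 / prior_sd ** 2       # reliability of the expectation
    sensory_precision = 1.0 / sensory_sd ** 2   # reliability of the sensory input
    posterior_precision = prior_precision + sensory_precision
    posterior_mean = (prior_precision * prior_mean +
                      sensory_precision * sensory_mean) / posterior_precision
    return posterior_mean, math.sqrt(1.0 / posterior_precision)

# Clear listening conditions: reliable input dominates the percept.
print(integrate(prior_mean=0.0, prior_sd=1.0, sensory_mean=2.0, sensory_sd=0.2))
# Noisy listening conditions: the percept is drawn toward the expectation,
# so identical input can be perceived differently under different priors.
print(integrate(prior_mean=0.0, prior_sd=1.0, sensory_mean=2.0, sensory_sd=3.0))

In this formulation the weighting is automatic: the noisier the sensory input, the more strongly the expectation shapes the final estimate, which is one way to account for listeners perceiving the same degraded speech differently.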