An adaptive attention mechanism is essential for an autonomous robot operating in real-world environments. In this paper we present a novel cognitive architecture that enables integrated and efficient filtering of multimodal sensory information. The proposed attention mechanism is based on contexts that determine which sensorimotor data are relevant to the current situation. These contexts serve as a means to adaptively select a constrained cognitive focus within the vast multimodal sensory space. In this framework, the focus of attention can be directed to meaningful complex percepts, thus allowing the implementation of higher cognitive capabilities. Sonar, contact, and visual sensory modalities are used in the perception process, and the motor capability of the physical agent is provided by a differential wheel drive system. Testing of this artificial attention approach, carried out initially in the domain of counterpart recognition and chasing, has demonstrated both a substantial decrease in computational requirements and ease of multimodal integration for cognitive representations.
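As a rough illustration of the context-based filtering idea described above, the sketch below shows how a context could name the sensory channels relevant to the current situation and restrict processing to a salience-ranked subset of them. All names, the weighting scheme, and the channel budget are hypothetical assumptions for illustration, not details taken from the paper.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical types: Context and AttentionFilter are illustrative,
# not part of the architecture described in the paper.

@dataclass
class Context:
    """A context names the sensorimotor channels relevant to a
    situation, with a per-channel salience weight used for ranking."""
    name: str
    relevant: Dict[str, float]  # modality name -> salience weight

@dataclass
class AttentionFilter:
    context: Context

    def focus(self, readings: Dict[str, List[float]],
              budget: int) -> Dict[str, List[float]]:
        """Keep only channels named by the active context, ranked by
        salience, up to a fixed processing budget of channels."""
        ranked = sorted(
            (m for m in readings if m in self.context.relevant),
            key=lambda m: self.context.relevant[m],
            reverse=True,
        )
        return {m: readings[m] for m in ranked[:budget]}

# Example: a hypothetical "chasing" context ignores contact sensors
# entirely and prioritises vision over sonar.
chasing = Context("chasing", {"vision": 0.9, "sonar": 0.6})
filt = AttentionFilter(chasing)
readings = {"vision": [0.2, 0.8], "sonar": [1.4], "contact": [0.0]}
print(filt.focus(readings, budget=2))  # contact channel is filtered out
```

The point of the sketch is only that switching the active context changes which channels reach later cognitive processing, which is the mechanism the abstract credits for the reduction in computational requirements.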