In this study we present the results of evaluating the sonification protocol of a new assistive product that helps visually impaired users perceive their surroundings through sounds organized into different cognitive profiles. The evaluation was carried out with 17 sighted and 11 visually impaired participants. The experiment spanned both virtual and real environments and comprised four virtual-reality-based tests and one real-life test. Finally, four participants became experts through longer, more intensive training and then took part in a focus group at the end of the process. Both quantitative and qualitative results showed that the proposed system can effectively represent the spatial configuration of objects through sounds. However, important limitations were identified: the composition of the sample (several key demographic characteristics are intercorrelated, preventing disaggregated analysis), the usability of the most complex profile, and the particular difficulties faced by totally blind participants relative to sighted and low-vision ones.