Endowing artificial conversational agents with personality is a promising way to obtain more believable user interactions with robots and computers. However, although many authors have studied how to create an agent's personality and how it affects performance and user satisfaction, less attention has been paid to assessing whether the designed personality matches the users' perception, whether it is easily recognizable, and what effect the user's own personality has on the discrimination of the agent's personality. In this paper we present an assessment framework that addresses these issues in an integrated way and that, in our opinion, offers enough flexibility to accommodate the diversity of application domains and evaluation approaches found in the literature. The framework is based on numerical measures, which facilitate the interpretation of results and make it possible to compare and rank different agents with respect to the user's perception of the rendered personality. In addition, we have developed a tool that implements the framework, which may be very useful for researchers who wish to easily evaluate different agent personalities. (C) 2014 Elsevier Ltd. All rights reserved.