Electronic institutions and neural computing providing law-compliance privacy for trusting agents

publication date

  • November 2016

International Standard Serial Number (ISSN)

  • 1570-8683

abstract

  • In this paper we present an integral solution for law-compliant privacy protection in trust models for agent systems. Several privacy issues arise in trust relationships. Specifically, we define which privacy rights must legally be guaranteed in trusting communities of agents. From these rights, we derive the additional interaction protocols required to implement such guarantees. Next, we apply the additional message exchanges to a specific application domain (the Agent Trust and Reputation testbed) using the JADE agent platform. The decision about when to apply these control mechanisms (i.e., when to launch the corresponding JADE protocol) is carried out efficiently by neural computing, which uses the past behavior of agents to decide (classify) which agents are worth sharing privacy with, and determines how many past interactions should be taken into account. Furthermore, we also enumerate the privacy violations that would take place if these control mechanisms (in the form of interaction protocols) were ignored or misused. Given the possible existence of privacy violations, a regulatory structure is required to address (prevent and repair) the corresponding harmful consequences. We use Islander (an electronic institution editor) to formally define the scenes where privacy violations may occur, together with the ways to repair them: the defeasible actions that could voluntarily reduce or eliminate the privacy damage, and the obligations that the electronic institution would impose as penalties.
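
  • The decision step summarized above (a neural classifier over an agent's recent behavior that decides whether to launch the privacy-sharing protocol) can be illustrated with the minimal, self-contained sketch below. This is not the authors' implementation, which runs as JADE interaction protocols; the dataset generator, the window of 10 past interactions, the network size, and the decision threshold in make_dataset, train_mlp, and should_share are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n_agents=200, window=10):
    """Each sample holds an agent's last `window` interaction outcomes
    (1 = behaved honestly, 0 = did not); the label says whether that agent
    should be trusted with private information."""
    honesty = rng.uniform(size=n_agents)                        # latent per-agent honesty
    X = (rng.uniform(size=(n_agents, window)) < honesty[:, None]).astype(float)
    y = (honesty > 0.6).astype(float)                           # illustrative trust threshold
    return X, y

def train_mlp(X, y, hidden=8, epochs=2000, lr=1.0):
    """Tiny one-hidden-layer network trained by full-batch gradient descent."""
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=hidden);      b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))                # P(agent is worth sharing with)
        grad_out = (p - y) / n                                  # sigmoid + cross-entropy gradient
        grad_h = np.outer(grad_out, W2) * (1.0 - h ** 2)        # back-propagate through tanh layer
        W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum()
        W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)
    return W1, b1, W2, b2

def should_share(history, params, threshold=0.5):
    """Classify one agent's recent history: launch the privacy-sharing
    protocol only if the predicted probability exceeds the threshold."""
    W1, b1, W2, b2 = params
    h = np.tanh(np.asarray(history, dtype=float) @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return bool(p >= threshold)

X, y = make_dataset()
params = train_mlp(X, y)
print(should_share([1, 1, 1, 0, 1, 1, 1, 1, 0, 1], params))    # mostly honest history
print(should_share([0, 0, 1, 0, 0, 0, 1, 0, 0, 0], params))    # mostly dishonest history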