Classification is used in a wide range of applications to determine the class of a new element; for example, it can be used to decide whether an object is a pedestrian based on images captured by the safety sensors of a vehicle. Classifiers are commonly implemented using electronic components and are thus subject to errors in memories and combinational logic. Since classifiers are sometimes used in safety-critical applications, they must operate reliably, and there is therefore a need to protect them against errors. The k Nearest Neighbors (kNN) classifier is a simple yet powerful algorithm that is widely used; its protection against errors in the neighbor computations has recently been studied. This paper considers the protection of kNN classifiers against errors in the memory that stores the dataset used to select the neighbors. Initially, the effects of errors in the most common memory configurations (unprotected, Parity-Check protected, and Single Error Correction-Double Error Detection (SEC-DED) protected) are assessed. The results show that, surprisingly, for most datasets it is better in terms of error tolerance to leave the memory unprotected than to use an error detection code and discard the element affected by an error. This observation is then leveraged to develop Less-is-Better Protection (LBP), a technique that requires no additional parity bits yet achieves better error tolerance than Parity-Check for single bit errors (reducing classification errors by 59% for the Iris dataset) and than SEC-DED codes for double bit errors (reducing classification errors by 42% for the Iris dataset).
classification; memories; error tolerance; k nearest neighbors; error control codes
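The two memory policies compared in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the dataset values, the flipped bit position, and the helper names are hypothetical, chosen only to show a kNN majority vote evaluated under (a) no protection, where a corrupted stored element stays in the dataset, and (b) a Parity-Check-style policy, where the affected element is detected and discarded before voting.

```python
# Hypothetical sketch: kNN under two memory-error policies.
from collections import Counter

def knn_predict(dataset, labels, x, k=3):
    # Rank stored elements by squared Euclidean distance, then majority vote.
    order = sorted(range(len(dataset)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(dataset[i], x)))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

def flip_bit(value, bit):
    # Model a single bit error in an integer-coded stored attribute.
    return value ^ (1 << bit)

# Synthetic two-class integer dataset (illustrative values only).
data = [(1, 1), (2, 1), (1, 2), (8, 8), (9, 8), (8, 9)]
labels = ['A', 'A', 'A', 'B', 'B', 'B']
query = (2, 2)

# Error-free prediction.
clean = knn_predict(data, labels, query)

# Policy (a): unprotected memory keeps the corrupted element.
corrupted = list(data)
corrupted[3] = (flip_bit(corrupted[3][0], 0), corrupted[3][1])
keep = knn_predict(corrupted, labels, query)

# Policy (b): Parity-Check-style detection discards the flagged element.
kept_idx = [i for i in range(len(data)) if i != 3]
drop = knn_predict([data[i] for i in kept_idx],
                   [labels[i] for i in kept_idx], query)

print(clean, keep, drop)
```

In this toy case both policies classify the query correctly; the paper's point is that over many queries and datasets, discarding elements can remove useful neighbors and end up causing more classification errors than simply keeping the corrupted value.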