Class and subclass probability re-estimation to adapt a classifier in the presence of concept drift

publication date

  • September 2011

start page

  • 2614

end page

  • 2623

issue

  • 16

volume

  • 74

international standard serial number (ISSN)

  • 0925-2312

electronic international standard serial number (EISSN)

  • 1872-8286

abstract

  • We consider the problem of classification in environments where training and test data may come from different probability distributions. When the stationary distribution assumption underlying supervised learning (and often not satisfied in practice) does not hold, classifier performance may deteriorate significantly. Several proposals have been made to deal with classification problems where the class priors change after training, but they may fail when the class-conditional data densities also change. To cope with this problem, we propose an algorithm that uses unlabeled test data to adapt the classifier outputs to new operating conditions, without re-training the classifier. The algorithm is based on a posterior probability model with two main assumptions: (1) the classes may be decomposed into several (unknown) subclasses, and (2) all changes in the data distributions arise from changes in the prior subclass probabilities. Experimental results with a neural network model on synthetic and practical remote sensing settings show that adaptation at the subclass level achieves a better adjustment to the new operating conditions than methods based on class prior changes alone.
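
The abstract does not spell out the re-estimation procedure. As a rough illustration only, the sketch below assumes the subclass priors are re-estimated from unlabeled test outputs with the classic EM prior-adjustment scheme (in the style of Saerens et al.), applied at the subclass level; the function names, the two-subclass setup, and the Gaussian toy data are all hypothetical, not taken from the paper.

```python
import numpy as np

def adjust_subclass_priors(p_train, P_sub, n_iter=200, tol=1e-8):
    """EM re-estimation of subclass priors from unlabeled test data.

    p_train : (K,) subclass priors assumed during training
    P_sub   : (N, K) classifier subclass posteriors on unlabeled test samples
    Returns (p_new, P_adj): re-estimated priors and adjusted posteriors.
    """
    p_new = p_train.copy()
    for _ in range(n_iter):
        # E-step: re-weight each posterior by the ratio of new to old priors,
        # then renormalize so each row is again a probability distribution.
        w = P_sub * (p_new / p_train)
        P_adj = w / w.sum(axis=1, keepdims=True)
        # M-step: the new priors are the average adjusted posteriors.
        p_next = P_adj.mean(axis=0)
        if np.max(np.abs(p_next - p_new)) < tol:
            p_new = p_next
            break
        p_new = p_next
    w = P_sub * (p_new / p_train)
    P_adj = w / w.sum(axis=1, keepdims=True)
    return p_new, P_adj

# Toy check: two 1-D Gaussian subclasses whose priors drift after training.
rng = np.random.default_rng(0)
p_train = np.array([0.5, 0.5])      # priors assumed at training time
p_test = np.array([0.8, 0.2])       # true (drifted) test priors
mus = np.array([-1.0, 1.0])
n = 2000
labels = rng.random(n) < p_test[1]
x = rng.normal(mus[labels.astype(int)], 1.0)
# Classifier outputs computed with the *training* priors (hence miscalibrated).
lik = np.exp(-0.5 * (x[:, None] - mus[None, :]) ** 2)
P_sub = lik * p_train
P_sub /= P_sub.sum(axis=1, keepdims=True)

p_new, P_adj = adjust_subclass_priors(p_train, P_sub)
```

Once the subclass posteriors are adjusted, class-level posteriors would be obtained by summing the columns of `P_adj` belonging to each class; the same machinery then covers drift in the class-conditional densities, since density changes are modeled as shifts in subclass proportions.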