Marginalized Neural Network Mixtures for Large-Scale Regression

authors

  • Lázaro-Gredilla, Miguel
  • Figueiras-Vidal, Aníbal Ramón

publication date

  • August 2010

start page

  • 1345

end page

  • 1351

issue

  • 8

volume

  • 21

International Standard Serial Number (ISSN)

  • 2162-237X

Electronic International Standard Serial Number (EISSN)

  • 2162-2388

abstract

  • For regression tasks, traditional neural networks (NNs) have been
    superseded by Gaussian processes, which provide probabilistic
    predictions (input-dependent error bars), improved accuracy, and
    virtually no overfitting. Due to their high computational cost, in
    scenarios with massive data sets, one has to resort to sparse Gaussian
    processes, which strive to achieve similar performance with much
    smaller computational effort. In this context, we introduce a mixture
    of NNs with marginalized output weights that can both provide
    probabilistic predictions and improve on the performance of sparse
    Gaussian processes, at the same computational cost. The effectiveness
    of this approach is shown experimentally on some representative large
    data sets.
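
As a rough illustration of the idea in the abstract (not the authors' exact
construction), marginalizing the output weights of a network with fixed hidden
units under a Gaussian prior amounts to Bayesian linear regression on the
hidden-layer activations, which gives closed-form probabilistic predictions at
a cost linear in the number of training points. The sketch below is a minimal
single-component example; the function hidden and the parameters noise_var and
prior_var are illustrative names introduced here, not taken from the paper.

    import numpy as np

    def marginalized_nn_predict(X_train, y_train, X_test, hidden,
                                noise_var=0.1, prior_var=1.0):
        # Bayesian linear regression on the hidden-layer activations:
        # with the output weights marginalized under a zero-mean Gaussian
        # prior, the predictive mean and variance are available in closed
        # form at O(n m^2 + m^3) cost, m being the number of hidden units.
        Phi = hidden(X_train)                      # (n, m) design matrix
        Phi_star = hidden(X_test)                  # (n_*, m)
        m = Phi.shape[1]
        A = Phi.T @ Phi / noise_var + np.eye(m) / prior_var
        Sigma = np.linalg.inv(A)                   # posterior covariance of output weights
        mu = Sigma @ Phi.T @ y_train / noise_var   # posterior mean of output weights
        mean = Phi_star @ mu
        var = noise_var + np.sum((Phi_star @ Sigma) * Phi_star, axis=1)
        return mean, var                           # input-dependent error bars

    # Usage with one component built from random tanh hidden units
    # (a hypothetical choice made only for this illustration).
    rng = np.random.default_rng(0)
    W = rng.normal(size=(1, 20))
    b = rng.normal(size=20)
    hidden = lambda X: np.tanh(X @ W + b)

    X = rng.uniform(-3.0, 3.0, size=(500, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
    X_star = np.linspace(-3.0, 3.0, 50)[:, None]
    mean, var = marginalized_nn_predict(X, y, X_star, hidden)

In the paper itself, several such networks are combined into a mixture and
their hidden-layer parameters are fitted to the data; those details are not
reproduced in this sketch.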