Laplace Approximation for Divisive Gaussian Processes for Nonstationary Regression

publication date

  • March 2016

start page

  • 618

end page

  • 624

issue

  • 3

volume

  • 38

International Standard Serial Number (ISSN)

  • 0162-8828

Electronic International Standard Serial Number (EISSN)

  • 1939-3539

abstract

  • Standard Gaussian Process (GP) regression is usually formulated under stationarity assumptions: the noise power is taken to be constant throughout the input space, and the covariance of the prior distribution is typically modeled as depending only on the difference between input samples. These assumptions can be too restrictive and unrealistic for many real-world problems. Although nonstationarity can be achieved with specific covariance functions, these require prior knowledge of the kind of nonstationarity, which is not available in most applications. In this paper we propose to use the Laplace approximation to perform inference in a divisive GP model for nonstationary regression, including cases with heteroscedastic noise. The log-concavity of the likelihood ensures a unimodal posterior, so the mode-finding step of the Laplace approximation converges to a unique maximum. The characteristics of the likelihood also yield posterior approximations whose accuracy is comparable to that of Expectation Propagation (EP) and of the asymptotically exact posterior provided by a Markov Chain Monte Carlo implementation with Elliptical Slice Sampling (ESS), at a reduced computational load with respect to both EP and ESS.
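The core computation in a Laplace approximation for GP models is finding the posterior mode of the latent function by Newton's method, then approximating the posterior as a Gaussian centered at that mode. The sketch below illustrates the generic Newton mode-finding step; it is not the paper's divisive likelihood, and for simplicity it uses a Gaussian likelihood (for which the mode is available in closed form and the Laplace approximation is exact), with an RBF kernel and hyperparameter values chosen only for illustration.

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between 1-D inputs."""
    d2 = (x[:, None] - y[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def laplace_mode(K, y, noise_var=0.1, iters=50, tol=1e-10):
    """Newton iterations to locate the mode of p(f | y) ∝ N(f; 0, K) p(y | f).

    For a log-concave likelihood (as in the paper's divisive model) the
    log posterior is unimodal and this scheme converges to its unique
    maximum. Here the likelihood is Gaussian, so grad and W are simple;
    a different likelihood only changes those two lines.
    """
    n = len(y)
    f = np.zeros(n)
    for _ in range(iters):
        grad = (y - f) / noise_var        # d log p(y|f) / df
        W = np.eye(n) / noise_var         # -d^2 log p(y|f) / df^2
        # Newton update: f_new = K (I + W K)^{-1} (W f + grad)
        f_new = K @ np.linalg.solve(np.eye(n) + W @ K, W @ f + grad)
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    return f
```

With a Gaussian likelihood the fixed point coincides with the standard GP posterior mean K (K + sigma^2 I)^{-1} y, which gives a quick sanity check of the iteration.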

keywords

  • Gaussian processes; nonstationary regression; Laplace approximation; heteroscedastic regression