As you may have already seen, we can specify a kernel function like this (the same holds for mean functions):
k = pyGPs.cov.RBF( log_ell=-1., log_sigma=0. )
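Hyperparameters are passed in the log domain: log_ell is the log length-scale and log_sigma the log signal standard deviation. A minimal NumPy sketch of what this parameterization means (the helper rbf below is illustrative only, not part of pyGPs):

```python
import numpy as np

def rbf(x1, x2, log_ell=-1., log_sigma=0.):
    """Illustrative RBF (squared-exponential) kernel:
    k(x, x') = sigma^2 * exp(-(x - x')^2 / (2 * ell^2)),
    with ell = exp(log_ell) and sigma = exp(log_sigma)."""
    ell = np.exp(log_ell)      # length-scale, always positive
    sigma = np.exp(log_sigma)  # signal standard deviation, always positive
    return sigma**2 * np.exp(-(x1 - x2)**2 / (2. * ell**2))

# At zero distance the kernel equals the signal variance sigma^2 = exp(0)^2 = 1.
print(rbf(0.0, 0.0))  # -> 1.0
```

Storing hyperparameters as logarithms means the optimizer can search over all real values while the actual length-scale and variance remain positive.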
There are several points to notice:
For some kernels/means, the number of hyperparameters depends on the dimension of the input data. You can either enter the dimension, in which case default hyperparameter values are used:
m = pyGPs.mean.Linear( D=x.shape[1] )
or you can initialize with the exact hyperparameters. In that case, pass them as a list with one element per input dimension:
m = pyGPs.mean.Linear( alpha_list=[0.2, 0.4, 0.3] )
For pyGPs.cov.RBFunit(), the signal variance is always 1 (because of the unit magnitude). Therefore this function does not have a “signal variance” hyperparameter.
Explicitly setting pyGPs.cov.Noise is not necessary, because noise is already added in the likelihood function.
Adding and multiplying kernels (and means) is simple:
k = pyGPs.cov.Linear() * pyGPs.cov.RBF()
k = 0.5 * pyGPs.cov.Linear() + pyGPs.cov.RBF()
A scalar will also be treated as a hyperparameter. For example, if k = s1 * k1 + s2 * k2, then the list of hyperparameters is hyp = [s1, k1.hyp, s2, k2.hyp]. The scalar is passed in the logarithm domain so that it always stays positive during optimization.
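As a sketch of how such a composite is evaluated at the kernel-matrix level (plain NumPy; the matrices K1 and K2 below are hypothetical stand-ins for whatever k1 and k2 would produce from their own hyperparameters):

```python
import numpy as np

# Scalars are stored in the log domain inside hyp, so exponentiating
# them during evaluation keeps the effective weights positive.
log_s1, log_s2 = np.log(0.5), np.log(2.0)

# Stand-in kernel matrices for k1 and k2 on three training points.
K1 = np.eye(3)
K2 = np.ones((3, 3))

# k = s1 * k1 + s2 * k2 becomes an elementwise combination of matrices.
K = np.exp(log_s1) * K1 + np.exp(log_s2) * K2
print(K[0, 0])  # 0.5 * 1 + 2.0 * 1 = 2.5
```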
Besides + and *, there is also a power operator for mean functions:
m = ( pyGPs.mean.One() + pyGPs.mean.Linear(alpha_list=[0.2]) )**2
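For a 1-D input, the composed mean above evaluates to (1 + 0.2*x)**2; a quick NumPy sketch of that arithmetic (illustrative, not pyGPs code):

```python
import numpy as np

def composed_mean(x, alpha=0.2):
    """(One + Linear)**2 for 1-D inputs: m(x) = (1 + alpha * x)**2."""
    return (1.0 + alpha * x)**2

x = np.array([0.0, 5.0])
print(composed_mean(x))  # [1. 4.]
```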
In certain cases, you may have a precomputed kernel matrix, but it is non-trivial to write down the exact formula of the kernel function. You can then specify your kernel in the following way:
k = pyGPs.cov.Pre(M1, M2)
where M1 and M2 are your precomputed kernel matrices.
A precomputed kernel can also be composited with other kernels in the same way as ordinary kernel functions; note that you need to explicitly add a scalar for pyGPs.cov.Pre():
k = 0.5*pyGPs.cov.Pre(M1, M2) + pyGPs.cov.RBF()
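At the matrix level, this composite simply combines the precomputed matrix with the evaluated RBF matrix elementwise. A hedged NumPy sketch (M and rbf_matrix below are hypothetical stand-ins, not the pyGPs classes):

```python
import numpy as np

# Stand-in for a precomputed train-by-train kernel matrix.
M = np.array([[1.0, 0.3],
              [0.3, 1.0]])

def rbf_matrix(X, log_ell=0., log_sigma=0.):
    """RBF kernel matrix on 1-D inputs X (illustrative, not pyGPs)."""
    ell, sigma = np.exp(log_ell), np.exp(log_sigma)
    d2 = (X[:, None] - X[None, :])**2          # squared pairwise distances
    return sigma**2 * np.exp(-d2 / (2. * ell**2))

X = np.array([0.0, 1.0])
K = 0.5 * M + rbf_matrix(X)   # mirrors 0.5*cov.Pre(...) + cov.RBF()
print(K[0, 0])  # 0.5 * 1.0 + 1.0 = 1.5
```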