Compare the performance of conventional control barrier functions (CBFs) against CBFs learned via Gaussian processes. Incorporate "smart polling" into learning-based GP-CBFs by intelligently selecting the next point to explore: choose a nearby point of high covariance.
Paper on GIBO: https://arxiv.org/pdf/2106.11899
- GIBO allows us to poll points of high covariance, gaining information about our function more effectively.
- However, polling can become locally "trapped" when step sizes are small.
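The covariance-based polling idea can be sketched in a few lines: among candidate points near the current iterate, pick the one with the highest GP posterior standard deviation. The toy objective, polling radius, and candidate count below are illustrative assumptions, not the GIBO paper's actual procedure.

```python
# Sketch of "smart polling": poll the nearby candidate with the highest
# posterior variance under the current GP. All numbers here are toy choices.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * x          # toy objective (assumption)
X = rng.uniform(-2, 2, size=(6, 1))            # points polled so far
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

theta = np.array([[0.5]])                      # current iterate
radius = 0.5                                   # local polling radius (assumption)
candidates = theta + rng.uniform(-radius, radius, size=(50, 1))
_, std = gp.predict(candidates, return_std=True)
x_next = candidates[np.argmax(std)]            # most uncertain nearby point
print(x_next)
```

Restricting candidates to a small radius is exactly what makes the polling local, which connects to the trapping issue noted above.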
GP regression on a 1D function. Implement experiments with at least three kernels (e.g., squared exponential and Matérn kernels). For each kernel, fit the hyperparameters, then compare the posterior mean and predictive variance.
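A minimal version of this experiment in sklearn, assuming a toy 1-D function and noise level (the actual Stage 1 target function is not specified here). `fit` maximizes the log marginal likelihood to set each kernel's hyperparameters:

```python
# Fit a GP with three kernels (RBF ~ squared exponential, Matern 3/2,
# Matern 5/2) on a toy 1-D function; compare posterior mean and std.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern

rng = np.random.default_rng(1)
f = lambda x: np.sin(4 * x) + 0.3 * x          # toy 1-D function (assumption)
X = rng.uniform(-2, 2, size=(20, 1))
y = f(X).ravel() + 0.1 * rng.standard_normal(20)
X_test = np.linspace(-2, 2, 100).reshape(-1, 1)

kernels = {"SE": RBF(), "Matern32": Matern(nu=1.5), "Matern52": Matern(nu=2.5)}
for name, k in kernels.items():
    gp = GaussianProcessRegressor(kernel=k, alpha=0.1**2, normalize_y=True)
    gp.fit(X, y)  # hyperparameters fit by maximizing log marginal likelihood
    mean, std = gp.predict(X_test, return_std=True)
    print(name, gp.log_marginal_likelihood_value_, std.mean())
```

Comparing the fitted log marginal likelihoods gives one principled way to rank the kernels, alongside the loss metric used in the notes.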
Deliverables/Notes:
- Plots are under GP_Example_Plots (GP_Example_Plots/Stage1_Plots.png).
- We saw the strongest performance from Linear + Matérn (measured by lowest loss). This makes sense: the linear kernel accounts for the linear bias in our underlying function.
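In sklearn the Linear + Matérn combination can be built directly, since kernels compose with `+` and the linear kernel is `DotProduct`. The toy data below (a linear trend plus a sinusoid) is an assumption chosen to mirror the "linear bias" described above:

```python
# Sum kernel: DotProduct (linear) captures the trend, Matern the local wiggle.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, Matern

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(25, 1))
y = 0.8 * X.ravel() + np.sin(3 * X.ravel())    # linear bias + oscillation

kernel = DotProduct() + Matern(nu=2.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
print(gp.kernel_)                              # fitted hyperparameters
```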

Kernel smoothness.
- Matérn 5/2 sample paths are twice differentiable, while the squared exponential kernel gives infinitely differentiable sample paths. In the image below, I cranked up the rate parameter of the initial linear scale. Matérn 5/2 was much noisier around the edges than the SE kernel, suggesting that more-differentiable kernels produce smoother, less noisy fits.
- Another note: the final loss of Matérn 5/2 is slightly higher, indicating slightly worse performance.
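The smoothness difference can be checked numerically by drawing prior samples from each kernel at the same length scale and comparing squared second differences (a crude curvature/roughness proxy). This is purely illustrative; the grid, seed, and roughness measure are my assumptions:

```python
# Draw GP prior samples and compare roughness: Matern 5/2 paths should be
# rougher than SE paths at the same length scale.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern

X = np.linspace(0, 5, 200).reshape(-1, 1)
roughness = {}
for name, k in [("SE", RBF(length_scale=1.0)),
                ("Matern52", Matern(length_scale=1.0, nu=2.5))]:
    gp = GaussianProcessRegressor(kernel=k)       # unfitted -> samples the prior
    samples = gp.sample_y(X, n_samples=5, random_state=3)
    roughness[name] = np.mean(np.diff(samples, n=2, axis=0) ** 2)
print(roughness)
```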

- GIBO tends to get stuck in local regions. We can mitigate this by changing the step size used to update theta_{t+1}, but this was a recurring issue.
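The update in question is a gradient-ascent step theta_{t+1} = theta_t + eta * grad mu(theta_t) on the GP posterior mean, where the step size eta controls how far each iterate moves. The sketch below estimates the gradient by finite differences on the posterior mean for simplicity; GIBO itself uses the GP's posterior over the gradient, and the objective, eta, and iteration count are assumptions:

```python
# Gradient ascent on the GP posterior mean: small eta moves slowly and can
# stall in a local region, which is the trapping behavior noted above.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
f = lambda x: -(x - 1.0) ** 2                  # toy objective, max at x = 1
X = rng.uniform(-2, 2, size=(15, 1))
gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, f(X).ravel())

mu = lambda x: gp.predict(np.array([[x]]))[0]  # posterior mean
theta, eta, h = -1.0, 0.2, 1e-4                # start, step size, FD width
for _ in range(50):
    grad = (mu(theta + h) - mu(theta - h)) / (2 * h)
    theta = theta + eta * grad                 # theta_{t+1} = theta_t + eta*grad
print(theta)
```

Rerunning with a smaller eta (e.g., 0.01) shows the iterate barely moving within 50 steps, which matches the trapping observation.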