If nIters is set high enough, this process can run for a very long time. However, if you were to look at the utility of each iteration, you would probably notice diminishing returns. Once the parameter space has been thoroughly explored, the utility tends to converge to a certain value, depending on the acquisition function used. The example below shows that, as time goes on, the expected improvement converges towards 0.
By default, the BayesianOptimization function displays its progress in the viewer (or in the browser, if not using RStudio).
require(ParBayesianOptimization)
require(data.table)

# A multi-modal test function built from three normal densities
sf <- function(x) dnorm(x, 3, 2) * 1.5 + dnorm(x, 7, 1) + dnorm(x, 10, 2)

# The scoring function must return a list containing a Score element
ScoringFunction <- function(x) {
  return(list(Score = sf(x)))
}

bounds <- list(x = c(0, 15))

Results <- BayesianOptimization(
    FUN = ScoringFunction
  , bounds = bounds
  , initGrid = data.table(x = c(1, 4, 8, 12))
  , bulkNew = 1
  , nIters = 12
  , acq = "ei"
  , gsPoints = 10
  , minClusterUtility = 1
  , plotProgress = TRUE
)
This progress plot is updated at each iteration:
Notice how the expected improvement eventually converges to 0 (the utility shown in the graph is scaled by y_max). If you notice that your utility has converged, it may be a good time to stop the process manually. Keep in mind that if you stop the process manually, results will not be returned, so it is important to use the saveIntermediate field if you plan on doing this.
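As a sketch of that workflow: the file name below is arbitrary, and the snippet assumes saveIntermediate writes the intermediate results to disk as an RDS file each iteration (check the package documentation for the exact format). With it set, a manually interrupted run leaves the finished iterations recoverable:

```r
require(ParBayesianOptimization)
require(data.table)

sf <- function(x) dnorm(x, 3, 2) * 1.5 + dnorm(x, 7, 1) + dnorm(x, 10, 2)
ScoringFunction <- function(x) list(Score = sf(x))

# saveIntermediate persists progress to disk, so stopping the run
# manually does not lose the iterations that already completed.
Results <- BayesianOptimization(
    FUN = ScoringFunction
  , bounds = list(x = c(0, 15))
  , initGrid = data.table(x = c(1, 4, 8, 12))
  , nIters = 12
  , acq = "ei"
  , saveIntermediate = "bayesOpt_progress.RDS"  # example path, written each iteration
  , plotProgress = TRUE
)

# After an interrupted run, recover whatever was saved so far:
intermediate <- readRDS("bayesOpt_progress.RDS")
```

This way, converged runs can be cut short without throwing away the exploration already performed.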