Acquisition Functions
This page provides information on and references for the available acquisition functions.
Info
If the acquisition function requires the best_f parameter, simply set it to 0.0; Odyssey will find the current best observation by itself.
Info
The model, posterior_transform, and maximize parameters can be ignored.
ExpectedImprovement
Refer to BoTorch ExpectedImprovement.
ExpectedImprovement
Bases: AnalyticAcquisitionFunction
Single-outcome Expected Improvement (analytic).
Computes classic Expected Improvement over the current best observed value, using the analytic formula for a Normal posterior distribution. Unlike the MC-based acquisition functions, this relies on the posterior at a single test point being Gaussian (and requires the posterior to implement mean and variance properties). Only supports the case of q=1. The model must be single-outcome.

EI(x) = E(max(f(x) - best_f, 0)),

where the expectation is taken over the value of the stochastic function f at x.
Example
```python
model = SingleTaskGP(train_X, train_Y)
EI = ExpectedImprovement(model, best_f=0.2)
ei = EI(test_X)
```
NOTE: It is strongly recommended to use LogExpectedImprovement instead of regular EI, because it solves the vanishing gradient problem by taking special care of numerical computations and can lead to substantially improved BO performance.
__init__
__init__(model: Model, best_f: Union[float, Tensor], posterior_transform: Optional[PosteriorTransform] = None, maximize: bool = True)
Single-outcome Expected Improvement (analytic).
Parameters:
- model (Model) – A fitted single-outcome model.
- best_f (Union[float, Tensor]) – Either a scalar or a b-dim Tensor (batch mode) representing the best function value observed so far (assumed noiseless).
- posterior_transform (Optional[PosteriorTransform], default: None) – A PosteriorTransform. If using a multi-output model, a PosteriorTransform that transforms the multi-output posterior into a single-output posterior is required.
- maximize (bool, default: True) – If True, consider the problem a maximization problem.
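The docstring example above assumes train_X, train_Y, and test_X already exist. As a more complete illustration, here is a self-contained sketch of one EI-driven candidate-selection step in plain BoTorch; the toy objective, bounds, and optimizer settings (optimize_acqf with q=1) are illustrative assumptions, not part of the original documentation.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy data: 10 random points in [0, 1]^2 and a simple quadratic objective
# (an assumption for this sketch).
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = -((train_X - 0.5) ** 2).sum(dim=-1, keepdim=True)

model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# best_f is the incumbent; in Odyssey this can be left at 0.0 (see the Info box above).
EI = ExpectedImprovement(model, best_f=train_Y.max())

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, acq_value = optimize_acqf(
    EI, bounds=bounds, q=1, num_restarts=5, raw_samples=64
)
print(candidate, acq_value)
```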
LogExpectedImprovement
Refer to BoTorch LogExpectedImprovement.
LogExpectedImprovement
Bases: AnalyticAcquisitionFunction
Logarithm of single-outcome Expected Improvement (analytic).
Computes the logarithm of the classic Expected Improvement acquisition function, in a numerically robust manner. In particular, the implementation takes special care to avoid numerical issues in the computation of the acquisition value and its gradient in regions where improvement is predicted to be virtually impossible.
See [Ament2023logei] for details. Formally,

LogEI(x) = log(E(max(f(x) - best_f, 0))),

where the expectation is taken over the value of the stochastic function f at x.
Example
```python
model = SingleTaskGP(train_X, train_Y)
LogEI = LogExpectedImprovement(model, best_f=0.2)
ei = LogEI(test_X)
```
__init__
__init__(model: Model, best_f: Union[float, Tensor], posterior_transform: Optional[PosteriorTransform] = None, maximize: bool = True)
Logarithm of single-outcome Expected Improvement (analytic).
Parameters:
- model (Model) – A fitted single-outcome model.
- best_f (Union[float, Tensor]) – Either a scalar or a b-dim Tensor (batch mode) representing the best function value observed so far (assumed noiseless).
- posterior_transform (Optional[PosteriorTransform], default: None) – A PosteriorTransform. If using a multi-output model, a PosteriorTransform that transforms the multi-output posterior into a single-output posterior is required.
- maximize (bool, default: True) – If True, consider the problem a maximization problem.
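The vanishing-gradient issue noted in the EI section can be seen numerically. Below is a minimal sketch (not from the original docs; the toy data and the deliberately extreme best_f are assumptions) comparing log(EI(x)) with LogEI(x) in a region where improvement is predicted to be virtually impossible:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import ExpectedImprovement, LogExpectedImprovement

train_X = torch.linspace(0, 1, 8, dtype=torch.double).unsqueeze(-1)
train_Y = torch.sin(6 * train_X)
model = SingleTaskGP(train_X, train_Y)

# A best_f far above anything the model predicts, so improvement is
# essentially impossible at every test point.
EI = ExpectedImprovement(model, best_f=100.0)
LogEI = LogExpectedImprovement(model, best_f=100.0)

test_X = torch.rand(5, 1, 1, dtype=torch.double)  # 5 batches of q=1 points
print(EI(test_X).log())  # typically underflows to -inf (EI evaluates to 0)
print(LogEI(test_X))     # large negative, but finite and differentiable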
ProbabilityOfImprovement
Refer to BoTorch ProbabilityOfImprovement.
ProbabilityOfImprovement
Bases: AnalyticAcquisitionFunction
Single-outcome Probability of Improvement.
Probability of improvement over the current best observed value, computed using the analytic formula under a Normal posterior distribution. Only supports the case of q=1. Requires the posterior to be Gaussian. The model must be single-outcome.
PI(x) = P(y >= best_f), y ~ f(x)
Example
```python
model = SingleTaskGP(train_X, train_Y)
PI = ProbabilityOfImprovement(model, best_f=0.2)
pi = PI(test_X)
```
__init__
__init__(model: Model, best_f: Union[float, Tensor], posterior_transform: Optional[PosteriorTransform] = None, maximize: bool = True)
Single-outcome Probability of Improvement.
Parameters:
- model (Model) – A fitted single-outcome model.
- best_f (Union[float, Tensor]) – Either a scalar or a b-dim Tensor (batch mode) representing the best function value observed so far (assumed noiseless).
- posterior_transform (Optional[PosteriorTransform], default: None) – A PosteriorTransform. If using a multi-output model, a PosteriorTransform that transforms the multi-output posterior into a single-output posterior is required.
- maximize (bool, default: True) – If True, consider the problem a maximization problem.
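Since the posterior at a single test point is Gaussian, the PI formula can be checked by hand. The following is a hedged sketch (the toy data are assumptions, not from the original docs) comparing ProbabilityOfImprovement against a manual computation with the standard Normal CDF:

```python
import torch
from torch.distributions import Normal
from botorch.models import SingleTaskGP
from botorch.acquisition import ProbabilityOfImprovement

train_X = torch.rand(6, 1, dtype=torch.double)
train_Y = torch.cos(3 * train_X)
model = SingleTaskGP(train_X, train_Y)

best_f = 0.2
PI = ProbabilityOfImprovement(model, best_f=best_f)

test_X = torch.rand(4, 1, 1, dtype=torch.double)  # 4 batches of q=1 points
posterior = model.posterior(test_X)
mu = posterior.mean.view(-1)
sigma = posterior.variance.sqrt().view(-1)

# PI(x) = P(y >= best_f) = Phi((mu(x) - best_f) / sigma(x))
manual = Normal(0.0, 1.0).cdf((mu - best_f) / sigma)
print(torch.allclose(PI(test_X), manual))  # expected: True
```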
LogProbabilityOfImprovement
Refer to BoTorch LogProbabilityOfImprovement.
LogProbabilityOfImprovement
Bases: AnalyticAcquisitionFunction
Single-outcome Log Probability of Improvement.
Logarithm of the probability of improvement over the current best observed value, computed using the analytic formula under a Normal posterior distribution. Only supports the case of q=1. Requires the posterior to be Gaussian. The model must be single-outcome.
The logarithm of the probability of improvement is numerically better behaved than the original function, which can lead to significantly improved optimization of the acquisition function. This is analogous to the common practice of optimizing the log-likelihood of a probabilistic model, rather than the likelihood, for maximum likelihood estimation.
logPI(x) = log(P(y >= best_f)), y ~ f(x)
Example
```python
model = SingleTaskGP(train_X, train_Y)
LogPI = LogProbabilityOfImprovement(model, best_f=0.2)
log_pi = LogPI(test_X)
```
__init__
__init__(model: Model, best_f: Union[float, Tensor], posterior_transform: Optional[PosteriorTransform] = None, maximize: bool = True)
Single-outcome Log Probability of Improvement.
Parameters:
- model (Model) – A fitted single-outcome model.
- best_f (Union[float, Tensor]) – Either a scalar or a b-dim Tensor (batch mode) representing the best function value observed so far (assumed noiseless).
- posterior_transform (Optional[PosteriorTransform], default: None) – A PosteriorTransform. If using a multi-output model, a PosteriorTransform that transforms the multi-output posterior into a single-output posterior is required.
- maximize (bool, default: True) – If True, consider the problem a maximization problem.
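As with LogEI, a short sketch (toy data are assumptions, not from the original docs) can illustrate the relationship logPI(x) = log(PI(x)) and the numerical advantage of the log form:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import (
    LogProbabilityOfImprovement,
    ProbabilityOfImprovement,
)

train_X = torch.rand(6, 1, dtype=torch.double)
train_Y = torch.cos(3 * train_X)
model = SingleTaskGP(train_X, train_Y)

PI = ProbabilityOfImprovement(model, best_f=0.2)
LogPI = LogProbabilityOfImprovement(model, best_f=0.2)

test_X = torch.rand(4, 1, 1, dtype=torch.double)
# Where PI is not vanishingly small the two agree; where PI underflows to 0,
# log(PI) becomes -inf while LogPI remains finite.
print(PI(test_X).log())
print(LogPI(test_X))
```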
UpperConfidenceBound
Refer to BoTorch UpperConfidenceBound.
UpperConfidenceBound
Bases: AnalyticAcquisitionFunction
Single-outcome Upper Confidence Bound (UCB).
Analytic upper confidence bound that comprises the posterior mean plus an additional term: the posterior standard deviation weighted by a trade-off parameter, beta. Only supports the case of q=1 (i.e. greedy, non-batch selection of design points). The model must be single-outcome.

UCB(x) = mu(x) + sqrt(beta) * sigma(x),

where mu and sigma are the posterior mean and standard deviation, respectively.
Example
```python
model = SingleTaskGP(train_X, train_Y)
UCB = UpperConfidenceBound(model, beta=0.2)
ucb = UCB(test_X)
```
__init__
__init__(model: Model, beta: Union[float, Tensor], posterior_transform: Optional[PosteriorTransform] = None, maximize: bool = True) -> None
Single-outcome Upper Confidence Bound.
Parameters:
- model (Model) – A fitted single-outcome GP model (must be in batch mode if candidate sets X will be).
- beta (Union[float, Tensor]) – Either a scalar or a one-dim tensor with b elements (batch mode) representing the trade-off parameter between mean and covariance.
- posterior_transform (Optional[PosteriorTransform], default: None) – A PosteriorTransform. If using a multi-output model, a PosteriorTransform that transforms the multi-output posterior into a single-output posterior is required.
- maximize (bool, default: True) – If True, consider the problem a maximization problem.
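The closed form above is easy to verify against the posterior mean and standard deviation. A hedged sketch (the toy data are assumptions, not from the original docs):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import UpperConfidenceBound

train_X = torch.rand(6, 1, dtype=torch.double)
train_Y = torch.sin(4 * train_X)
model = SingleTaskGP(train_X, train_Y)

beta = 0.2
UCB = UpperConfidenceBound(model, beta=beta)

test_X = torch.rand(4, 1, 1, dtype=torch.double)  # 4 batches of q=1 points
posterior = model.posterior(test_X)
mu = posterior.mean.view(-1)
sigma = posterior.variance.sqrt().view(-1)

# UCB(x) = mu(x) + sqrt(beta) * sigma(x)
manual = mu + (beta ** 0.5) * sigma
print(torch.allclose(UCB(test_X), manual))  # expected: True
```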