Multi-fidelity BO¶
Here we demonstrate how Multi-Fidelity Bayesian Optimization can be used to reduce the computational cost of optimization by incorporating lower-fidelity evaluations of the objective. The goal is to learn the functional dependence of the objective on the input variables at low fidelities (which are cheap to compute) and use that information to quickly find the best objective value at high fidelities (which are expensive to compute). This assumes there is some learnable correlation between the objective values at different fidelities.
Xopt implements the MOMF algorithm (https://botorch.org/tutorials/Multi_objective_multi_fidelity_BO),
which can solve both single-objective (this notebook) and multi-objective
(see the multi-objective BO section) multi-fidelity problems. Under the hood, the
algorithm solves a multi-objective optimization problem in which one
objective is the function objective and the other is a simple fidelity objective,
weighted by the cost_function of evaluating the objective at a given fidelity.
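As a rough sketch (this is not Xopt's internal code), the cost side of that trade-off can be pictured with a linear evaluation-cost model of the kind configured later in this notebook (`lambda s: s + 0.001`):

```python
# Sketch only: a linear evaluation-cost model over the fidelity s in [0, 1].
def evaluation_cost(s, fixed_cost=0.001):
    # higher fidelity -> more expensive evaluation; the small fixed cost
    # keeps zero-fidelity evaluations from being free
    return s + fixed_cost

# total cost of a mixed-fidelity batch of evaluations
batch_fidelities = [0.0, 0.25, 1.0]
total_cost = sum(evaluation_cost(s) for s in batch_fidelities)
print(round(total_cost, 4))  # 1.253
```

The optimizer's budget accounting below sums exactly this kind of per-evaluation cost over all observed points.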
from xopt.generators.bayesian import MultiFidelityGenerator
from xopt import Evaluator, Xopt
from xopt import VOCS
import os
import matplotlib.pyplot as plt
import numpy as np
import math
import pandas as pd
# Ignore all warnings
import warnings
warnings.filterwarnings("ignore")
SMOKE_TEST = os.environ.get("SMOKE_TEST")
N_MC_SAMPLES = 1 if SMOKE_TEST else 128
N_RESTARTS = 1 if SMOKE_TEST else 20
def test_function(input_dict):
    x = input_dict["x"]
    s = input_dict["s"]
    return {"f": np.sin(x + (1.0 - s)) * np.exp((-s + 1) / 2)}
# define vocs
vocs = VOCS(
    variables={
        "x": [0, 2 * math.pi],
    },
    objectives={"f": "MINIMIZE"},
)
Plot the test function in input + fidelity space¶
test_x = np.linspace(*vocs.bounds, 1000)
fidelities = [0.0, 0.5, 1.0]
fig, ax = plt.subplots()
for fidelity in fidelities:
    f = test_function({"x": test_x, "s": fidelity})["f"]
    ax.plot(test_x, f, label=f"s:{fidelity}")
ax.legend()
<matplotlib.legend.Legend at 0x7efc1142e150>
# create xopt object
# get and modify default generator options
generator = MultiFidelityGenerator(vocs=vocs)
generator.gp_constructor.use_low_noise_prior = True
# specify a custom cost function based on the fidelity parameter
generator.cost_function = lambda s: s + 0.001
generator.numerical_optimizer.n_restarts = N_RESTARTS
generator.n_monte_carlo_samples = N_MC_SAMPLES
# create the evaluator and combine it with the generator in an Xopt object
evaluator = Evaluator(function=test_function)
X = Xopt(generator=generator, evaluator=evaluator)
X
Xopt
________________________________
Version: 0.1.dev1+gb834d2348
Data size: 0
Config as YAML:
dump_file: null
evaluator:
  function: __main__.test_function
  function_kwargs: {}
  max_workers: 1
  vectorized: false
generator:
  computation_time: null
  custom_objective: null
  fixed_features: null
  gp_constructor:
    covar_modules: {}
    custom_noise_prior: null
    mean_modules: {}
    name: standard
    train_config: null
    train_kwargs: null
    train_method: lbfgs
    train_model: true
    trainable_mean_keys: []
    transform_inputs: true
    use_cached_hyperparameters: false
    use_low_noise_prior: true
  max_travel_distances: null
  model: null
  n_candidates: 1
  n_interpolate_points: null
  n_monte_carlo_samples: 128
  name: multi_fidelity
  numerical_optimizer:
    max_iter: 2000
    max_time: 5.0
    n_restarts: 20
    name: LBFGS
  reference_point:
    f: 100.0
    s: 0.0
  returns_id: false
  supports_batch_generation: true
  supports_constraints: true
  supports_multi_objective: true
  turbo_controller: null
  use_cuda: false
  use_pf_as_initial_points: false
  vocs:
    constants: {}
    constraints: {}
    objectives:
      f:
        dtype: null
        type: MinimizeObjective
      s:
        dtype: null
        type: MaximizeObjective
    observables: {}
    variables:
      s:
        default_value: null
        domain:
        - 0.0
        - 1.0
        dtype: null
        type: ContinuousVariable
      x:
        default_value: null
        domain:
        - 0.0
        - 6.283185307179586
        dtype: null
        type: ContinuousVariable
serialize_inline: false
serialize_torch: false
stopping_condition: null
strict: true
# evaluate initial points at mixed fidelities to seed optimization
X.evaluate_data(
    pd.DataFrame({"x": [math.pi / 4, math.pi / 2.0, math.pi], "s": [0.0, 0.25, 0.0]})
)
|   | x | s | f | xopt_runtime | xopt_error |
|---|---|---|---|---|---|
| 0 | 0.785398 | 0.00 | 1.610902 | 0.000010 | False |
| 1 | 1.570796 | 0.25 | 1.064601 | 0.000003 | False |
| 2 | 3.141593 | 0.00 | -1.387351 | 0.000002 | False |
# get the total cost of previous observations based on the cost function
X.generator.calculate_total_cost()
tensor(0.2530)
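The reported value can be checked by hand: with the cost function `s + 0.001`, the three seed points at fidelities 0.0, 0.25, and 0.0 cost (0 + 0.25 + 0) + 3 × 0.001 = 0.253.

```python
# hand check of the total cost of the three seed evaluations
seed_fidelities = [0.0, 0.25, 0.0]
cost_function = lambda s: s + 0.001
total = sum(cost_function(s) for s in seed_fidelities)
print(round(total, 4))  # 0.253, matching tensor(0.2530) above
```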
# run optimization until the cost budget is exhausted
# we subtract one unit to make sure we don't go over our eval budget
budget = 10
while X.generator.calculate_total_cost() < budget - 1:
    X.step()
    print(
        f"n_samples: {len(X.data)} "
        f"budget used: {X.generator.calculate_total_cost():.4} "
        f"hypervolume: {X.generator.get_pareto_front_and_hypervolume()[-1]:.4}"
    )
n_samples: 4 budget used: 0.5811 hypervolume: 33.17
n_samples: 5 budget used: 1.035 hypervolume: 45.88
n_samples: 6 budget used: 1.7 hypervolume: 66.92
n_samples: 7 budget used: 2.692 hypervolume: 99.7
n_samples: 8 budget used: 3.693 hypervolume: 100.6
n_samples: 9 budget used: 4.694 hypervolume: 101.1
n_samples: 10 budget used: 5.324 hypervolume: 101.1
n_samples: 11 budget used: 6.325 hypervolume: 101.1
n_samples: 12 budget used: 7.132 hypervolume: 101.2
n_samples: 13 budget used: 8.133 hypervolume: 101.2
n_samples: 14 budget used: 8.172 hypervolume: 101.2
n_samples: 15 budget used: 8.296 hypervolume: 101.2
n_samples: 16 budget used: 8.764 hypervolume: 101.2
n_samples: 17 budget used: 8.98 hypervolume: 101.2
n_samples: 18 budget used: 9.542 hypervolume: 101.3
X.data
|   | x | s | f | xopt_runtime | xopt_error |
|---|---|---|---|---|---|
| 0 | 0.785398 | 0.000000 | 1.610902e+00 | 0.000010 | False |
| 1 | 1.570796 | 0.250000 | 1.064601e+00 | 0.000003 | False |
| 2 | 3.141593 | 0.000000 | -1.387351e+00 | 0.000002 | False |
| 3 | 4.012400 | 0.327125 | -1.399437e+00 | 0.000012 | False |
| 4 | 3.718634 | 0.452780 | -1.185794e+00 | 0.000010 | False |
| 5 | 0.000000 | 0.663963 | 3.900791e-01 | 0.000010 | False |
| 6 | 0.000000 | 0.990973 | 9.067354e-03 | 0.000012 | False |
| 7 | 6.283185 | 1.000000 | -2.449294e-16 | 0.000012 | False |
| 8 | 4.244120 | 1.000000 | -8.923508e-01 | 0.000012 | False |
| 9 | 4.589015 | 0.629415 | -1.166980e+00 | 0.000009 | False |
| 10 | 0.000000 | 1.000000 | 0.000000e+00 | 0.000010 | False |
| 11 | 4.535682 | 0.806188 | -1.101595e+00 | 0.000013 | False |
| 12 | 4.776892 | 1.000000 | -9.979204e-01 | 0.000009 | False |
| 13 | 3.823008 | 0.037732 | -1.613612e+00 | 0.000010 | False |
| 14 | 3.883993 | 0.123031 | -1.548527e+00 | 0.000011 | False |
| 15 | 4.222836 | 0.467094 | -1.304100e+00 | 0.000010 | False |
| 16 | 3.976080 | 0.214329 | -1.479370e+00 | 0.000010 | False |
| 17 | 4.312774 | 0.561164 | -1.244394e+00 | 0.000010 | False |
Plot the model prediction and acquisition function inside the optimization space¶
fig, ax = X.generator.visualize_model()
Plot the Pareto front¶
X.data.plot(x="f", y="s", style="o-")
<Axes: xlabel='f'>
# get the optimal point at maximum fidelity; the true optimal x is 3*pi/2 = 4.712
X.generator.get_optimum().to_dict()
{'x': {0: 4.665072411513353}, 's': {0: 1.0}}
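Since the objective at full fidelity is exactly sin(x), the true minimizer on [0, 2π] is x = 3π/2 ≈ 4.7124, so the returned optimum (x ≈ 4.665) is close. A brute-force check:

```python
import math
import numpy as np

# brute-force the minimizer of the full-fidelity objective sin(x) on [0, 2*pi]
grid = np.linspace(0, 2 * math.pi, 100_001)
x_opt = grid[np.argmin(np.sin(grid))]
print(x_opt)  # ~4.7124, i.e. 3*pi/2
```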