Multi-fidelity BO¶
Here we demonstrate how multi-fidelity Bayesian optimization can reduce the computational cost of optimization by exploiting cheap, low-fidelity evaluations of the objective. The goal is to learn the functional dependence of the objective on the input variables at low fidelities (which are cheap to compute) and use that information to quickly find the best objective value at high fidelities (which are expensive to compute). This assumes there is some learnable correlation between objective values at different fidelities.

Xopt implements the MOMF algorithm (https://botorch.org/tutorials/Multi_objective_multi_fidelity_BO), which can be used to solve both single-objective (this notebook) and multi-objective (see the multi-objective BO section) multi-fidelity problems. Under the hood, the algorithm solves a multi-objective optimization problem in which one objective is the function objective and the other is a simple fidelity objective, weighted by the cost_function of evaluating the objective at a given fidelity.
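As a conceptual sketch (not part of the Xopt API), the cost model is simply a scalar function of the fidelity, and the budget consumed so far is that cost summed over all evaluated points. For example, with a hypothetical cost of s plus a small fixed overhead:

# hypothetical cost model: evaluating at fidelity s costs s plus a small fixed
# overhead, so low-fidelity points are cheap but never entirely free
def example_cost(s, fixed_cost=0.001):
    return s + fixed_cost

# total budget consumed by evaluations at fidelities 0.0, 0.25 and 1.0
print(sum(example_cost(s) for s in [0.0, 0.25, 1.0]))  # 1.253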
import math
import os
import warnings

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

from xopt import Evaluator, VOCS, Xopt
from xopt.generators.bayesian import MultiFidelityGenerator

# ignore all warnings for cleaner notebook output
warnings.filterwarnings("ignore")

# reduce Monte Carlo samples and optimizer restarts when running as a quick smoke test
SMOKE_TEST = os.environ.get("SMOKE_TEST")
N_MC_SAMPLES = 1 if SMOKE_TEST else 128
N_RESTARTS = 1 if SMOKE_TEST else 20
def test_function(input_dict):
    # simple 1D test objective with a fidelity-dependent phase shift and amplitude;
    # at full fidelity (s = 1) it reduces to sin(x)
    x = input_dict["x"]
    s = input_dict["s"]
    return {"f": np.sin(x + (1.0 - s)) * np.exp((-s + 1) / 2)}
# define variables and objectives (VOCS); the fidelity parameter is not specified
# here -- it is added automatically by the MultiFidelityGenerator
vocs = VOCS(
    variables={
        "x": [0, 2 * math.pi],
    },
    objectives={"f": "MINIMIZE"},
)
Plot the test function in input + fidelity space¶
# evaluate and plot the test function at several fidelities
test_x = np.linspace(*vocs.bounds, 1000)
fidelities = [0.0, 0.5, 1.0]

fig, ax = plt.subplots()
for ele in fidelities:
    f = test_function({"x": test_x, "s": ele})["f"]
    ax.plot(test_x, f, label=f"s:{ele}")
ax.legend()
<matplotlib.legend.Legend at 0x7f7602221a90>
# create the multi-fidelity generator and modify its default options
generator = MultiFidelityGenerator(vocs=vocs)

# specify a custom cost function based on the fidelity parameter
generator.cost_function = lambda s: s + 0.001

generator.numerical_optimizer.n_restarts = N_RESTARTS
generator.n_monte_carlo_samples = N_MC_SAMPLES

# create the evaluator and the Xopt object
evaluator = Evaluator(function=test_function)
X = Xopt(vocs=vocs, generator=generator, evaluator=evaluator)
X
Xopt
________________________________
Version: 2.4.6.dev5+ga295b108.d20250107
Data size: 0
Config as YAML:
dump_file: null
evaluator:
  function: __main__.test_function
  function_kwargs: {}
  max_workers: 1
  vectorized: false
generator:
  computation_time: null
  custom_objective: null
  fixed_features: null
  gp_constructor:
    covar_modules: {}
    custom_noise_prior: null
    mean_modules: {}
    name: standard
    trainable_mean_keys: []
    transform_inputs: true
    use_cached_hyperparameters: false
    use_low_noise_prior: true
  log_transform_acquisition_function: false
  max_travel_distances: null
  memory_length: null
  model: null
  n_candidates: 1
  n_interpolate_points: null
  n_monte_carlo_samples: 128
  name: multi_fidelity
  numerical_optimizer:
    max_iter: 2000
    max_time: null
    n_restarts: 20
    name: LBFGS
  reference_point:
    f: 100.0
    s: 0.0
  supports_batch_generation: true
  supports_multi_objective: true
  turbo_controller: null
  use_cuda: false
  use_pf_as_initial_points: false
max_evaluations: null
serialize_inline: false
serialize_torch: false
strict: true
vocs:
  constants: {}
  constraints: {}
  objectives:
    f: MINIMIZE
    s: MAXIMIZE
  observables: []
  variables:
    s:
    - 0
    - 1
    x:
    - 0.0
    - 6.283185307179586
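Note in the config above that the VOCS now contains a fidelity variable s on [0, 1], which the generator treats as a second, maximized objective alongside f. A quick check using the objects already defined above:

# the fidelity variable "s" has been added to the VOCS by the generator,
# as a variable on [0, 1] and as a second (maximized) objective
print(X.vocs.variables)   # includes "s" alongside "x"
print(X.vocs.objectives)  # {'f': 'MINIMIZE', 's': 'MAXIMIZE'}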
# evaluate initial points at mixed fidelities to seed optimization
X.evaluate_data(
pd.DataFrame({"x": [math.pi / 4, math.pi / 2.0, math.pi], "s": [0.0, 0.25, 0.0]})
)
|   | x | s | f | xopt_runtime | xopt_error |
|---|---|---|---|---|---|
| 0 | 0.785398 | 0.00 | 1.610902 | 0.000012 | False |
| 1 | 1.570796 | 0.25 | 1.064601 | 0.000003 | False |
| 2 | 3.141593 | 0.00 | -1.387351 | 0.000003 | False |
# get the total cost of previous observations based on the cost function
X.generator.calculate_total_cost()
tensor(0.2530, dtype=torch.float64)
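As a sanity check, the total cost is just the cost function summed over the fidelities evaluated so far; computing it by hand from the data reproduces the value above:

# sum the cost function over the evaluated fidelities:
# (0.0 + 0.001) + (0.25 + 0.001) + (0.0 + 0.001) = 0.253
print(sum(X.generator.cost_function(s) for s in X.data["s"]))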
# run optimization until the cost budget is exhausted;
# stop one cost unit early since a single full-fidelity evaluation costs ~1,
# which ensures we do not exceed the budget
budget = 10
while X.generator.calculate_total_cost() < budget - 1:
    X.step()
    print(
        f"n_samples: {len(X.data)} "
        f"budget used: {X.generator.calculate_total_cost():.4} "
        f"hypervolume: {X.generator.calculate_hypervolume():.4}"
    )
n_samples: 4 budget used: 0.5798 hypervolume: 32.96
n_samples: 5 budget used: 1.035 hypervolume: 45.87
n_samples: 6 budget used: 1.705 hypervolume: 67.16
n_samples: 7 budget used: 2.706 hypervolume: 100.4
n_samples: 8 budget used: 3.707 hypervolume: 100.4
n_samples: 9 budget used: 4.708 hypervolume: 101.1
n_samples: 10 budget used: 4.815 hypervolume: 101.1
n_samples: 11 budget used: 5.197 hypervolume: 101.1
n_samples: 12 budget used: 5.848 hypervolume: 101.2
n_samples: 13 budget used: 6.647 hypervolume: 101.2
n_samples: 14 budget used: 6.878 hypervolume: 101.2
n_samples: 15 budget used: 7.386 hypervolume: 101.2
n_samples: 16 budget used: 8.28 hypervolume: 101.3
n_samples: 17 budget used: 8.347 hypervolume: 101.3
n_samples: 18 budget used: 8.65 hypervolume: 101.3
n_samples: 19 budget used: 9.371 hypervolume: 101.3
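It can be instructive to see which fidelities the algorithm chose as the optimization progressed; below is a minimal sketch using plain pandas/matplotlib (not an Xopt plotting utility):

# plot the fidelity of each evaluated sample versus the sample index
fig, ax = plt.subplots()
ax.plot(X.data.index, X.data["s"], "o-")
ax.set_xlabel("sample index")
ax.set_ylabel("fidelity s")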
X.data
|    | x | s | f | xopt_runtime | xopt_error |
|----|---|---|---|---|---|
| 0  | 0.785398 | 0.000000 | 1.610902e+00 | 0.000012 | False |
| 1  | 1.570796 | 0.250000 | 1.064601e+00 | 0.000003 | False |
| 2  | 3.141593 | 0.000000 | -1.387351e+00 | 0.000003 | False |
| 3  | 3.475122 | 0.325788 | -1.184630e+00 | 0.000012 | False |
| 4  | 2.850145 | 0.454439 | -3.302260e-01 | 0.000012 | False |
| 5  | 0.168757 | 0.668480 | 5.661482e-01 | 0.000013 | False |
| 6  | 0.000000 | 1.000000 | 0.000000e+00 | 0.000011 | False |
| 7  | 6.283185 | 1.000000 | -2.449294e-16 | 0.000012 | False |
| 8  | 4.540861 | 1.000000 | -9.853250e-01 | 0.000012 | False |
| 9  | 4.024451 | 0.106445 | -1.530337e+00 | 0.000012 | False |
| 10 | 4.059643 | 0.381051 | -1.361931e+00 | 0.000013 | False |
| 11 | 4.396973 | 0.649576 | -1.190769e+00 | 0.000012 | False |
| 12 | 4.446775 | 0.798632 | -1.103646e+00 | 0.000011 | False |
| 13 | 3.967052 | 0.229846 | -1.469275e+00 | 0.000013 | False |
| 14 | 4.237810 | 0.506435 | -1.279670e+00 | 0.000013 | False |
| 15 | 4.522101 | 0.893234 | -1.051157e+00 | 0.000013 | False |
| 16 | 3.791833 | 0.066314 | -1.594814e+00 | 0.000013 | False |
| 17 | 4.026114 | 0.301698 | -1.417761e+00 | 0.000013 | False |
| 18 | 4.411053 | 0.720433 | -1.149752e+00 | 0.000013 | False |
Plot the model prediction and acquisition function inside the optimization space¶
fig, ax = X.generator.visualize_model()
Plot the Pareto front¶
X.data.plot(x="f", y="s", style="o-")
<Axes: xlabel='f'>
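The plot above shows all observed points in (f, s) space; the Pareto front itself is the non-dominated subset when minimizing f and maximizing s. A minimal sketch in plain Python (not an Xopt API call) to extract it:

# keep only points that are not dominated by another point, i.e. no other point
# has f at least as small and s at least as large, with one strictly better
pts = X.data[["f", "s"]].to_numpy()
non_dominated = [
    i
    for i, (f_i, s_i) in enumerate(pts)
    if not any(
        (f_j <= f_i) and (s_j >= s_i) and ((f_j < f_i) or (s_j > s_i)) for f_j, s_j in pts
    )
]
X.data.iloc[non_dominated][["x", "s", "f"]]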
# get the optimal input point at maximum fidelity; the true minimizer at s = 1 is x = 3*pi/2 ≈ 4.71
X.generator.get_optimum().to_dict()
{'x': {0: 4.587603930833688}, 's': {0: 1.0}}
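As a quick check of this result, evaluate the test function at the returned point and at the known full-fidelity minimizer x = 3π/2 (where f = sin(x) = -1):

# evaluate the objective at the predicted optimum and at the true minimizer, both at s = 1
x_opt = X.generator.get_optimum().to_dict()["x"][0]
print(test_function({"x": x_opt, "s": 1.0})["f"])            # close to -1
print(test_function({"x": 3 * math.pi / 2, "s": 1.0})["f"])  # exactly -1.0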