Multi-fidelity Multi-objective Bayesian Optimization¶
Here we attempt to solve for the constrained Pareto front of the TNK multi-objective optimization problem using multi-fidelity multi-objective Bayesian optimization. For simplicity, we assume that the objective and constraint functions at lower fidelities are exactly equal to the functions at the highest fidelity (this is not a requirement, although for best results lower-fidelity calculations should correlate with higher-fidelity ones). The algorithm should learn this relationship and use information gathered at lower fidelities to select samples that improve the hypervolume of the Pareto front at the maximum fidelity.
TNK function, $n=2$ variables: $x_i \in [0, \pi],\ i = 1, 2$
Objectives:
- $f_i(x) = x_i$
Constraints:
- $g_1(x) = -x_1^2 -x_2^2 + 1 + 0.1 \cos\left(16 \arctan \frac{x_1}{x_2}\right) \le 0$
- $g_2(x) = (x_1 - 1/2)^2 + (x_2-1/2)^2 \le 0.5$
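For reference, the objectives and constraints above can be evaluated directly; the following is a standalone sketch (the tutorial itself uses xopt's `evaluate_TNK`). Note that `c1` is written in the equivalent "feasible when $\ge 0$" form, matching the `GreaterThanConstraint` in the VOCS below.

```python
import math


def tnk(x1, x2):
    """Standalone sketch of the TNK objectives/constraints (the tutorial
    uses xopt's evaluate_TNK instead)."""
    y1, y2 = x1, x2  # objectives are simply the coordinates
    # feasible when c1 >= 0 (equivalent to the g1 <= 0 form above)
    c1 = x1**2 + x2**2 - 1.0 - 0.1 * math.cos(16.0 * math.atan2(x1, x2))
    # feasible when c2 <= 0.5
    c2 = (x1 - 0.5) ** 2 + (x2 - 0.5) ** 2
    return y1, y2, c1, c2


print(tnk(1.0, 0.75))  # c1 ~ 0.6269, c2 = 0.3125
```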
# set values if testing
import os
from copy import deepcopy
import pandas as pd
import numpy as np
from xopt import Xopt, Evaluator
from xopt.generators.bayesian import MultiFidelityGenerator
from xopt.resources.test_functions.tnk import evaluate_TNK, tnk_vocs
from xopt.vocs import get_feasibility_data
import matplotlib.pyplot as plt
# Ignore all warnings
import warnings
warnings.filterwarnings("ignore")
SMOKE_TEST = os.environ.get("SMOKE_TEST")
N_MC_SAMPLES = 1 if SMOKE_TEST else 128
NUM_RESTARTS = 1 if SMOKE_TEST else 20
BUDGET = 0.02 if SMOKE_TEST else 10
evaluator = Evaluator(function=evaluate_TNK)
print(tnk_vocs.dict())
{'variables': {'x1': {'dtype': None, 'default_value': None, 'domain': [0.0, 3.14159], 'type': 'ContinuousVariable'}, 'x2': {'dtype': None, 'default_value': None, 'domain': [0.0, 3.14159], 'type': 'ContinuousVariable'}}, 'objectives': {'y1': {'dtype': None, 'type': 'MinimizeObjective'}, 'y2': {'dtype': None, 'type': 'MinimizeObjective'}}, 'constraints': {'c1': {'dtype': None, 'value': 0.0, 'type': 'GreaterThanConstraint'}, 'c2': {'dtype': None, 'value': 0.5, 'type': 'LessThanConstraint'}}, 'constants': {'a': {'dtype': None, 'value': 'dummy_constant', 'type': 'Constant'}}, 'observables': {}}
Set up the Multi-Fidelity Multi-objective optimization algorithm¶
Here we create the multi-fidelity generator object, which can solve both single- and multi-objective optimization problems depending on the number of objectives in the VOCS. We specify a cost function of the fidelity parameter $s \in [0, 1]$, $C(s) = s^{3.5} + 0.001$, as an example taken from a real-life multi-fidelity simulation problem.
my_vocs = deepcopy(tnk_vocs)
generator = MultiFidelityGenerator(vocs=my_vocs, reference_point={"y1": 1.5, "y2": 1.5})
# set cost function according to approximate scaling of laser plasma accelerator
# problem, see https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.5.013063
generator.cost_function = lambda s: s**3.5 + 0.001
generator.numerical_optimizer.n_restarts = NUM_RESTARTS
generator.n_monte_carlo_samples = N_MC_SAMPLES
generator.gp_constructor.use_low_noise_prior = True
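To get a feel for this cost model, it helps to compare the fidelity extremes; under $C(s) = s^{3.5} + 0.001$, a full-fidelity evaluation is roughly a thousand times more expensive than the cheapest one (a standalone sketch, not part of the tutorial's configuration):

```python
# sketch: relative cost of low- vs high-fidelity evaluations under
# C(s) = s**3.5 + 0.001
def cost(s):
    return s**3.5 + 0.001


low, high = cost(0.0), cost(1.0)
print(low, high)          # 0.001 and 1.001
print(round(high / low))  # a full-fidelity sample costs ~1000x a cheap one
```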
X = Xopt(generator=generator, evaluator=evaluator)
# evaluate at some explicit initial points
X.evaluate_data(pd.DataFrame({"x1": [1.0, 0.75], "x2": [0.75, 1.0], "s": [0.0, 0.1]}))
X
Xopt
________________________________
Version: 0.1.dev1+g3385ef356
Data size: 2
Config as YAML:
dump_file: null
evaluator:
function: xopt.resources.test_functions.tnk.evaluate_TNK
function_kwargs:
raise_probability: 0
random_sleep: 0
sleep: 0
max_workers: 1
vectorized: false
generator:
computation_time: null
custom_objective: null
fixed_features: null
gp_constructor:
covar_modules: {}
custom_noise_prior: null
mean_modules: {}
name: standard
train_config: null
train_kwargs: null
train_method: lbfgs
train_model: true
trainable_mean_keys: []
transform_inputs: true
use_cached_hyperparameters: false
use_low_noise_prior: true
max_travel_distances: null
model: null
n_candidates: 1
n_interpolate_points: null
n_monte_carlo_samples: 128
name: multi_fidelity
numerical_optimizer:
max_iter: 2000
max_time: 5.0
n_restarts: 20
name: LBFGS
reference_point:
s: 0.0
y1: 1.5
y2: 1.5
returns_id: false
supports_batch_generation: true
supports_constraints: true
supports_multi_objective: true
turbo_controller: null
use_cuda: false
use_pf_as_initial_points: false
vocs:
constants:
a:
dtype: null
type: Constant
value: dummy_constant
constraints:
c1:
dtype: null
type: GreaterThanConstraint
value: 0.0
c2:
dtype: null
type: LessThanConstraint
value: 0.5
objectives:
s:
dtype: null
type: MaximizeObjective
y1:
dtype: null
type: MinimizeObjective
y2:
dtype: null
type: MinimizeObjective
observables: {}
variables:
s:
default_value: null
domain:
- 0.0
- 1.0
dtype: null
type: ContinuousVariable
x1:
default_value: null
domain:
- 0.0
- 3.14159
dtype: null
type: ContinuousVariable
x2:
default_value: null
domain:
- 0.0
- 3.14159
dtype: null
type: ContinuousVariable
serialize_inline: false
serialize_torch: false
stopping_condition: null
strict: true
Run optimization routine¶
Instead of ending the optimization routine after an explicit number of samples, we end it once a given optimization budget has been exceeded. WARNING: this will slightly exceed the given budget.
budget = BUDGET
while X.generator.calculate_total_cost() < budget:
    X.step()
    print(
        f"n_samples: {len(X.data)} "
        f"budget used: {X.generator.calculate_total_cost():.4} "
        f"hypervolume: {X.generator.get_pareto_front_and_hypervolume()[-1]:.4}"
    )
n_samples: 3 budget used: 0.003334 hypervolume: 0.05032
n_samples: 4 budget used: 0.004745 hypervolume: 0.05032
n_samples: 5 budget used: 0.006536 hypervolume: 0.05032
n_samples: 6 budget used: 0.008488 hypervolume: 0.05032
n_samples: 7 budget used: 0.01178 hypervolume: 0.05032
n_samples: 8 budget used: 0.01568 hypervolume: 0.1438
n_samples: 9 budget used: 0.02431 hypervolume: 0.1438
n_samples: 10 budget used: 0.03647 hypervolume: 0.2673
n_samples: 11 budget used: 0.05618 hypervolume: 0.2673
n_samples: 12 budget used: 0.08401 hypervolume: 0.3571
n_samples: 13 budget used: 0.1342 hypervolume: 0.3571
n_samples: 14 budget used: 0.2057 hypervolume: 0.458
n_samples: 15 budget used: 0.3233 hypervolume: 0.5527
n_samples: 16 budget used: 0.3568 hypervolume: 0.6098
n_samples: 17 budget used: 0.5284 hypervolume: 0.7161
n_samples: 18 budget used: 0.5725 hypervolume: 0.7161
n_samples: 19 budget used: 0.8783 hypervolume: 0.7161
n_samples: 20 budget used: 0.8834 hypervolume: 0.7161
n_samples: 21 budget used: 1.285 hypervolume: 0.8287
n_samples: 22 budget used: 1.851 hypervolume: 0.9287
n_samples: 23 budget used: 2.703 hypervolume: 1.066
n_samples: 24 budget used: 2.954 hypervolume: 1.066
n_samples: 25 budget used: 3.479 hypervolume: 1.066
n_samples: 26 budget used: 3.558 hypervolume: 1.066
n_samples: 27 budget used: 4.559 hypervolume: 1.185
n_samples: 28 budget used: 5.188 hypervolume: 1.185
n_samples: 29 budget used: 5.288 hypervolume: 1.185
n_samples: 30 budget used: 6.289 hypervolume: 1.185
n_samples: 31 budget used: 7.021 hypervolume: 1.185
n_samples: 32 budget used: 8.022 hypervolume: 1.211
n_samples: 33 budget used: 8.044 hypervolume: 1.211
n_samples: 34 budget used: 8.047 hypervolume: 1.211
n_samples: 35 budget used: 8.778 hypervolume: 1.211
n_samples: 36 budget used: 9.779 hypervolume: 1.211
n_samples: 37 budget used: 9.787 hypervolume: 1.211
n_samples: 38 budget used: 10.38 hypervolume: 1.211
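The final budget reading (10.38 against a budget of 10) illustrates the overshoot warned about above: a step's cost is only known after it has been evaluated, and the loop condition is checked before each step, so the last step can push the total past the limit. A toy sketch of the same loop structure (the step costs here are made up):

```python
# toy sketch of a budget-limited loop: the condition is checked *before*
# each step, so the final step's cost pushes the total past the budget
step_costs = iter([0.4, 0.4, 0.4, 0.4])
total, budget = 0.0, 1.0
while total < budget:
    total += next(step_costs)
print(total)  # slightly over budget (~1.2)
```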
Show results¶
X.data
| | x1 | x2 | s | a | y1 | y2 | c1 | c2 | xopt_runtime | xopt_error |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1.000000 | 0.750000 | 0.000000 | dummy_constant | 1.000000 | 0.750000 | 0.626888 | 0.312500 | 0.006607 | False |
| 1 | 0.750000 | 1.000000 | 0.100000 | dummy_constant | 0.750000 | 1.000000 | 0.626888 | 0.312500 | 0.000295 | False |
| 2 | 0.554632 | 0.792233 | 0.043587 | dummy_constant | 0.554632 | 0.792233 | 0.029263 | 0.088385 | 0.002484 | False |
| 3 | 0.340538 | 0.425396 | 0.107804 | dummy_constant | 0.340538 | 0.425396 | -0.683730 | 0.030994 | 0.003108 | False |
| 4 | 0.692135 | 0.398991 | 0.129960 | dummy_constant | 0.692135 | 0.398991 | -0.312680 | 0.047119 | 0.002451 | False |
| 5 | 2.302489 | 0.135444 | 0.137011 | dummy_constant | 2.302489 | 0.135444 | 4.260832 | 3.381868 | 0.000285 | False |
| 6 | 0.217977 | 0.919737 | 0.176197 | dummy_constant | 0.217977 | 0.919737 | -0.023018 | 0.255716 | 0.000270 | False |
| 7 | 0.973479 | 0.244144 | 0.188337 | dummy_constant | 0.973479 | 0.244144 | 0.077648 | 0.289644 | 0.000265 | False |
| 8 | 1.054574 | 0.001324 | 0.248316 | dummy_constant | 1.054574 | 0.001324 | 0.012148 | 0.556230 | 0.000266 | False |
| 9 | 0.169354 | 0.982345 | 0.276779 | dummy_constant | 0.169354 | 0.982345 | 0.085392 | 0.341984 | 0.000266 | False |
| 10 | 0.734218 | 0.549672 | 0.320839 | dummy_constant | 0.734218 | 0.549672 | -0.093345 | 0.057325 | 0.000270 | False |
| 11 | 0.838377 | 0.513164 | 0.355676 | dummy_constant | 0.838377 | 0.513164 | 0.046628 | 0.114672 | 0.000271 | False |
| 12 | 0.052423 | 1.009215 | 0.422831 | dummy_constant | 0.052423 | 1.009215 | -0.046197 | 0.459625 | 0.000272 | False |
| 13 | 0.531113 | 0.829408 | 0.468837 | dummy_constant | 0.531113 | 0.829408 | 0.065178 | 0.109478 | 0.000262 | False |
| 14 | 0.087958 | 1.048280 | 0.541090 | dummy_constant | 0.087958 | 1.048280 | 0.083692 | 0.470389 | 0.000262 | False |
| 15 | 1.037335 | 0.066907 | 0.375751 | dummy_constant | 1.037335 | 0.066907 | 0.029105 | 0.476298 | 0.000266 | False |
| 16 | 0.997559 | 0.180696 | 0.603329 | dummy_constant | 0.997559 | 0.180696 | 0.124032 | 0.349520 | 0.000260 | False |
| 17 | 0.025314 | 1.072038 | 0.407274 | dummy_constant | 0.025314 | 1.072038 | 0.056956 | 0.552554 | 0.000274 | False |
| 18 | 0.678312 | 0.698113 | 0.712147 | dummy_constant | 0.678312 | 0.698113 | -0.149894 | 0.071044 | 0.000258 | False |
| 19 | 0.000357 | 0.000000 | 0.208940 | dummy_constant | 0.000357 | 0.000000 | -1.100000 | 0.499643 | 0.000263 | False |
| 20 | 0.913921 | 0.478592 | 0.769761 | dummy_constant | 0.913921 | 0.478592 | 0.050790 | 0.171789 | 0.000200 | False |
| 21 | 0.469531 | 0.974836 | 0.849832 | dummy_constant | 0.469531 | 0.974836 | 0.108493 | 0.226398 | 0.000261 | False |
| 22 | 1.053878 | 0.085081 | 0.954948 | dummy_constant | 1.053878 | 0.085081 | 0.090080 | 0.478939 | 0.000260 | False |
| 23 | 0.167560 | 0.897330 | 0.672348 | dummy_constant | 0.167560 | 0.897330 | -0.068484 | 0.268387 | 0.000263 | False |
| 24 | 0.900264 | 0.128169 | 0.831700 | dummy_constant | 0.900264 | 0.128169 | -0.109298 | 0.298469 | 0.000294 | False |
| 25 | 0.734856 | 2.925831 | 0.482777 | dummy_constant | 0.734856 | 2.925831 | 8.170489 | 5.939813 | 0.000294 | False |
| 26 | 0.118174 | 1.057996 | 1.000000 | dummy_constant | 0.118174 | 1.057996 | 0.154064 | 0.457150 | 0.000262 | False |
| 27 | 1.272555 | 0.929550 | 0.875156 | dummy_constant | 1.272555 | 0.929550 | 1.561888 | 0.781355 | 0.000260 | False |
| 28 | 0.355096 | 0.524563 | 0.517329 | dummy_constant | 0.355096 | 0.524563 | -0.499206 | 0.021601 | 0.000260 | False |
| 29 | 0.952683 | 0.280643 | 1.000000 | dummy_constant | 0.952683 | 0.280643 | -0.000796 | 0.253040 | 0.000288 | False |
| 30 | 0.044677 | 0.687869 | 0.914372 | dummy_constant | 0.044677 | 0.687869 | -0.575657 | 0.242614 | 0.000260 | False |
| 31 | 0.229246 | 0.969491 | 1.000000 | dummy_constant | 0.229246 | 0.969491 | 0.076466 | 0.293730 | 0.000216 | False |
| 32 | 0.440509 | 0.652168 | 0.329321 | dummy_constant | 0.440509 | 0.652168 | -0.280949 | 0.026694 | 0.000260 | False |
| 33 | 0.301959 | 0.121618 | 0.175946 | dummy_constant | 0.301959 | 0.121618 | -0.992800 | 0.182393 | 0.000271 | False |
| 34 | 0.528128 | 0.714401 | 0.914090 | dummy_constant | 0.528128 | 0.714401 | -0.138277 | 0.046759 | 0.000263 | False |
| 35 | 0.462432 | 0.900892 | 1.000000 | dummy_constant | 0.462432 | 0.900892 | -0.000859 | 0.162126 | 0.000274 | False |
| 36 | 0.717995 | 0.693275 | 0.236240 | dummy_constant | 0.717995 | 0.693275 | -0.099953 | 0.084877 | 0.000279 | False |
| 37 | 0.685149 | 0.180014 | 0.862140 | dummy_constant | 0.685149 | 0.180014 | -0.441578 | 0.136671 | 0.000264 | False |
Plot results¶
Here we plot the resulting observations in input space, colored by feasibility (neglecting the fact that these data points are at varying fidelities).
fig, ax = plt.subplots()
theta = np.linspace(0, np.pi / 2)
r = np.sqrt(1 + 0.1 * np.cos(16 * theta))
x_1 = r * np.sin(theta)
x_2_lower = r * np.cos(theta)
x_2_upper = (0.5 - (x_1 - 0.5) ** 2) ** 0.5 + 0.5
z = np.zeros_like(x_1)
ax.fill_between(x_1, z, x_2_lower, fc="white")
circle = plt.Circle(
    (0.5, 0.5), 0.5**0.5, color="r", alpha=0.25, zorder=0, label="Valid Region"
)
ax.add_patch(circle)
history = pd.concat(
    [X.data, get_feasibility_data(tnk_vocs, X.data)], axis=1, ignore_index=False
)
ax.plot(*history[["x1", "x2"]][history["feasible"]].to_numpy().T, ".C1")
ax.plot(*history[["x1", "x2"]][~history["feasible"]].to_numpy().T, ".C2")
ax.set_xlim(0, 3.14)
ax.set_ylim(0, 3.14)
ax.set_xlabel("x1")
ax.set_ylabel("x2")
ax.set_aspect("equal")
Plot path through input space¶
ax = history.hist(["x1", "x2", "s"], bins=20)
history.plot(y=["x1", "x2", "s"])
<Axes: >
Plot the acquisition function¶
Here we plot the acquisition function at a small set of fidelities $[0, 0.5, 1.0]$.
fidelities = [0.0, 0.5, 1.0]
for fidelity in fidelities:
    X.generator.visualize_model(
        variable_names=["x1", "x2"],
        reference_point={"s": fidelity},
    )
# examine lengthscale of the first objective
list(X.generator.model.models[0].named_parameters())
[('likelihood.noise_covar.raw_noise',
Parameter containing:
tensor([-139.5713], requires_grad=True)),
('mean_module.raw_constant',
Parameter containing:
tensor(0.8210, requires_grad=True)),
('covar_module.raw_lengthscale',
Parameter containing:
tensor([[ 0.3701, 19.3237, 40.5192]], requires_grad=True))]
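Note that `named_parameters` returns *raw* (unconstrained) values; GPyTorch's default positivity constraint maps them through a softplus, so the actual lengthscales differ from the raw numbers, especially for small values. A minimal sketch, assuming the default softplus constraint (in GPyTorch the transformed values are also available directly via the non-raw attributes, e.g. `covar_module.lengthscale`):

```python
import math


def softplus(x):
    # numerically stable softplus: log(1 + exp(x)),
    # GPyTorch's default positivity transform
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)


# raw lengthscales from the printout above (dims: x1, x2, s)
raw_lengthscales = [0.3701, 19.3237, 40.5192]
print([round(softplus(v), 4) for v in raw_lengthscales])
# small raw values change noticeably (0.3701 -> ~0.895);
# large raw values are nearly unchanged

# the very negative raw noise maps to an essentially zero noise level,
# consistent with use_low_noise_prior=True
print(softplus(-139.5713))
```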