Specifying generator options¶
We start with the generator defaults and modify them as needed for conservative
exploration, which helps avoid constraint violations. Note that because we are exploring a noiseless problem, we set use_low_noise_prior=True in the GP model constructor.
In [1]:
import os
import warnings
from copy import deepcopy

from xopt import Xopt, Evaluator
from xopt.generators.bayesian import BayesianExplorationGenerator
from xopt.resources.test_functions.tnk import evaluate_TNK, tnk_vocs

# ignore all warnings for cleaner notebook output
warnings.filterwarnings("ignore")

# set reduced values if testing
SMOKE_TEST = os.environ.get("SMOKE_TEST")
NUM_MC_SAMPLES = 1 if SMOKE_TEST else 128
NUM_RESTARTS = 1 if SMOKE_TEST else 20

# drop the TNK objectives and track y1 as an observable instead
vocs = deepcopy(tnk_vocs)
vocs.objectives = {}
vocs.observables = ["y1"]

# conservative exploration: limit each step to 25% of each variable's range
generator = BayesianExplorationGenerator(vocs=vocs)
generator.max_travel_distances = [0.25, 0.25]
generator.numerical_optimizer.n_restarts = NUM_RESTARTS
generator.n_monte_carlo_samples = NUM_MC_SAMPLES
generator.gp_constructor.use_low_noise_prior = True

evaluator = Evaluator(function=evaluate_TNK)
X = Xopt(generator=generator, evaluator=evaluator, vocs=vocs)
X
Out[1]:
Xopt
________________________________
Version: 2.6.7.dev55+g7aa2f3618.d20250930
Data size: 0
Config as YAML:
dump_file: null
evaluator:
  function: xopt.resources.test_functions.tnk.evaluate_TNK
  function_kwargs:
    raise_probability: 0
    random_sleep: 0
    sleep: 0
  max_workers: 1
  vectorized: false
generator:
  computation_time: null
  custom_objective: null
  fixed_features: null
  gp_constructor:
    covar_modules: {}
    custom_noise_prior: null
    mean_modules: {}
    name: standard
    trainable_mean_keys: []
    transform_inputs: true
    use_cached_hyperparameters: false
    use_low_noise_prior: true
  max_travel_distances:
  - 0.25
  - 0.25
  model: null
  n_candidates: 1
  n_interpolate_points: null
  n_monte_carlo_samples: 128
  name: bayesian_exploration
  numerical_optimizer:
    max_iter: 2000
    max_time: 5.0
    n_restarts: 20
    name: LBFGS
  supports_batch_generation: true
  supports_constraints: true
  turbo_controller: null
  use_cuda: false
max_evaluations: null
serialize_inline: false
serialize_torch: false
strict: true
vocs:
  constants:
    a: dummy_constant
  constraints:
    c1:
    - GREATER_THAN
    - 0.0
    c2:
    - LESS_THAN
    - 0.5
  objectives: {}
  observables:
  - y1
  variables:
    x1:
    - 0.0
    - 3.14159
    x2:
    - 0.0
    - 3.14159
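Since the full configuration serializes to YAML, an equivalent Xopt object can also be built from a YAML string. A minimal sketch, assuming Xopt.from_yaml accepts the abbreviated schema below (omitted fields take their defaults):

from xopt import Xopt

# sketch: abbreviated YAML config; the evaluator function is referenced
# by its import path
yaml_config = """
evaluator:
  function: xopt.resources.test_functions.tnk.evaluate_TNK
generator:
  name: bayesian_exploration
  max_travel_distances: [0.25, 0.25]
  gp_constructor:
    use_low_noise_prior: true
vocs:
  variables:
    x1: [0.0, 3.14159]
    x2: [0.0, 3.14159]
  constraints:
    c1: [GREATER_THAN, 0.0]
    c2: [LESS_THAN, 0.5]
  observables: [y1]
"""
X2 = Xopt.from_yaml(yaml_config)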
Run exploration¶
We start by evaluating two points that we know satisfy the constraints, then run the exploration loop (two steps in this example).
In [2]:
X.evaluate_data({"x1": [1.0, 0.75], "x2": [0.7, 0.95]})
Out[2]:
|   | x1 | x2 | a | y1 | y2 | c1 | c2 | xopt_runtime | xopt_error |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1.00 | 0.70 | dummy_constant | 1.00 | 0.70 | 0.584045 | 0.290 | 0.000158 | False |
| 1 | 0.75 | 0.95 | dummy_constant | 0.75 | 0.95 | 0.494833 | 0.265 | 0.000134 | False |
In [3]:
for i in range(2):
    print(f"step {i}")
    X.step()
step 0
step 1
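Each call to X.step() generates one candidate, evaluates it, and appends the result to X.data. The loop can also be driven by Xopt itself; a minimal sketch, assuming max_evaluations counts all evaluated points, including those already in the data:

# sketch: let Xopt drive the loop instead of calling step() manually
# (assumes max_evaluations includes the points already evaluated)
X.max_evaluations = 4
X.run()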
In [4]:
# view the data
X.data
Out[4]:
|   | x1 | x2 | a | y1 | y2 | c1 | c2 | xopt_runtime | xopt_error |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1.000000 | 0.700000 | dummy_constant | 1.000000 | 0.700000 | 0.584045 | 0.290000 | 0.000158 | False |
| 1 | 0.750000 | 0.950000 | dummy_constant | 0.750000 | 0.950000 | 0.494833 | 0.265000 | 0.000134 | False |
| 2 | 0.000000 | 0.164602 | dummy_constant | 0.000000 | 0.164602 | -1.072906 | 0.362491 | 0.000149 | False |
| 3 | 0.785397 | 0.008041 | dummy_constant | 0.785397 | 0.008041 | -0.481748 | 0.323476 | 0.000149 | False |
In [5]:
# plot results
ax = X.data.plot("x1", "x2")
ax.set_aspect("equal")
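The plot above traces the sample path but does not show which points were feasible. A short sketch, using standard matplotlib plus the feasibility data introduced below, that colors samples by feasibility:

import matplotlib.pyplot as plt

# color feasible samples blue and infeasible samples red
feasible = X.vocs.feasibility_data(X.data)["feasible"]
fig, ax = plt.subplots()
ax.scatter(
    X.data["x1"], X.data["x2"],
    c=feasible.map({True: "tab:blue", False: "tab:red"}),
)
ax.set_xlabel("x1")
ax.set_ylabel("x2")
ax.set_aspect("equal")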
Introspect models, acquisition function and feasibility prediction¶
During exploration, Gaussian process models are created for each observable and constraint. We demonstrate how to view them below.
In [6]:
fig, ax = X.generator.visualize_model(show_feasibility=True, n_grid=100)
Generator model hyperparameters¶
In [7]:
# print generator model hyperparameters
for name, val in X.generator.model.named_parameters():
    print(f"{name}:{val}")

X.generator.model.models[2].covar_module.lengthscale
models.0.likelihood.noise_covar.raw_noise:Parameter containing:
tensor([-22.5689], dtype=torch.float64, requires_grad=True)
models.0.mean_module.raw_constant:Parameter containing:
tensor(-0.2911, dtype=torch.float64, requires_grad=True)
models.0.covar_module.raw_lengthscale:Parameter containing:
tensor([[0.2037, 0.3774]], dtype=torch.float64, requires_grad=True)
models.1.likelihood.noise_covar.raw_noise:Parameter containing:
tensor([-23.3103], dtype=torch.float64, requires_grad=True)
models.1.mean_module.raw_constant:Parameter containing:
tensor(0.1427, dtype=torch.float64, requires_grad=True)
models.1.covar_module.raw_lengthscale:Parameter containing:
tensor([[0.6330, 0.1772]], dtype=torch.float64, requires_grad=True)
models.2.likelihood.noise_covar.raw_noise:Parameter containing:
tensor([-23.4274], dtype=torch.float64, requires_grad=True)
models.2.mean_module.raw_constant:Parameter containing:
tensor(-0.0952, dtype=torch.float64, requires_grad=True)
models.2.covar_module.raw_lengthscale:Parameter containing:
tensor([[0.2152, 0.6472]], dtype=torch.float64, requires_grad=True)
Out[7]:
Parameter containing:
tensor([[0.2152, 0.6472]], dtype=torch.float64, requires_grad=True)
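Note that named_parameters() lists the raw, unconstrained parameter tensors. For constrained hyperparameters, GPyTorch exposes the transformed values through properties; a sketch using standard GPyTorch attributes:

# actual (transformed) noise variance, not the raw_noise value above
print(X.generator.model.models[0].likelihood.noise)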
Examine the number of constraint violations¶
Using the convenience function provided by the vocs object, we can determine which samples violate either or both of our constraints.
In [8]:
X.vocs.feasibility_data(X.data)
Out[8]:
|   | feasible_c1 | feasible_c2 | feasible |
|---|---|---|---|
| 0 | True | True | True |
| 1 | True | True | True |
| 2 | False | True | False |
| 3 | False | True | False |
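The feasibility flags share X.data's index, so they can be used directly to select only the feasible samples:

# keep only the samples that satisfy both constraints
feasible_samples = X.data[X.vocs.feasibility_data(X.data)["feasible"]]
feasible_samples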
In [9]:
# generate next point
X.generator.generate(1)
Out[9]:
[{'x1': 1.570795, 'x2': 0.6066689539639962}]
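The candidate above illustrates the effect of max_travel_distances: each step away from the most recent point is limited to a fraction of each variable's range. A sketch that checks this, assuming the distance is measured from the last evaluated point:

# check the proposed step length against the last evaluated point
candidate = X.generator.generate(1)[0]
last = X.data.iloc[-1]
for name, (lo, hi) in X.vocs.variables.items():
    frac = abs(candidate[name] - last[name]) / (hi - lo)
    print(f"{name}: {frac:.3f}")  # expected <= 0.25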