Initial commit

dme65 committed Nov 21, 2019
0 parents commit 8a944cd
Showing 12 changed files with 1,337 additions and 0 deletions.
2 changes: 2 additions & 0 deletions CONTRIBUTORS.md
@@ -0,0 +1,2 @@
Code written by:
- David Eriksson <[email protected]>
41 changes: 41 additions & 0 deletions LICENSE.md
@@ -0,0 +1,41 @@
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by the text below.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under this License.

This License governs use of the accompanying Work, and your use of the Work constitutes acceptance of this License.

You may use this Work for any non-commercial purpose, subject to the restrictions in this License. Some purposes which can be non-commercial are teaching, academic research, and personal experimentation. You may also distribute this Work with books or other teaching materials, or publish the Work on websites, that are intended to teach the use of the Work.

You may not use or distribute this Work, or any derivative works, outputs, or results from the Work, in any form for commercial purposes. Non-exhaustive examples of commercial purposes would be running business operations, licensing, leasing, or selling the Work, or distributing the Work for use with commercial products.

You may modify this Work and distribute the modified Work for non-commercial purposes, however, you may not grant rights to the Work or derivative works that are broader than or in conflict with those provided by this License. For example, you may not distribute modifications of the Work under terms that would permit commercial use, or under terms that purport to require the Work or derivative works to be sublicensed to others.

In return, we require that you agree:

1. Not to remove any copyright or other notices from the Work.

2. That if you distribute the Work in Source or Object form, you will include a verbatim copy of this License.

3. That if you distribute derivative works of the Work in Source form, you do so only under a license that includes all of the provisions of this License and is not in conflict with this License, and if you distribute derivative works of the Work solely in Object form you do so only under a license that complies with this License.

4. That if you have modified the Work or created derivative works from the Work, and distribute such modifications or derivative works, you will cause the modified files to carry prominent notices so that recipients know that they are not receiving the original Work. Such notices must state: (i) that you have changed the Work; and (ii) the date of any changes.

5. If you publicly use the Work or any output or result of the Work, you will provide a notice with such use that provides any person who uses, views, accesses, interacts with, or is otherwise exposed to the Work (i) with information of the nature of the Work, (ii) with a link to the Work, and (iii) a notice that the Work is available under this License.

6. THAT THE WORK COMES "AS IS", WITH NO WARRANTIES. THIS MEANS NO EXPRESS, IMPLIED OR STATUTORY WARRANTY, INCLUDING WITHOUT LIMITATION, WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE OR ANY WARRANTY OF TITLE OR NON-INFRINGEMENT. ALSO, YOU MUST PASS THIS DISCLAIMER ON WHENEVER YOU DISTRIBUTE THE WORK OR DERIVATIVE WORKS.

7. THAT NEITHER UBER TECHNOLOGIES, INC. NOR ANY OF ITS AFFILIATES, SUPPLIERS, SUCCESSORS, NOR ASSIGNS WILL BE LIABLE FOR ANY DAMAGES RELATED TO THE WORK OR THIS LICENSE, INCLUDING DIRECT, INDIRECT, SPECIAL, CONSEQUENTIAL OR INCIDENTAL DAMAGES, TO THE MAXIMUM EXTENT THE LAW PERMITS, NO MATTER WHAT LEGAL THEORY IT IS BASED ON. ALSO, YOU MUST PASS THIS LIMITATION OF LIABILITY ON WHENEVER YOU DISTRIBUTE THE WORK OR DERIVATIVE WORKS.

8. That if you sue anyone over patents that you think may apply to the Work or anyone's use of the Work, your license to the Work ends automatically.

9. That your rights under the License end automatically if you breach it in any way.

10. Uber Technologies, Inc. reserves all rights not expressly granted to you in this License.
95 changes: 95 additions & 0 deletions README.md
@@ -0,0 +1,95 @@
## Overview

This is the code release for the TuRBO algorithm from ***Scalable Global Optimization via Local Bayesian Optimization***, appearing in NeurIPS 2019. The implementation targets the noise-free case and may not work well when observations are noisy, since the center of the trust region should then be chosen based on the posterior mean.

Note that TuRBO is a **minimization** algorithm, so make sure you reformulate any maximization problem accordingly.
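For example, to maximize some function ```g```, simply hand TuRBO its negation (```g``` here is a stand-in for your own objective):

```
def f(x):
    return -g(x)  # minimizing f maximizes g
```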

## Benchmark functions

### Robot pushing
The original code for the robot pushing problem is available at https://github.com/zi-w/Ensemble-Bayesian-Optimization. We made the following changes to the code for our experiments:

1. We turned off the visualization, which speeds up the function evaluations.
2. We replaced all instances of ```np.random.normal(0, 0.01)``` with ```np.random.normal(0, 1e-6)``` in ```push_utils.py```, which makes the function close to noise-free. Another option is to average over several evaluations using the original code; a sketch of this appears after the dependency list below.
3. We flipped the sign of the objective function to turn this into a minimization problem.

Dependencies: ```numpy```, ```pygame```, ```box2d-py```
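If you instead keep the original noisy code, averaging repeated evaluations is a simple alternative (a sketch; ```push_objective``` is a hypothetical stand-in for the wrapped pushing problem):

```
import numpy as np

def averaged(f, x, n_repeats=10):
    # Average repeated noisy evaluations of f at x.
    return np.mean([f(x) for _ in range(n_repeats)])

# value = averaged(push_objective, x)
```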

### Rover
The original code for the rover problem is available at https://github.com/zi-w/Ensemble-Bayesian-Optimization. We used the large version of the problem, which has 60 dimensions, and flipped the sign of the objective function to turn it into a minimization problem.

Dependencies: ```numpy```, ```scipy```

### Lunar

The lunar lander code is available in the OpenAI gym: https://github.com/openai/gym. The goal of the problem is to learn the parameter values of a controller for the lunar lander. The controller we learn is a modification of the original heuristic controller and takes the following form:

```
import numpy as np

def heuristic_Controller(s, w):
    # s: 8-dimensional observation; w: the 12 controller weights we tune.
    angle_targ = s[0] * w[0] + s[2] * w[1]
    if angle_targ > w[2]:
        angle_targ = w[2]
    if angle_targ < -w[2]:
        angle_targ = -w[2]
    hover_targ = w[3] * np.abs(s[0])
    angle_todo = (angle_targ - s[4]) * w[4] - (s[5]) * w[5]
    hover_todo = (hover_targ - s[1]) * w[6] - (s[3]) * w[7]
    if s[6] or s[7]:  # a leg is in contact with the ground
        angle_todo = w[8]
        hover_todo = -(s[3]) * w[9]
    a = 0  # do nothing
    if hover_todo > np.abs(angle_todo) and hover_todo > w[10]:
        a = 2  # fire main engine
    elif angle_todo < -w[11]:
        a = 3  # fire right orientation engine
    elif angle_todo > +w[11]:
        a = 1  # fire left orientation engine
    return a
```

We use the constraints 0 <= w_i <= 2 for all parameters.

For more information about the logic behind this controller and how to integrate it with ```gym```, take a look at the original heuristic controller source code: https://github.com/openai/gym/blob/master/gym/envs/box2d/lunar_lander.py#L364

Dependencies: ```gym```, ```box2d-py```
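To turn the controller into an objective for TuRBO, one can roll out a few episodes and negate the mean reward, as in this minimal sketch (it assumes the classic ```gym``` API and the ```heuristic_Controller``` above; names are illustrative):

```
import gym
import numpy as np

def lunar_objective(w, n_episodes=5, seed=0):
    # Negative mean episode reward; TuRBO minimizes this.
    env = gym.make("LunarLander-v2")
    rewards = []
    for i in range(n_episodes):
        env.seed(seed + i)
        s = env.reset()
        done, total = False, 0.0
        while not done:
            s, r, done, _ = env.step(heuristic_Controller(s, w))
            total += r
        rewards.append(total)
    return -np.mean(rewards)
```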

### Cosmological constant
The code for the cosmological constant problem is available at https://ascl.net/1306.012. Follow the instructions to compile the Fortran code; this gives you an executable ```CAMB``` that you can call to run the simulation.

The parameter names and bounds that we tune are the following:

```
ombh2: [0.01, 0.25]
omch2: [0.01, 0.25]
omnuh2: [0.01, 0.25]
omk: [0.01, 0.25]
hubble: [52.5, 100]
temp_cmb: [2.7, 2.8]
hefrac: [0.2, 0.3]
mneu: [2.9, 3.09]
scalar_amp: [1.5e-9, 2.6e-8]
scalar_spec_ind: [0.72, 5]
rf_fudge: [0, 100]
rf_fudge_he: [0, 100]
```
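One way to evaluate a configuration is to write these values into a copy of the ```CAMB``` parameter file and invoke the executable. The sketch below is hypothetical; the template path, key substitution, and output parsing all depend on your local setup:

```
import subprocess

def run_camb(params, template="params_template.ini", out="params_run.ini"):
    # params: dict mapping the parameter names above to candidate values.
    # Assumes a template ini file that leaves these keys unset.
    with open(template) as f:
        text = f.read()
    overrides = "\n".join(f"{k} = {v}" for k, v in params.items())
    with open(out, "w") as f:
        f.write(text + "\n" + overrides)
    subprocess.run(["./camb", out], check=True)
```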

## Examples
Check the examples folder for notebooks showing how to use TuRBO-1 (```Turbo1```) and TuRBO-m (```TurboM```).
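For reference, a minimal TuRBO-1 run looks roughly like this (mirroring the notebooks; consult them for the full set of constructor arguments):

```
import numpy as np
from turbo import Turbo1

def f(x):
    return np.sum(x ** 2)  # toy objective to minimize

dim = 10
turbo1 = Turbo1(
    f=f,                    # objective mapping a 1D array to a float
    lb=-5 * np.ones(dim),   # lower bounds
    ub=5 * np.ones(dim),    # upper bounds
    n_init=20,              # initial design points
    max_evals=200,          # total evaluation budget
    batch_size=10,          # evaluations proposed per iteration
)
turbo1.optimize()
print(turbo1.fX.min())      # best value found
```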

## Citing us

A pre-print of our paper is available at: https://arxiv.org/abs/1910.01739

```
@article{eriksson2019scalable,
title={Scalable Global Optimization via Local Bayesian Optimization},
author={Eriksson, David and Pearce, Michael and Gardner, Jacob R and Turner, Ryan and Poloczek, Matthias},
journal={arXiv preprint arXiv:1910.01739},
year={2019}
}
```

The link and citation key will be updated when the camera-ready version of the paper is available.
254 changes: 254 additions & 0 deletions examples/Turbo1.ipynb

Large diffs are not rendered by default.

247 changes: 247 additions & 0 deletions examples/TurboM.ipynb

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions requirements.txt
@@ -0,0 +1,3 @@
numpy==1.17.3
torch==1.3.0
gpytorch==0.3.6
8 changes: 8 additions & 0 deletions setup.py
@@ -0,0 +1,8 @@
from setuptools import setup, find_packages

setup(
name="turbo",
version="0.0.1",
packages=find_packages(),
install_requires=["numpy>=1.17.3", "torch>=1.3.0", "gpytorch>=0.3.6"],
)
2 changes: 2 additions & 0 deletions turbo/__init__.py
@@ -0,0 +1,2 @@
from .turbo_1 import Turbo1
from .turbo_m import TurboM
98 changes: 98 additions & 0 deletions turbo/gp.py
@@ -0,0 +1,98 @@
###############################################################################
# Copyright (c) 2019 Uber Technologies, Inc. #
# #
# Licensed under the Uber Non-Commercial License (the "License"); #
# you may not use this file except in compliance with the License. #
# You may obtain a copy of the License at the root directory of this project. #
# #
# See the License for the specific language governing permissions and #
# limitations under the License. #
###############################################################################

import math

import gpytorch
import numpy as np
import torch
from gpytorch.constraints.constraints import Interval
from gpytorch.distributions import MultivariateNormal
from gpytorch.kernels import MaternKernel, ScaleKernel
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.means import ConstantMean
from gpytorch.mlls import ExactMarginalLogLikelihood
from gpytorch.models import ExactGP


# GP Model
class GP(ExactGP):
    def __init__(self, train_x, train_y, likelihood, lengthscale_constraint, outputscale_constraint, ard_dims):
        super(GP, self).__init__(train_x, train_y, likelihood)
        self.ard_dims = ard_dims
        self.mean_module = ConstantMean()
        base_kernel = MaternKernel(lengthscale_constraint=lengthscale_constraint, ard_num_dims=ard_dims, nu=2.5)
        self.covar_module = ScaleKernel(base_kernel, outputscale_constraint=outputscale_constraint)

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return MultivariateNormal(mean_x, covar_x)


def train_gp(train_x, train_y, use_ard, num_steps, hypers={}):
    """Fit a GP model where train_x is in [0, 1]^d and train_y is standardized."""
    assert train_x.ndim == 2
    assert train_y.ndim == 1
    assert train_x.shape[0] == train_y.shape[0]

    # Create hyperparameter bounds
    noise_constraint = Interval(5e-4, 0.2)
    if use_ard:
        lengthscale_constraint = Interval(0.005, 2.0)
    else:
        lengthscale_constraint = Interval(0.005, math.sqrt(train_x.shape[1]))  # [0.005, sqrt(dim)]
    outputscale_constraint = Interval(0.05, 20.0)

    # Create models
    likelihood = GaussianLikelihood(noise_constraint=noise_constraint).to(device=train_x.device, dtype=train_y.dtype)
    ard_dims = train_x.shape[1] if use_ard else None
    model = GP(
        train_x=train_x,
        train_y=train_y,
        likelihood=likelihood,
        lengthscale_constraint=lengthscale_constraint,
        outputscale_constraint=outputscale_constraint,
        ard_dims=ard_dims,
    ).to(device=train_x.device, dtype=train_x.dtype)

    # Find optimal model hyperparameters
    model.train()
    likelihood.train()

    # "Loss" for GPs - the marginal log likelihood
    mll = ExactMarginalLogLikelihood(likelihood, model)

    # Initialize model hypers
    if hypers:
        model.load_state_dict(hypers)
    else:
        hypers = {}
        hypers["covar_module.outputscale"] = 1.0
        hypers["covar_module.base_kernel.lengthscale"] = 0.5
        hypers["likelihood.noise"] = 0.005
        model.initialize(**hypers)

    # Use the adam optimizer
    optimizer = torch.optim.Adam([{"params": model.parameters()}], lr=0.1)

    for _ in range(num_steps):
        optimizer.zero_grad()
        output = model(train_x)
        loss = -mll(output, train_y)
        loss.backward()
        optimizer.step()

    # Switch to eval mode
    model.eval()
    likelihood.eval()

    return model
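A quick usage sketch for ```train_gp``` (illustrative; it expects inputs already scaled to the unit cube and standardized targets):

```
import torch

X = torch.rand(50, 3)                 # 50 points in [0, 1]^3
y = torch.sin(X.sum(dim=-1))
y = (y - y.mean()) / y.std()          # standardize the targets
model = train_gp(X, y, use_ard=True, num_steps=50)
with torch.no_grad():
    test = torch.rand(5, 3)
    pred = model.likelihood(model(test))
    print(pred.mean, pred.variance)
```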