# Local asynchronous evaluator¶

In this example, we shall see how to use the local asynchronous evaluator. This is particularly useful when the cost function takes a long time to evaluate, especially when evaluation times are uncertain. In such situations, a better way to parallelize evaluations is to run each one as an individual job on the evaluation platform (the local machine) and track job completion through the process ID in the process table. This approach naturally supports asynchronous Bayesian updates, i.e., updating the iteration without waiting for all jobs in that iteration to complete. The unassimilated jobs are then used for updates in subsequent iterations.

To achieve this, the framework creates an individual job directory for each cost function evaluation. It also provides an interface for the user to set up these directories for a self-contained evaluation. The cost function needs to write out a file with the result (the cost value), which the framework parses once execution completes. For this, the user needs to provide functions that

1. generate necessary files in the folder for function evaluation
2. provide the command for executing the cost function, and
3. parse the result file generated by the cost function
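The three hooks above can be sketched as plain Python callables. The names `folder_generator`, `run_cmd` and `result_parser` mirror those used later in this example; the bodies below (and the placeholder script name `evaluator.py`) are purely illustrative stubs, not the framework's implementation:

```python
import os
import sys


def folder_generator(directory, x):
    # 1. Prepare `directory` for one evaluation at point x.
    #    (Nothing to write for this simple example.)
    pass


def run_cmd(directory, x):
    # 2. Return the command that launches the out-of-script evaluation;
    #    'evaluator.py' is a placeholder name. Command-line arguments
    #    must be strings, so the coordinates are converted.
    return [sys.executable, 'evaluator.py'] + [str(xi) for xi in x]


def result_parser(directory, x):
    # 3. Read back the cost value the evaluator wrote into result.txt.
    with open(os.path.join(directory, 'result.txt')) as f:
        return float(f.read())
```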

In the rest of the document, we shall see how to set up a sample cost function, a folder generator, a run command and a result parser. Once again, we shall use the same parabolic cost function for easy understanding, but in 2 dimensions.

## Out-of-script cost function¶

The first step in performing out-of-script evaluation is to re-define the cost function. While a generic cost function in an optimization framework is a Python function that returns the cost value, out-of-script functions need to be defined differently. Firstly, they do not receive arguments directly, and secondly, they cannot pass the function value back to the optimization framework directly. While there are several ways to overcome these difficulties, this framework requires the following template to be followed:

- Input: the cost function can either take inline arguments or read from a file
- Output: the cost function should write its result to a standard file

For example, the following code evaluates the parabola in 2 dimensions and writes the result to the file result.txt. The location of evaluation is passed as inline arguments.

```python
import sys
import time

import numpy as np

import examples.examples_all_functions as exf

# read the command line arguments into an array
xs = np.asarray([float(x) for x in sys.argv[1:]])

# evaluate the cost, i.e. the parabola
cost = exf.parabolic_cost_function(x=xs)

# In order to demonstrate uncertain evaluation times, we use a random sleep in
# each cost function. In this case, each evaluation can take between 0-10 sec.
time.sleep(np.random.random() * 10.)

# Write into a result file. Note that this script is evaluated in its respective
# folder, so result.txt will be in the generated folder and not the home
# directory of running example4.py.
with open('result.txt', 'w') as f:
    f.write(str(cost) + '\n')
```
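The helper `exf.parabolic_cost_function` comes from the examples package. For a quick standalone check of the input/output template, the same script can be sketched with the parabola inlined and the random sleep dropped (the function name `parabolic_cost` is illustrative):

```python
import sys

import numpy as np


def parabolic_cost(x):
    # 2-D parabola with its minimum of 0 at the origin; a stand-in for
    # exf.parabolic_cost_function from the examples package.
    return float(np.sum(np.asarray(x, dtype=float) ** 2))


if __name__ == '__main__':
    # inline arguments in, result.txt out -- the template described above
    cost = parabolic_cost([float(v) for v in sys.argv[1:]])
    with open('result.txt', 'w') as f:
        f.write(str(cost) + '\n')
```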


## Folder generator¶

Many common simulations require not just the location of evaluation but also several other pieces to be in place to work properly. For example, many finite element simulations require a geometry mesh file that represents the simulation domain. The optimizer calls this function, job_generator(), with two arguments: the folder of evaluation (more on this below) and the location x at which the cost function is executed.

```python
import os


def folder_generator(directory, x) -> None:
    """
    Prepares a given folder for performing the simulations. The cost function
    (out-of-script) will be executed in this directory for location x.
    Typically this involves writing a config file and generating/copying meshes.

    In our example, we are running a simple case that does not require any
    files to be filled in. We shall pass the location of the cost function
    as a command line argument instead.
    """
    # Write an (empty) config file to illustrate where folder set-up would go.
    with open(os.path.join(directory, 'config.txt'), 'w') as f:
        pass  # write file
```


## Run command¶

Just like a folder of files for execution, the user may need to provide command line arguments to the cost function at execution time. To achieve this, the optimizer calls the function run_cmd_generator() with the folder of evaluation and the location x at which the cost function has to be evaluated. This allows the run-time arguments to change based on the evaluation point.

```python
import os
import sys
from typing import Any, List


def run_cmd(directory, x) -> List[Any]:
    """
    Command to run on the local machine to get the value of the cost function
    at x, in directory. In this example, we shall run the script
    example3_evaluator.py with the location as arguments.
    """
    eval_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'example3_evaluator.py')
    # Command line arguments must be strings, so convert the coordinates.
    return [sys.executable, eval_path] + [str(xi) for xi in x]
```


## Result parser¶

Once the cost function writes the cost value into a file, the result parser reads that value and returns it to the optimizer. Some local post-processing operations can also go into this function. Care should be taken to return only float values; otherwise, it can lead to type inconsistencies in the optimization routine. The function signature is the same as that of run_cmd_generator() and job_generator().

```python
import os


def result_parser(directory, x) -> float:
    """
    Parses the result from a file and returns the cost function value.
    The file is written by the actual cost function. One can also do post
    processing in this function and return the subsequent value. Based on the
    construct of our cost function example3_evaluator.py, the generated
    result.txt will be in this 'directory'.
    """
    with open(os.path.join(directory, 'result.txt'), 'r') as f:
        return float(f.read())
```


## Asynchronous optimization¶

Once the above functions are created, the only new step needed to use asynchronous evaluations is setting up the evaluator. This requires passing in the three functions, namely job_generator(), run_cmd_generator() and parse_result(). Along with these, it is optional to pass in the location of function evaluations (jobs_dir) and the fraction of jobs (required_fraction) that must complete before an iteration update proceeds. The evaluator creates separate folders in this directory (relative path) for each cost function evaluation. Each cost function call is assigned a (randomly named) directory within the specified jobs_dir, where the run_cmd (from run_cmd_generator()) is invoked.

```python
evaluator = AsyncLocalEvaluator(job_generator=folder_generator,
                                run_cmd_generator=run_cmd,
                                parse_result=result_parser,
                                required_fraction=0.5,
                                jobs_dir=os.path.join(os.getcwd(), 'temp/opt_jobs'))
```


Since we are using multiple optima per iteration, we can take advantage of this to deploy simultaneous exploration and exploitation in the acquisition function. For example, the following code creates a list of two functions: one exploratory (kappa = 1000) and one exploitative (kappa = 0.1). This list is then passed to the optimizer, as in the previous examples.

```python
n_opt = 2
my_kappa_funcs = []
my_kappa_funcs.append(lambda iter_num: 1000)            # exploration
my_kappa_funcs.append(lambda iter_num: 0.1)             # exploitation
```


One can get more crafty in designing these kappa strategies and create a so-called annealing kappa: one that starts at a large value and eventually reduces to a small value, at a different rate for each optimum.

```python
for j in range(n_opt):
    my_kappa_funcs.append(lambda curr_iter_num, freq=10. * (j * j + 2), t_const=0.8 / (1. + j):
                          user_defined_kappa(curr_iter_num, freq=freq, t_const=t_const))
```
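The snippet above calls `user_defined_kappa()`, which is not defined in this excerpt. A minimal sketch of such an annealing schedule, assuming `freq` controls an oscillation period and `t_const` a geometric decay rate (the parameter names come from the call above; the body and the bounds `kappa_max`/`kappa_min` are purely illustrative):

```python
import math


def user_defined_kappa(curr_iter_num, freq, t_const):
    # Decaying envelope: starts near kappa_max and shrinks geometrically
    # with the iteration number.
    kappa_max, kappa_min = 1000.0, 0.1
    envelope = kappa_max * (t_const ** curr_iter_num)
    # Oscillation in [0, 1] alternates exploration/exploitation emphasis.
    oscillation = 0.5 * (1.0 + math.cos(2.0 * math.pi * curr_iter_num / freq))
    return max(envelope * oscillation, kappa_min)
```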


The remaining part of the optimization stays the same, except for the initialization of the BayesOpt object.

```python
b_opt = BayesOpt(cost_function=evaluator,
                 n_dim=n_dim, n_opt=n_opt, n_init=2,
                 u_bound=u_bound, l_bound=l_bound,
                 kern_function='matern_52',
                 acq_func='LCB', kappa_strategy=my_kappa_funcs,
                 if_restart=False)

for curr_iter in range(iter_max):
    b_opt.update_iter()
    if not curr_iter % 2:
        b_opt.estimate_best_kernel_parameters(theta_bounds=[[0.01, 10]])

exf.visualize_fit(b_opt)
```