- is very easy to learn but extremely versatile
- provides intelligent optimization algorithms, support for all major machine-learning frameworks, and many interesting applications
- makes optimization data collection simple
- visualizes your collected data
- saves your computation time
- supports parallel computing
| Optimization Techniques | Tested and Supported Packages | Optimization Applications |
|---|---|---|
| Local Search | Machine Learning | Feature Engineering |
| Global Search | Deep Learning | |
| Population Methods | Parallel Computing | |
| Sequential Methods | | |
The examples above do not necessarily use realistic datasets or training procedures. Their purpose is fast execution of the proposed solution and giving the user ideas for interesting use cases.
Regular training:

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.datasets import load_boston

data = load_boston()
X, y = data.data, data.target

gbr = DecisionTreeRegressor(max_depth=10)
score = cross_val_score(gbr, X, y, cv=3).mean()
```

Hyperactive:

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.datasets import load_boston
from hyperactive import Hyperactive

data = load_boston()
X, y = data.data, data.target

def model(opt):
    gbr = DecisionTreeRegressor(max_depth=opt["max_depth"])
    return cross_val_score(gbr, X, y, cv=3).mean()

search_space = {"max_depth": list(range(3, 25))}

hyper = Hyperactive()
hyper.add_search(model, search_space, n_iter=50)
hyper.run()
```
The most recent version of Hyperactive is available on PyPI:

```console
pip install hyperactive
```
```python
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.datasets import load_boston
from hyperactive import Hyperactive

data = load_boston()
X, y = data.data, data.target

# define the model in a function
def model(opt):
    # pass the suggested parameter to the machine learning model
    gbr = GradientBoostingRegressor(
        n_estimators=opt["n_estimators"]
    )
    scores = cross_val_score(gbr, X, y, cv=3)

    # return a single numerical value, which gets maximized
    return scores.mean()

# the search space determines the ranges of parameters you want the optimizer to search through
search_space = {"n_estimators": list(range(10, 200, 5))}

# start the optimization run
hyper = Hyperactive()
hyper.add_search(model, search_space, n_iter=50)
hyper.run()
```
verbosity = ["progress_bar", "print_results", "print_times"]
distribution = "multiprocessing"
Possible parameter types: (str, dict, callable)
Access the parallel processing in three ways:
Multiprocessing:

```python
from multiprocessing import Pool

def multiprocessing_wrapper(process_func, search_processes_paras, **kwargs):
    n_jobs = len(search_processes_paras)

    pool = Pool(n_jobs, **kwargs)
    results = pool.map(process_func, search_processes_paras)

    return results
```
Joblib:

```python
from joblib import Parallel, delayed

def joblib_wrapper(process_func, search_processes_paras, **kwargs):
    n_jobs = len(search_processes_paras)

    jobs = [
        delayed(process_func)(**info_dict)
        for info_dict in search_processes_paras
    ]
    results = Parallel(n_jobs=n_jobs, **kwargs)(jobs)

    return results
```
n_processes = "auto"
optimizer = "default"
Possible parameter types: ("default", initialized optimizer object)
An instance of an optimization class that can be imported from Hyperactive. "default" corresponds to the random-search optimizer. The following classes can be imported and used:
Example:

```python
...
opt_hco = HillClimbingOptimizer(epsilon=0.08)

hyper = Hyperactive()
hyper.add_search(..., optimizer=opt_hco)
hyper.run()
...
```
initialize = {"grid": 4, "random": 2, "vertices": 4}
Possible parameter types: (dict)
The initialization dictionary automatically determines a number of parameters that will be evaluated in the first n iterations (n is the sum of the values in initialize). The initialize keywords are the following:
- grid
- vertices
- random
- warm_start
Example:

```python
...
search_space = {
    "x1": list(range(10, 150, 5)),
    "x2": list(range(2, 12)),
}

ws1 = {"x1": 10, "x2": 2}
ws2 = {"x1": 15, "x2": 10}

hyper = Hyperactive()
hyper.add_search(
    model,
    search_space,
    n_iter=30,
    initialize={"grid": 4, "random": 10, "vertices": 4, "warm_start": [ws1, ws2]},
)
hyper.run()
```
max_score = None
early_stopping = None
Possible parameter types: (dict, None)
Stops the optimization run early if it did not achieve any score improvement within the last iterations. The early_stopping parameter enables you to set three parameters:
- n_iter_no_change: Non-optional int parameter. This marks the last n iterations to look for an improvement over the iterations that came before n. If the best score of the entire run is within those last n iterations, the run will continue (until other stopping criteria are met); otherwise the run will stop.
- tol_abs: Optional float parameter. The score must have improved by at least this absolute tolerance in the last n iterations over the best score in the iterations before n. This is an absolute value, so 0.1 means an improvement of 0.8 -> 0.9 is acceptable, but 0.81 -> 0.9 would stop the run.
- tol_rel: Optional float parameter. The score must have improved by at least this relative tolerance (in percent) in the last n iterations over the best score in the iterations before n. This is a relative value, so 10 means an improvement of 0.8 -> 0.88 is acceptable, but 0.8 -> 0.87 would stop the run.

random_state = None
Possible parameter types: (int, None)
Random state for random processes in the random, numpy and scipy modules.
memory = True
memory_warm_start = None
Possible parameter types: (pandas dataframe, None)
Pandas dataframe that contains score and parameter information that will be automatically loaded into the memory dictionary.
Example:

| score | x1  | x2  | x... |
|-------|-----|-----|------|
| 0.756 | 0.1 | 0.2 | ...  |
| 0.823 | 0.3 | 0.1 | ...  |
| ...   | ... | ... | ...  |
| ...   | ... | ... | ...  |
progress_board = None
Each iteration consists of two steps: the optimization step, in which the optimizer selects the next parameter set, and the evaluation step, in which the objective function is evaluated with that parameter set.
The objective function has one argument that is often called "para", "params" or "opt". This argument is your access to the parameter set that the optimizer has selected in the corresponding iteration.
```python
def objective_function(opt):
    # get x1 and x2 from the argument "opt"
    x1 = opt["x1"]
    x2 = opt["x2"]

    # calculate the score with the parameter set
    score = (x1 * x1 + x2 * x2)

    # return the score
    return score
```
The objective function always needs a score, which shows how "good" or "bad" the current parameter set is. But you can also return some additional information with a dictionary:
```python
def objective_function(opt):
    x1 = opt["x1"]
    x2 = opt["x2"]

    score = (x1 * x1 + x2 * x2)

    other_info = {
        "x1 squared": x1**2,
        "x2 squared": x2**2,
    }

    return score, other_info
```
When you take a look at the results (a pandas dataframe with all iteration information) after the run has ended, you will see the additional information in it. The reason we need a dictionary for this is that Hyperactive needs to know the names of the additional parameters. The score does not need that, because it is always called "score" in the results. You can run this example script if you want to give it a try.
The search space defines what values the optimizer can select during the search. These selected values will be inside the objective function argument and can be accessed like in a dictionary. The values in each search space dimension should always be in a list. If you use np.arange you should put it in a list afterwards:
```python
import numpy as np

search_space = {
    "x1": list(np.arange(100, 101, 1)),
    "x2": list(np.arange(100, 101, 1)),
}
```
A special feature of Hyperactive is shown in the next example. You can put not just numeric values into the search space dimensions, but also strings and functions. This enables a very high flexibility in how you can create your studies.
```python
def func1():
    # do stuff
    return stuff

def func2():
    # do stuff
    return stuff

search_space = {
    "x": list(np.arange(100, 101, 1)),
    "str": ["a string", "another string"],
    "function": [func1, func2],
}
```
If you want to put other types of variables (like numpy arrays, pandas dataframes, lists, ...) into the search space you can do that via functions:
```python
def array1():
    return np.array([0, 1, 2])

def array2():
    return np.array([0, 1, 2])

search_space = {
    "x": list(np.arange(100, 101, 1)),
    "str": ["a string", "another string"],
    "numpy_array": [array1, array2],
}
```
The functions contain the numpy arrays and return them. This way you can use them inside the objective function.
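To make this concrete, a minimal sketch of how a function-valued search space dimension could be used inside the objective function. The optimizer passes the function itself; calling it inside the objective function yields the wrapped array (the names and the summed score below are illustrative assumptions):

```python
import numpy as np

# each function wraps a numpy array for the search space
def array1():
    return np.array([0, 1, 2])

def array2():
    return np.array([3, 4, 5])

search_space = {
    "numpy_array": [array1, array2],
}

def objective_function(opt):
    # the selected value is the function itself; call it to get the array
    array = opt["numpy_array"]()

    # illustrative score: sum of the array elements
    return float(np.sum(array))
```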
Each of the following optimizer classes can be initialized and passed to the "add_search" method via the "optimizer" argument. During this initialization the optimizer class accepts additional parameters. You can read more about each optimization strategy and its parameters in the Optimization Tutorial.
The progress board enables the visualization of search data during the optimization run. This will help you to understand what is happening during the optimization and give an overview of the explored parameter sets and scores.
The following script provides an example:
```python
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.datasets import load_boston
from hyperactive import Hyperactive

# import the ProgressBoard
from hyperactive.dashboards import ProgressBoard

data = load_boston()
X, y = data.data, data.target

def model(opt):
    gbr = GradientBoostingRegressor(
        n_estimators=opt["n_estimators"],
        max_depth=opt["max_depth"],
        min_samples_split=opt["min_samples_split"],
    )
    scores = cross_val_score(gbr, X, y, cv=3)
    return scores.mean()

search_space = {
    "n_estimators": list(range(50, 150, 5)),
    "max_depth": list(range(2, 12)),
    "min_samples_split": list(range(2, 22)),
}

# create an instance of the ProgressBoard
progress_board = ProgressBoard()

hyper = Hyperactive()

# pass the instance of the ProgressBoard to .add_search(...)
hyper.add_search(
    model,
    search_space,
    n_iter=120,
    progress_board=progress_board,
)

# a terminal will open, which opens a dashboard in your browser
hyper.run()
```
objective_function
returns: dictionary
Parameter dictionary of the best score of the given objective_function found in the previous optimization run.
Example:

```python
{
    'x1': 0.2,
    'x2': 0.3,
}
```
objective_function
returns: Pandas dataframe
The dataframe contains score, parameter information, iteration times and evaluation times of the given objective_function found in the previous optimization run.
Example:

| score | x1  | x2  | x... | eval_times | iter_times |
|-------|-----|-----|------|------------|------------|
| 0.756 | 0.1 | 0.2 | ...  | 0.953      | 1.123      |
| 0.823 | 0.3 | 0.1 | ...  | 0.948      | 1.101      |
| ...   | ... | ... | ...  | ...        | ...        |
| ...   | ... | ... | ...  | ...        | ...        |
- [x] connect two different model/dataset hashes
- [x] split two different model/dataset hashes
- [x] delete memory of model/dataset
- [x] return best known model for dataset
- [x] return search space for best model
- [x] return best parameter for best model
The following algorithms are of my own design and, to my knowledge, do not yet exist in the technical literature. If any of these algorithms already exists, I would be glad if you shared it with me in an issue.
A combination between simulated annealing and random search.
The error might be located in the optimization backend. Look at the error message from the command line. If one of the last messages looks like this:

```
File "/.../gradient_free_optimizers/...", line ...
```

Then you should post the bug report in:

- https://github.com/SimonBlanke/GradientFreeOptimizers

Otherwise you can post the bug report in Hyperactive.
Do you have the correct Hyperactive version?
With every major version update (e.g. v2.2 -> v3.0) the API of Hyperactive changes. Check which version of Hyperactive you have. If your major version is older, you have two options:
Recommended: You could just update your Hyperactive version with:

```console
pip install --upgrade hyperactive
```
This way you can use all the new documentation and examples from the current repository.
Or you could continue using the old version and use an old repository branch as documentation. You can do that by selecting the corresponding branch (top right of the repository; the default is "master" or "main"). So if your major version is older (e.g. v2.1.0), you can select the 2.x.x branch to get the old repository for that version.
This is expected behaviour of the current implementation of SMB-optimizers. For all sequential model-based algorithms you have to keep an eye on the search space size:

```python
search_space_size = 1
for value_ in search_space.values():
    search_space_size *= len(value_)

print("search_space_size", search_space_size)
```
Reduce the search space size to resolve this error.
Setting distribution to "joblib" may fix this problem:
```python
hyper = Hyperactive(distribution="joblib")
```
These are very often warnings from sklearn or numpy. Those warnings do not correlate with bad performance from Hyperactive; your code will most likely run fine. Those warnings are unfortunately very difficult to silence.
Put this at the very top of your script:
```python
def warn(*args, **kwargs):
    pass

import warnings
warnings.warn = warn
```
```bibtex
@Misc{hyperactive2021,
  author       = {{Simon Blanke}},
  title        = {{Hyperactive}: An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.},
  howpublished = {\url{https://github.com/SimonBlanke}},
  year         = {since 2019}
}
```