In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm.

The same kind of machine learning model can require different constraints, weights or learning rates to generalize different data patterns. These measures are called hyperparameters, and have to be tuned so that the model can optimally solve the machine learning problem. Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined loss function on given independent data. The objective function takes a tuple of hyperparameters and returns the associated loss. Cross-validation is often used to estimate this generalization performance.
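
For instance, a minimal sketch of such an objective function, assuming scikit-learn, an RBF-kernel support vector classifier and the Iris dataset purely as illustrative choices, could look as follows:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    def objective(C, gamma):
        """Map one hyperparameter tuple to an estimated generalization loss."""
        model = SVC(C=C, gamma=gamma, kernel="rbf")
        accuracy = cross_val_score(model, X, y, cv=5).mean()  # 5-fold cross-validation
        return 1.0 - accuracy  # hyperparameter optimization minimizes this value

    print(objective(C=10.0, gamma=0.1))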


Approaches

Grid search

The traditional way of performing hyperparameter optimization has been grid search, or a parameter sweep, which is simply an exhaustive search through a manually specified subset of the hyperparameter space of a learning algorithm. A grid search algorithm must be guided by some performance metric, typically measured by cross-validation on the training set or evaluation on a held-out validation set.

Since the parameter space of a machine learner may include real-valued or unbounded value spaces for certain parameters, manually set bounds and discretization may be necessary before applying grid search.

For example, a typical soft-margin SVM classifier equipped with an RBF kernel has at least two hyperparameters that need to be tuned for good performance on unseen data: a regularization constant C and a kernel hyperparameter γ. Both parameters are continuous, so to perform grid search, one selects a finite set of "reasonable" values for each, say

C ∈ {10, 100, 1000}
γ ∈ {0.1, 0.2, 0.5, 1.0}

Grid search then trains an SVM with each pair (C, γ) in the Cartesian product of these two sets and evaluates its performance on a held-out validation set (or by internal cross-validation on the training set, in which case multiple SVMs are trained per pair). Finally, the grid search algorithm outputs the settings that achieved the highest score in the validation procedure.
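
A minimal sketch of this procedure, assuming scikit-learn's GridSearchCV (listed under Software below) and the Iris dataset as an illustrative choice:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # The finite "reasonable" sets chosen above; every (C, gamma) pair in their
    # Cartesian product is trained and scored by internal 5-fold cross-validation.
    param_grid = {"C": [10, 100, 1000], "gamma": [0.1, 0.2, 0.5, 1.0]}

    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)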

Grid search suffers from the curse of dimensionality, but is often embarrassingly parallel because typically the hyperparameter settings it evaluates are independent of each other.

Random search

Since grid search is an exhaustive and therefore potentially expensive method, several alternatives have been proposed. In particular, a randomized search that simply samples parameter settings a fixed number of times has been found to be more effective in high-dimensional spaces than exhaustive search. This is because, in practice, some hyperparameters turn out not to significantly affect the loss; for a fixed evaluation budget, randomly sampled settings therefore probe many more distinct values of the hyperparameters that do matter, whereas a grid spends much of its budget re-evaluating values of parameters that have little effect.
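
A minimal sketch of random search, again assuming scikit-learn and the same SVM example; the sampling distributions and evaluation budget are illustrative assumptions:

    from scipy.stats import loguniform, uniform
    from sklearn.datasets import load_iris
    from sklearn.model_selection import RandomizedSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Instead of a fixed grid, each hyperparameter is drawn from a distribution,
    # and only a fixed budget of n_iter settings is evaluated.
    param_distributions = {"C": loguniform(1e0, 1e3), "gamma": uniform(0.05, 1.0)}

    search = RandomizedSearchCV(SVC(kernel="rbf"), param_distributions,
                                n_iter=20, cv=5, random_state=0)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)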

Bayesian optimization

Bayesian optimization is a methodology for the global optimization of noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a statistical model of the function mapping hyperparameter values to the objective evaluated on a validation set. Intuitively, the methodology assumes that there is some smooth but noisy function that maps hyperparameters to the objective. One aims to gather observations in such a manner as to evaluate the machine learning model as few times as possible while revealing as much information as possible about this function and, in particular, the location of the optimum. Bayesian optimization relies on a very general prior over functions which, when combined with observed hyperparameter values and the corresponding outputs, yields a distribution over functions. The methodology proceeds by iteratively picking hyperparameters to observe (experiments to run) in a manner that trades off exploration (hyperparameters whose outcome is most uncertain) and exploitation (hyperparameters expected to have a good outcome). In practice, Bayesian optimization has been shown to obtain better results in fewer evaluations than grid search and random search, due to its ability to reason about the quality of experiments before they are run.
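
A minimal sketch of one such loop, assuming a Gaussian-process surrogate (via scikit-learn) and the expected-improvement acquisition function; the one-dimensional search over log10(C) for the SVM example and all numeric settings are illustrative assumptions:

    import numpy as np
    from scipy.stats import norm
    from sklearn.datasets import load_iris
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    def objective(log_c):
        """Validation loss (1 - mean CV accuracy) as a function of log10(C)."""
        model = SVC(C=10.0 ** log_c, gamma=0.1, kernel="rbf")
        return 1.0 - cross_val_score(model, X, y, cv=5).mean()

    bounds = (-2.0, 3.0)                             # search range for log10(C)
    rng = np.random.default_rng(0)
    observed_x = list(rng.uniform(*bounds, size=3))  # a few initial experiments
    observed_y = [objective(x) for x in observed_x]

    surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    for _ in range(10):
        # Fit the statistical model of hyperparameter -> validation loss.
        surrogate.fit(np.array(observed_x).reshape(-1, 1), np.array(observed_y))

        # Expected improvement over the best loss seen so far, on a candidate grid.
        candidates = np.linspace(*bounds, 200).reshape(-1, 1)
        mu, sigma = surrogate.predict(candidates, return_std=True)
        improvement = min(observed_y) - mu
        z = improvement / np.maximum(sigma, 1e-9)
        ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)

        # Exploration (large sigma) and exploitation (low predicted loss) trade off here.
        next_x = float(candidates[np.argmax(ei), 0])
        observed_x.append(next_x)
        observed_y.append(objective(next_x))

    print("best log10(C):", observed_x[int(np.argmin(observed_y))])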

Gradient-based optimization

For specific learning algorithms, it is possible to compute the gradient with respect to hyperparameters and then optimize the hyperparameters using gradient descent. The first usage of these techniques was focused on neural networks. Since then, these methods have been extended to other models such as support vector machines or logistic regression.
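
A minimal sketch of such a hyperparameter gradient, assuming ridge regression so that both the trained weights and the validation loss can be differentiated with respect to the regularization strength in closed form; the synthetic data and step size are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)
    X_tr = rng.normal(size=(80, 5))
    X_val = rng.normal(size=(40, 5))
    y_tr = X_tr @ w_true + 0.1 * rng.normal(size=80)
    y_val = X_val @ w_true + 0.1 * rng.normal(size=40)

    lam = 1.0  # the hyperparameter being tuned
    for _ in range(50):
        # Inner problem: closed-form ridge solution w(lam) = (X'X + lam*I)^-1 X'y.
        A = X_tr.T @ X_tr + lam * np.eye(5)
        w = np.linalg.solve(A, X_tr.T @ y_tr)

        # Validation loss L(lam) = ||X_val w(lam) - y_val||^2 / (2n) and its gradient,
        # using dw/dlam = -A^-1 w and the chain rule dL/dlam = (dL/dw)' (dw/dlam).
        residual = X_val @ w - y_val
        dL_dw = X_val.T @ residual / len(y_val)
        dw_dlam = -np.linalg.solve(A, w)
        dL_dlam = dL_dw @ dw_dlam

        lam = max(lam - 10.0 * dL_dlam, 1e-6)  # gradient step, kept positive

    print("tuned regularization strength:", lam)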

A different approach to obtaining a gradient with respect to hyperparameters is to differentiate the steps of an iterative optimization algorithm using automatic differentiation.

Evolutionary optimization

Evolutionary optimization is a methodology for the global optimization of noisy black-box functions. In hyperparameter optimization, evolutionary optimization uses evolutionary algorithms to search the space of hyperparameters for a given algorithm. Evolutionary hyperparameter optimization follows a process inspired by the biological concept of evolution:

  1. Create an initial population of random solutions (i.e., randomly generate tuples of hyperparameters, typically 100+)
  2. Evaluate the hyperparameter tuples and obtain their fitness values (e.g., 10-fold cross-validation accuracy of the machine learning algorithm with those hyperparameters)
  3. Rank the hyperparameter tuples by their relative fitness
  4. Replace the worst-performing hyperparameter tuples with new hyperparameter tuples generated through crossover and mutation
  5. Repeat steps 2-4 until satisfactory algorithm performance is reached or algorithm performance is no longer improving

Evolutionary optimization has been used in hyperparameter optimization for statistical machine learning algorithms, automated machine learning, deep neural network architecture search, as well as training of the weights in deep neural networks.
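
A minimal illustration of steps 1-5 above, reusing the RBF-kernel SVM example; the small population, the crossover/mutation rules and the fixed number of generations are illustrative assumptions (practical implementations typically use much larger populations):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    rng = np.random.default_rng(0)

    def fitness(c, gamma):
        # Step 2: cross-validation accuracy of the model with this hyperparameter tuple.
        return cross_val_score(SVC(C=c, gamma=gamma, kernel="rbf"), X, y, cv=5).mean()

    # Step 1: an initial population of random (C, gamma) tuples.
    population = [(10 ** rng.uniform(-1, 3), 10 ** rng.uniform(-3, 1)) for _ in range(12)]

    for generation in range(10):
        # Step 3: rank the tuples by their fitness, best first.
        population.sort(key=lambda p: fitness(*p), reverse=True)

        # Step 4: replace the worst half with children of the best half, produced by
        # crossover (geometric mean of two parents) and mutation (log-normal noise).
        survivors = population[: len(population) // 2]
        children = []
        while len(survivors) + len(children) < len(population):
            i, j = rng.choice(len(survivors), size=2, replace=False)
            (c1, g1), (c2, g2) = survivors[i], survivors[j]
            children.append((np.sqrt(c1 * c2) * np.exp(rng.normal(0, 0.2)),
                             np.sqrt(g1 * g2) * np.exp(rng.normal(0, 0.2))))
        population = survivors + children  # step 5: repeat for a fixed number of generations

    best_c, best_gamma = max(population, key=lambda p: fitness(*p))
    print("best C:", best_c, "best gamma:", best_gamma)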

Others

Radial basis function (RBF) and spectral approaches have also been developed.


Software

Grid search

  • LIBSVM comes with scripts for performing grid search.
  • scikit-learn is a Python package which includes grid search.

Random search

  • hyperopt and hyperopt-sklearn are Python packages which include random search.
  • scikit-learn is a Python package which includes random search.
  • H2O AutoML provides automated data preparation, hyperparameter tuning via random search, and stacked ensembles in a distributed machine learning platform.

Bayesian

  • Spearmint is a package to perform Bayesian optimization of machine learning algorithms.
  • BayesOpt is an efficient implementation of Bayesian optimization in C/C++ with support for Python, MATLAB and Octave.
  • MOE is a Python/C++/CUDA library implementing Bayesian global optimization using Gaussian processes.
  • Auto-WEKA is a Bayesian hyperparameter optimization layer on top of WEKA.
  • Auto-sklearn is a Bayesian hyperparameter optimization layer on top of scikit-learn.

Gradient based

  • hypergrad is a Python package for differentiation with respect to hyperparameters.

Evolutionary

  • TPOT is a Python package that automatically creates and optimizes full machine learning pipelines using genetic programming.
  • devol is a Python package that performs Deep Neural Network architecture search using genetic programming.

Other

  • hyperopt and hyperopt-sklearn are Python packages which include distributed hyperparameter optimization based on Tree of Parzen Estimators.
  • pycma is a Python implementation of Covariance Matrix Adaptation Evolution Strategy.
  • SUMO-Toolbox is a MATLAB toolbox for surrogate modeling supporting a wide collection of hyperparameter optimization algorithms for many model types.
  • rbfopt is a Python package that uses a radial basis function surrogate model.
  • Harmonica is a Python package for spectral hyperparameter optimization.

Multiple

  • mlr is an R package that contains a large number of different hyperparameter optimization techniques.

See also

  • Automated machine learning (AutoML)
  • Bias-variance dilemma
  • Dimensionality reduction
  • Feature selection
  • Meta-optimization
  • Model selection
  • Self-tuning
