SG++
sgpp::optimization::optimizer Namespace Reference

## Classes

class  AdaptiveNewton
Newton method with adaptive step size. More...

class  AugmentedLagrangian
Augmented Lagrangian method for constrained optimization. More...

class  BFGS
BFGS method for unconstrained optimization. More...

class  CMAES

class  ConstrainedOptimizer
Abstract class for solving constrained optimization problems. More...

class  DifferentialEvolution

class  GradientDescent
Gradient-based method of steepest descent. More...

class  LeastSquaresOptimizer
Abstract class for solving non-linear least squares problems. More...

class  LevenbergMarquardt
Levenberg-Marquardt algorithm for least squares optimization. More...

class  LogBarrier
Log Barrier method for constrained optimization. More...

class  MultiStart
Meta optimization algorithm calling local algorithm multiple times. More...

class  Newton

class  NLCG

class  Rprop
Rprop method for unconstrained optimization. More...

class  SquaredPenalty
Squared Penalty method for constrained optimization. More...

class  UnconstrainedOptimizer
Abstract class for optimizing objective functions. More...

## Functions

bool lineSearchArmijo (ScalarFunction &f, double beta, double gamma, double tol, double eps, const base::DataVector &x, double fx, base::DataVector &gradFx, const base::DataVector &s, base::DataVector &y, size_t &evalCounter)
Line search (1D optimization on a line) with Armijo's rule used in gradient-based optimization. More...

## Detailed Description

See Module sgpp::optimization for more details.

## Function Documentation

bool sgpp::optimization::optimizer::lineSearchArmijo (ScalarFunction &f, double beta, double gamma, double tol, double eps, const base::DataVector &x, double fx, base::DataVector &gradFx, const base::DataVector &s, base::DataVector &y, size_t &evalCounter)  [inline]

Line search (1D optimization on a line) with Armijo's rule used in gradient-based optimization.

Armijo's rule calculates $\sigma = \beta^k$ for $k = 0, 1, \dotsc$ for a fixed $\beta \in (0, 1)$ and checks whether $\vec{y} = \vec{x} + \sigma\vec{s}$ lies in $[0, 1]^d$ and whether the objective function value improvement satisfies $f(\vec{x}) - f(\vec{y}) \ge \gamma\sigma (-\nabla f(\vec{x}) \cdot \vec{s})$ for a fixed $\gamma \in (0, 1)$.

The return value indicates whether the relative improvement (measured against the two tolerances) is large enough for the optimization algorithm to continue.

Parameters

- `f`: objective function
- `beta`: $\beta \in (0, 1)$
- `gamma`: $\gamma \in (0, 1)$
- `tol`: tolerance 1 (positive)
- `eps`: tolerance 2 (positive)
- `x`: point at which to start the line search
- `fx`: objective function value at x
- `gradFx`: objective function gradient at x
- `s`: search direction (should be normalized)
- `y` [out]: new point; must have the same size as x before calling this function
- `evalCounter` [in,out]: increased by the number of evaluations of f during the search
Returns
whether the new point achieves an acceptable improvement
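The backtracking loop described above can be sketched in a few lines of plain C++. This is a minimal standalone illustration, not the SG++ implementation: it uses `std::vector`/`std::function` instead of `base::DataVector`/`ScalarFunction`, and it omits the clamp to $[0, 1]^d$ and the two-tolerance acceptance test of the real function.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Simplified Armijo backtracking line search (illustrative sketch).
// Tries sigma = beta^k for k = 0, 1, ... and accepts the first
// y = x + sigma * s that satisfies the sufficient-decrease condition
// f(x) - f(y) >= gamma * sigma * (-gradFx . s).
bool lineSearchArmijo(const std::function<double(const std::vector<double>&)>& f,
                      double beta, double gamma,
                      const std::vector<double>& x, double fx,
                      const std::vector<double>& gradFx,
                      const std::vector<double>& s,
                      std::vector<double>& y, std::size_t& evalCounter,
                      std::size_t maxIt = 50) {
  const std::size_t d = x.size();

  // Directional derivative gradFx . s (negative for a descent direction).
  double gradDotS = 0.0;
  for (std::size_t t = 0; t < d; ++t) gradDotS += gradFx[t] * s[t];

  double sigma = 1.0;  // beta^0
  for (std::size_t k = 0; k < maxIt; ++k, sigma *= beta) {
    // Candidate point y = x + sigma * s.
    for (std::size_t t = 0; t < d; ++t) y[t] = x[t] + sigma * s[t];
    const double fy = f(y);
    ++evalCounter;

    // Armijo sufficient-decrease condition.
    if (fx - fy >= gamma * sigma * (-gradDotS)) return true;
  }

  return false;  // no acceptable step length found
}
```

For example, minimizing $f(x) = x^2$ from $x = 1$ with search direction $s = -1$ accepts the very first trial step $\sigma = 1$, landing exactly at the minimizer $y = 0$ after a single evaluation of $f$.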