sfepy.solvers.optimize module

class sfepy.solvers.optimize.FMinSteepestDescent(conf, **kwargs)[source]

Steepest descent optimization solver.

Kind: ‘opt.fmin_sd’

For common configuration parameters, see Solver.

Specific configuration parameters:

Parameters:
i_max : int (default: 10)

The maximum number of iterations.

eps_rd : float (default: 1e-05)

The relative delta of the objective function.

eps_of : float (default: 0.0001)

The tolerance for the objective function.

eps_ofg : float (default: 1e-08)

The tolerance for the objective function gradient.

norm : numpy norm (default: inf)

The norm to be used.

ls : bool (default: True)

If True, use a line-search.

ls_method : {‘backtracking’, ‘full’} (default: ‘backtracking’)

The line-search method.

ls_on : float (default: 0.99999)

Start the backtracking line-search by reducing the step, if ||f(x^i)|| / ||f(x^{i-1})|| is larger than ls_on.

ls0 : 0.0 < float < 1.0 (default: 1.0)

The initial step.

ls_red : 0.0 < float < 1.0 (default: 0.5)

The step reduction factor in case of correct residual assembling.

ls_red_warp : 0.0 < float < 1.0 (default: 0.1)

The step reduction factor in case of failed residual assembling (e.g. the “warp violation” error caused by a negative volume element resulting from too large deformations).

ls_min : 0.0 < float < 1.0 (default: 1e-05)

The minimum step reduction factor.

check : 0, 1 or 2 (default: 0)

If >= 1, check the tangent matrix using finite differences. If 2, plot the resulting sparsity patterns.

delta : float (default: 1e-06)

If check >= 1, the finite difference matrix is taken as A_{ij} = \frac{f_i(x_j + \delta) - f_i(x_j - \delta)}{2 \delta}.

output : function

If given, use it instead of the output() function.

yscales : list of str (default: [‘linear’, ‘log’, ‘log’, ‘linear’])

The list of four convergence log subplot scales.

log : dict or None

If not None, log the convergence according to the configuration in the following form: {'text' : 'log.txt', 'plot' : 'log.pdf'}. Each of the dict items can be None.

name = ‘opt.fmin_sd’
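
The interaction of the parameters above can be illustrated with a minimal NumPy sketch of steepest descent with a backtracking line-search. This is a hypothetical helper, not the SfePy implementation; the argument names mirror the i_max, ls0, ls_red, ls_min and eps_ofg parameters, and fn_of/fn_ofg stand for the objective function and its gradient:

```python
import numpy as np

def fmin_sd_sketch(fn_of, fn_ofg, x0, i_max=10, ls0=1.0, ls_red=0.5,
                   ls_min=1e-5, eps_ofg=1e-8):
    """Steepest descent with a backtracking line-search (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    for it in range(i_max):
        ofg = fn_ofg(x)
        if np.linalg.norm(ofg, np.inf) < eps_ofg:  # small gradient -> stop
            break
        of = fn_of(x)
        alpha = ls0
        # Backtrack: shrink the step by ls_red until the objective decreases,
        # giving up once the step falls below ls_min.
        while alpha >= ls_min and fn_of(x - alpha * ofg) >= of:
            alpha *= ls_red
        x = x - alpha * ofg
    return x

# Usage: minimize f(x) = ||x||^2, whose gradient is 2x.
x_min = fmin_sd_sketch(lambda x: np.dot(x, x), lambda x: 2 * x,
                       [1.0, -2.0], i_max=50)
```

In the real solver, ls_red_warp plays the role of ls_red when the residual assembling fails (e.g. on warp violation), allowing a much more aggressive step reduction.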
class sfepy.solvers.optimize.ScipyFMinSolver(conf, **kwargs)[source]

Interface to SciPy optimization solvers scipy.optimize.fmin_*.

Kind: ‘nls.scipy_fmin_like’

For common configuration parameters, see Solver.

Specific configuration parameters:

Parameters:
method : {‘fmin’, ‘fmin_bfgs’, ‘fmin_cg’, ‘fmin_cobyla’, ‘fmin_l_bfgs_b’, ‘fmin_ncg’, ‘fmin_powell’, ‘fmin_slsqp’, ‘fmin_tnc’} (default: ‘fmin’)

The actual optimization method to use.

i_max : int (default: 10)

The maximum number of iterations.

* : *

Additional parameters supported by the method.

name = ‘nls.scipy_fmin_like’
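
In a problem description file, this solver is selected by its kind. A hedged configuration fragment (the method choice and i_max value here are arbitrary examples, not defaults):

```python
# Solver configuration as used in an SfePy problem description file.
# The 'method' and 'i_max' values are arbitrary examples.
solver_1 = {
    'name': 'opt',
    'kind': 'nls.scipy_fmin_like',
    'method': 'fmin_bfgs',  # one of the scipy.optimize.fmin_* methods above
    'i_max': 100,
}
```

Any additional keys in the dict are passed through to the chosen scipy.optimize method, per the `*` parameter above.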
set_method(conf)[source]
sfepy.solvers.optimize.check_gradient(xit, aofg, fn_of, delta, check)[source]
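
A minimal sketch of the central-difference comparison such a routine performs, following the A_{ij} formula given for the delta parameter above (illustrative only, not the actual SfePy code): each component of xit is perturbed by ±delta and the result is compared against the analytical gradient aofg:

```python
import numpy as np

def check_gradient_sketch(xit, aofg, fn_of, delta=1e-6):
    """Compare the analytical gradient `aofg` with central differences
    of the objective `fn_of` at `xit` (illustrative only)."""
    x = np.asarray(xit, dtype=float)
    fd = np.empty_like(x)
    for j in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[j] += delta
        xm[j] -= delta
        # Central difference: (f(x_j + delta) - f(x_j - delta)) / (2 delta).
        fd[j] = (fn_of(xp) - fn_of(xm)) / (2.0 * delta)
    return np.max(np.abs(fd - aofg))

# Usage: f(x) = ||x||^2 has the exact gradient 2x.
x = np.array([1.0, 2.0])
err = check_gradient_sketch(x, 2 * x, lambda v: np.dot(v, v))
```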
sfepy.solvers.optimize.conv_test(conf, it, of, of0, ofg_norm=None)[source]
Returns:
flag : int
  • -1 … continue
  • 0 … small OF -> stop
  • 1 … i_max reached -> stop
  • 2 … small OFG -> stop
  • 3 … small relative decrease of OF -> stop
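
The flag logic can be sketched as a plain function (hypothetical names and check ordering; eps_of, eps_ofg, eps_rd and i_max correspond to the solver parameters documented above):

```python
def conv_test_sketch(it, of, of0, ofg_norm=None, i_max=10,
                     eps_of=1e-4, eps_ofg=1e-8, eps_rd=1e-5):
    """Return the convergence flag described above (illustrative only)."""
    if abs(of) < eps_of:
        return 0   # small OF -> stop
    if ofg_norm is not None and ofg_norm < eps_ofg:
        return 2   # small OFG -> stop
    if it > 0 and abs(of - of0) < eps_rd * abs(of0):
        return 3   # small relative decrease of OF -> stop
    if it + 1 >= i_max:
        return 1   # i_max reached -> stop
    return -1      # continue
```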
sfepy.solvers.optimize.wrap_function(function, args)[source]
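
This helper presumably wraps an objective so that its evaluations can be counted and timed across the optimization loop. A plausible sketch of the idea (not the exact SfePy code):

```python
import time

def wrap_function_sketch(function, args):
    """Wrap `function` so that each call is counted and timed
    (illustrative only)."""
    ncalls = [0]
    times = []
    def function_wrapper(x):
        ncalls[0] += 1
        t0 = time.perf_counter()
        out = function(x, *args)
        times.append(time.perf_counter() - t0)
        return out
    return ncalls, times, function_wrapper

# Usage: count evaluations of a simple objective.
ncalls, times, fn = wrap_function_sketch(lambda x: x * x, ())
fn(2.0)
fn(3.0)
```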