Next: lm_feasible, Up: Scalar optimization [Index]
Frontend for nonlinear minimization of a scalar objective function.
The functions supplied by the user have a minimal interface; any additionally needed constants can be supplied by wrapping the user functions into anonymous functions.
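For instance, two constants of a Rosenbrock-type objective can be bound like this (a sketch; the names rosen, a, and b are made up for illustration):

```octave
a = 1;  b = 100;                          % problem constants, not optimized
rosen = @(p, a, b) (a - p(1))^2 + b * (p(2) - p(1)^2)^2;
f = @(p) rosen (p, a, b);                 % wrapped: matches the minimal interface f(p)
f ([1; 1])                                % value at the minimum is 0
```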
The following description applies to usage with vector-based parameter handling. Differences in usage for structure-based parameter handling will be explained separately.
f: objective function. It is called with a column vector of real parameters as its argument. During gradient determination, the function may be called with an informational second argument (if it accepts one), whose content depends on the function for gradient determination.
pin: real column vector of initial parameters.
settings: structure whose fields stand for the optional settings referred to below. The fields can be set by optimset().
The returned values are the column vector of final parameters p, the final value of the objective function objf, an integer cvg indicating if and how optimization succeeded or failed, and a structure outp with additional information, currently with the possible fields: niter, the number of iterations; nobjf, the number of objective function calls (indirect calls by a gradient function are not counted); lambda, the lambda of constraints at the result; and user_interaction, information on user stops (see settings). The backend may define additional fields. cvg is greater than zero for success and less than or equal to zero for failure; its possible values depend on the used backend and currently can be 0 (maximum number of iterations exceeded), 1 (success without further specification of criteria), 2 (parameter change less than specified precision in two consecutive iterations), 3 (improvement in objective function less than specified), -1 (algorithm aborted by a user function), or -4 (algorithm got stuck).
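Putting the interface together, a minimal call could look as follows (a sketch assuming the frontend described here is nonlin_min from the optim package; the quadratic objective is only illustrative):

```octave
pkg load optim                        % provides the minimization frontend
f = @(p) sumsq (p - [1; 2]);          % simple quadratic objective
pin = [0; 0];                         % initial column vector of parameters
[p, objf, cvg, outp] = nonlin_min (f, pin);
if (cvg > 0)
  printf ("converged after %i iterations (%i objective calls)\n",
          outp.niter, outp.nobjf);
endif
```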
The fields of the settings structure can be set with (octave)optimset.
For settings common to all frontends (including those for statistics), see Common frontend options.
For additional settings common to all optimization frontends see Common optimization options.
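As an example of passing settings, a lower bound (a common optimization option) can be set with optimset (a sketch; the frontend name nonlin_min and the made-up objective are assumptions):

```octave
pkg load optim
settings = optimset ("lbound", [0; 0]);      % keep both parameters non-negative
f = @(p) sumsq (p - [-1; 2]);                % unconstrained minimum at [-1; 2]
[p, objf, cvg] = nonlin_min (f, [1; 1], settings);
% with the bound active, the first parameter ends at 0 instead of -1
```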
Algorithm: "lm_feasible"
objf_grad
Function computing the gradient of the objective function with respect to the parameters. Default: real finite differences. Will be called with the column vector of parameters and, if it accepts it, an informational structure as arguments. If objf_grad was specified by the user, the informational structure has the fields f: value of the objective function for the current parameters; fixed: logical vector indicating which parameters are not optimized, so that these partial derivatives need not be computed and can be set to zero; diffp, diff_onesided, lbound, ubound: identical to the user settings of these names; and plabels: 1-dimensional cell-array of column-cell-arrays, each column with labels for all parameters. The first column contains the numerical indices of the parameters; the second and third columns, present for structure-based parameter handling (see Parameter structures), contain the names of the parameters and the subindices of the parameters (see Non-scalar parameters), respectively. The default gradient function will call the objective function with the second argument set with fields f: as the f passed to the gradient function; plabels: cell-array of 1x1 cell-arrays with the entries of the column-cell-arrays of plabels, as passed to the gradient function, corresponding to the current parameter; side: 0 for a one-sided interval, 1 or 2, respectively, for the sides of a two-sided interval; and parallel: logical scalar indicating parallel computation of partial derivatives. This information can be useful if the objective function can omit some computations depending on the currently computed partial derivative.
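For example, a user gradient accepting the informational second argument might skip fixed parameters like this (a sketch; the constant c and the function name grad are made up):

```octave
c = [1; 2; 3];                        % constant of the illustrative objective
f = @(p) sumsq (p - c);               % f(p) = sum ((p - c).^2)
function g = grad (p, info, c)
  g = 2 * (p - c);                    % analytic gradient
  if (nargin > 1 && isfield (info, "fixed"))
    g(info.fixed) = 0;                % fixed parameters: derivative not needed
  endif
endfunction
% pass it via the setting: objf_grad = @(p, info) grad (p, info, c)
```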
objf_hessian
Function computing the Hessian of the objective function with respect to the parameters. The default is backend-specific. Will be called with the column vector of parameters as argument.
inverse_hessian
Logical scalar, indicating whether the Hessian function passed by the user actually returns the inverse of the Hessian.
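For a quadratic objective, for instance, the Hessian is constant and its inverse can be supplied directly (a sketch; the matrix A and the objective are made up, and whether the Hessian is actually used depends on the backend):

```octave
pkg load optim
A = diag ([2, 8]);                                  % f(p) = p' * A * p / 2
f = @(p) 0.5 * p' * A * p;
settings = optimset ("objf_hessian", @(p) inv (A),  % handle returns the inverse,
                     "inverse_hessian", true);      % as declared by this flag
```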
complex_step_derivative_objf
Logical scalar, default: false. Estimate the gradient of the objective function with a complex step derivative approximation. Use this only if you know that your objective function is suitable for it. No user function for the gradient (objf_grad) may be specified together with this option.
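The complex step derivative itself can be illustrated independently of the frontend (a sketch; f is an arbitrary analytic example, which is what "suitable" means here):

```octave
f = @(x) x .^ 3 + exp (x);            % analytic example function
h = 1e-20;                            % step far below floating point noise
x = 2;
g = imag (f (x + h * 1i)) / h;        % complex step approximation of f'(x)
% compare with the exact derivative 3 * x^2 + exp (x)
```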
save_state
String with the path to a file which will be created for periodic saving of the state of optimization. Useful for very long optimizations which might get interrupted. The format of the saved state is backend-specific. Currently, only the "siman" backend honours this option. Default: empty string, meaning no saving of state.
recover_state
String with the path to a file created due to the option save_state, which is used to recover a saved state before starting optimization. Default: empty string, meaning no recovering of state.
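Together, the two options could be used like this (a sketch; the "siman" backend is the one mentioned above, and the state-file path is made up):

```octave
pkg load optim
statefile = tempname ();                 % made-up path for the saved state
settings = optimset ("Algorithm", "siman",
                     "save_state", statefile);
% after an interruption, a later run could resume with:
settings = optimset (settings, "recover_state", statefile);
```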
Please see Parameter structures.
Please see Scalar optimization and choose a backend from the menu.