Next: samin, Previous: octave_sqp, Up: Scalar optimization [Index]
A simulated annealing (stochastic) optimizer that changes all parameters at once in a single step, making it suitable for non-bound constraints.
No gradient or Hessian of the objective function is used. The settings MaxIter, fract_prec, TolFun, TolX, and max_fract_change are not honoured.
Accepts the additional settings:

T_init: initial temperature, default 0.01.

T_min: final temperature, default 1.0e-5.

mu_T: factor of temperature decrease, default 1.005.

iters_fixed_T: iterations within one temperature step, default 10.

max_rand_step: column vector or structure-based configuration of maximum random steps for each parameter, default 0.005 * pin.

stoch_regain_constr: if true, regain constraints after a random step; otherwise take new random values until the constraints are met. Default false.

trace_steps: if true, set field trace of outp to a matrix with a row for each step; first column iteration number, second column repeat number within the iteration, third column value of the objective function, remaining columns parameter values. Default false.

siman_log: if true, set field log of outp to a matrix with a row for each iteration; first column temperature, second column value of the objective function, remaining columns the numbers of tries with decrease, with no decrease but accepted, and with no decrease and rejected. Default false.
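As a sketch of how these settings might be passed (this assumes the optim package's nonlin_min frontend and optimset-style option handling, with the backend selected by name; the objective function and starting values here are made up for illustration):

```
## Sketch: minimize a simple objective with the siman backend.
## Assumes the optim package is installed and loaded.
pkg load optim

f = @(p) sumsq (p - [1; 2]);   # toy objective function
pin = [0; 0];                  # initial parameter values

opts = optimset ("Algorithm", "siman", ...
                 "T_init", 0.01, ...
                 "T_min", 1.0e-5, ...
                 "mu_T", 1.005, ...
                 "iters_fixed_T", 10, ...
                 "max_rand_step", 0.1 * ones (2, 1), ...
                 "trace_steps", true);

[p, objf, cvg, outp] = nonlin_min (f, pin, opts);
```

With trace_steps set, outp.trace then holds one row per step as described above.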
Steps with an increase diff of the objective function are accepted if rand (1) < exp (- diff / T), where T is the temperature of the current iteration.
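The acceptance test itself can be written directly in Octave (a minimal illustration of the criterion above, not the package's internal code; the numbers are arbitrary):

```
## Metropolis-style acceptance test.
## For diff <= 0 (objective decreased), exp (- diff / T) >= 1,
## so the step is always accepted; for diff > 0 it is accepted
## with probability exp (- diff / T), which shrinks as T falls.
T = 0.01;      # temperature of the current iteration
diff = 0.002;  # increase of the objective function in this step
accepted = rand (1) < exp (- diff / T);
```

This is why early iterations (high T) wander freely while late iterations (low T) accept almost no uphill steps.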
If regaining of constraints fails, the optimization is aborted and the returned value of cvg is 0. Otherwise, cvg is 1. The returned structure outp, in addition to the possible fields trace and log described above, has the fields niter and user_interaction.
Interpretation of Display: if set to "iter", an informational line is printed after each iteration.
If parallel_local is equivalent to true, the objective function is evaluated for several parameter combinations in parallel. If parallel_local is set to an integer > 1, this is the maximal number of parallel processes; if it is <= 1, the maximal number is the number of available processor cores. The course of the optimization is not changed by parallelization, provided the random number generator starts from the same state. To achieve this, some of the parallel results are discarded, so the speedup is smaller if the rate of acceptance of results is high. Also, due to overhead, there is no speedup, but rather a slowdown, if the objective function is not computationally expensive enough.
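For example (again assuming optimset-style option handling; whether parallel evaluation pays off depends on the cost of the objective function, as noted above):

```
## Allow up to 4 parallel processes for objective evaluation.
opts = optimset ("Algorithm", "siman", "parallel_local", 4);

## Or let the backend use one process per available core:
opts = optimset ("Algorithm", "siman", "parallel_local", true);
```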
Honours the options save_state and recover_state, described for the frontend.