Nonlinear conjugate gradient method to minimize the function f.
Arguments
- f : string : Name of the function to minimize. Returns a real value
- df : string : Name of f's derivative. Returns a (R*C) x 1 vector
- args: cell : Cell array of arguments passed to f.
- ctl : 5-vec : (Optional) Control variables, described below
Returned values
- x0 : matrix : Local minimum of f
- v : real : Value of f in x0
- nev : 1 x 2 : Number of evaluations of f and of df
Control Variables
- ctl(1) : 1 or 2 : Select the stopping criterion among:
- ctl(1)==0 : Default value
- ctl(1)==1 : Stopping criterion : Stop the search when the value of f no
longer improves, as tested by ctl(2) > Deltaf/max(|f(x)|,1)
where Deltaf is the decrease in f observed in the last iteration
(each iteration consists of R*C line searches).
- ctl(1)==2 : Stopping criterion : Stop the search when the updates are small,
as tested by ctl(2) > max { dx(i)/max(|x(i)|,1) | i in 1..N }
where dx is the change in x that occurred in the last iteration.
- ctl(2) : Threshold used in stopping tests. Default=10*eps
- ctl(2)==0 : Default value
- ctl(3) : Position of the minimized argument in args Default=1
- ctl(3)==0 : Default value
- ctl(4) : Maximum number of function evaluations Default=inf
- ctl(4)==0 : Default value
- ctl(5) : Type of optimization:
- ctl(5)==1 : "Fletcher-Reeves" method
- ctl(5)==2 : "Polak-Ribiere" (Default)
- ctl(5)==3 : "Hestenes-Stiefel" method
ctl may have fewer than 5 elements. Default values are used if ctl is not
passed, or for entries that are NaN (see the sketch below).
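For illustration, a minimal sketch of assembling a control vector; the numeric
values below are arbitrary choices, not recommendations:
ctl    = nan (1, 5);   # NaN entries fall back to the defaults
ctl(1) = 2;            # stop when the relative change in x is small
ctl(2) = 1e-8;         # threshold used by that stopping test
ctl(4) = 1e4;          # allow at most 10000 function evaluations
ctl(5) = 2;            # Polak-Ribiere update (the default, shown explicitly)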
Example:
function r=df( l ) b=[1;0;-1]; r = 2*(l{1} - b); endfunction    # gradient of ff
function r=ff( l ) b=[1;0;-1]; r = (l{1}-b)' * (l{1}-b); endfunction
ll = { [10; 2; 3] };
ctl(5) = 3;            # Hestenes-Stiefel update; ctl(1:4) stay 0, i.e. defaults
[x0,v,nev]=cg_min( "ff", "df", ll, ctl )
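A further sketch (the function names rss and drss and the data are made up for
illustration) showing how ctl(3) lets the minimized argument sit anywhere in
args; here the data t, y stay fixed and only the parameters p are optimized:
function r = rss (a)              # a = { t, y, p }: data t, y and parameters p
  t = a{1}; y = a{2}; p = a{3};
  e = y - (p(1) + p(2)*t);        # residuals of the line p(1) + p(2)*t
  r = e' * e;
endfunction
function r = drss (a)             # gradient of rss with respect to p (2 x 1)
  t = a{1}; y = a{2}; p = a{3};
  e = y - (p(1) + p(2)*t);
  r = -2 * [sum(e); sum(t .* e)];
endfunction
t = (0:9)'; y = 3 + 2*t;          # noiseless data from the line 3 + 2*t
ctl    = nan (1, 5);
ctl(3) = 3;                       # minimize over the third element of args
[p, v, nev] = cg_min ("rss", "drss", { t, y, [0; 0] }, ctl)
The returned p should end up close to [3; 2], the intercept and slope used to
generate the data.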
Comment: In general, the BFGS method seems to perform better in many cases, but it requires more computation per iteration.
See also http://en.wikipedia.org/wiki/Nonlinear_conjugate_gradient.
See also: bfgsmin.