Classes: KeyedList

Variables:
    abs = absolute
    _epsilon = sqrt(scipy.finfo(scipy.float_).eps)
Imports: absolute, sqrt, asarray, zeros, mat, transpose, ones, dot, sum, scipy, copy, SloppyCell, KeyedList_mod

Centred difference formula to approximate the Jacobian, given the residual function.
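For illustration, a minimal sketch of such a centred-difference approximation, assuming the residual function takes a parameter array and returns a 1-D array of residuals; the function name and default step size below are illustrative, not necessarily those used in this module.

    import numpy as np

    def approx_jacobian_centred(f, params, eps=1e-6):
        """Approximate J[i, j] = d(residual_i)/d(parameter_j) by centred differences."""
        params = np.asarray(params, dtype=float)
        r0 = np.asarray(f(params))
        J = np.zeros((len(r0), len(params)))
        for j in range(len(params)):
            step = np.zeros_like(params)
            step[j] = eps
            # (f(p + h*e_j) - f(p - h*e_j)) / (2*h) approximates column j of J.
            J[:, j] = (np.asarray(f(params + step)) - np.asarray(f(params - step))) / (2 * eps)
        return J
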
Minimizer for a nonlinear least squares problem. Allowed to
have more residuals than parameters or vice versa.
f : residual function (function of parameters)
fprime : derivative of residual function with respect to parameters.
Should return a matrix (J) with dimensions number of residuals
by number of parameters.
x0 : initial parameter set
avegtol : convergence tolerance on the gradient vector
epsilon : size of steps to use for finite differencing of f (if fprime
not passed in)
maxiter : maximum number of iterations
full_output : 0 to get only the minimum set of parameters back
1 if you also want the best parameter set, the
lowest value of f, the number of function calls,
the number of gradient calls, the convergence flag,
the last Marquardt parameter used (lambda), and the
last evaluation of fprime (J matrix)
disp : 0 for no display, 1 to give cost at each iteration and convergence
conditions at the end
retall : 0 for nothing extra to be returned, 1 for all the parameter
sets during the optimization to be returned
lambdainit : initial value of the Marquardt parameter to use (useful if
continuing from an old optimization run)
jinit : initial evaluation of the residual sensitivity matrix (J).
trustradius : set this to the maximum move you want to allow in a single
parameter direction.
If you are using log parameters, then setting this
to 1.0, for example, corresponds to a multiplicative
change of exp(1) = 2.718
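To make the argument list above concrete, a hypothetical usage sketch for a toy exponential-decay fit follows. The name fmin_lm is only a stand-in for the minimizer documented here, and the call itself is left commented out because the exact signature and argument order should be checked against the module source.

    import numpy as np

    # Toy problem: fit y = a * exp(-b * t) to synthetic data.
    t = np.linspace(0.0, 1.0, 20)
    data = 2.0 * np.exp(-3.0 * t)

    def residuals(p):
        a, b = p
        return a * np.exp(-b * t) - data

    def jacobian(p):
        a, b = p
        # J has shape (number of residuals, number of parameters).
        return np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])

    # Hypothetical call, following the parameters documented above:
    # best_params = fmin_lm(residuals, [1.0, 1.0], fprime=jacobian,
    #                       maxiter=100, disp=1, trustradius=1.0)
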
Minimizer for a nonlinear least squares problem. Allowed to
have more residuals than parameters or vice versa.
fcost : the cost function (*not* the residual function)
fjtj : this function must return an ordered pair: the first entry
is the gradient of the cost and the second entry is the Levenberg-
Marquardt (LM) approximation to the Hessian of the cost function.
NOTE: If the cost function = 1/2 * sum(residuals**2), then
the LM approximation is the matrix product J^T J,
where J = derivative of the residual function with respect to the parameters.
However, if cost = k*sum(residuals**2) for some constant k, then
the LM approximation is 2*k*J^T J, so beware of this factor!
x0 : initial parameter set
avegtol : convergence tolerance on the gradient vector
epsilon : size of steps to use for finite differencing of the cost (if
derivative information is not supplied)
maxiter : maximum number of iterations
full_output : 0 to get only the minimum set of parameters back
1 if you also want the best parameter set, the
lowest value of f, the number of function calls,
the number of gradient calls, the convergence flag,
the last Marquardt parameter used (lambda), and the
last evaluation of the derivative information
disp : 0 for no display, 1 to give cost at each iteration and convergence
conditions at the end
retall : 0 for nothing extra to be returned, 1 for all the parameter
sets during the optimization to be returned
trustradius : set this to the maximum move you want to allow in a single
parameter direction.
If you are using log parameters, then setting this
to 1.0, for example, corresponds to a multiplicative
change of exp(1) = 2.718
This version requires fjtj to pass back an ordered pair with
a gradient evaluation of the cost and JtJ, but not a function for J.
This is important in problems where there are many residuals and J is too
cumbersome to compute and pass around, but JtJ is a lot "slimmer".
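As an illustration of the fcost/fjtj pairing described above, assuming cost = 1/2 * sum(residuals**2) so that the LM approximation is exactly J^T J, the pair can be built by accumulating J^T r and J^T J one residual at a time, which is why the full J never needs to be stored. The helper below and its argument are illustrative names, not part of this module.

    import numpy as np

    def make_fcost_fjtj(residual_terms):
        """Build (fcost, fjtj) for cost = 1/2 * sum(r_i**2).

        residual_terms(p) should yield (r_i, grad_r_i) pairs, one per residual,
        where grad_r_i is the gradient of residual i with respect to the parameters.
        """
        def fcost(p):
            return 0.5 * sum(r * r for r, _ in residual_terms(p))

        def fjtj(p):
            n = len(np.asarray(p))
            grad = np.zeros(n)
            jtj = np.zeros((n, n))
            for r, g in residual_terms(p):
                g = np.asarray(g, dtype=float)
                grad += r * g          # accumulates J^T r
                jtj += np.outer(g, g)  # rank-one update toward J^T J
            return grad, jtj

        return fcost, fjtj

Per the note above, a cost scaled by k rather than 1/2 would multiply both returned quantities by 2*k.
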
Minimizer for a nonlinear least squares problem. Allowed to
have more residuals than parameters or vice versa.
f : residual function (function of parameters)
fprime : derivative of residual function with respect to parameters.
Should return a matrix (J) with dimensions number of residuals
by number of parameters.
x0 : initial parameter set
avegtol : convergence tolerance on the gradient vector
epsilon : size of steps to use for finite differencing of f (if fprime
not passed in)
maxiter : maximum number of iterations
full_output : 0 to get only the minimum set of parameters back
1 if you also want the best parameter set, the
lowest value of f, the number of function calls,
the number of gradient calls, the convergence flag,
the last Marquardt parameter used (lambda), and the
last evaluation of fprime (J matrix)
disp : 0 for no display, 1 to give cost at each iteration and convergence
conditions at the end
retall : 0 for nothing extra to be returned, 1 for all the parameter
sets during the optimization to be returned
trustradius : set this to the maximum length of move you want.
If you are using log parameters, then setting this
to 1.0, for example, corresponds to a multiplicative
change of exp(1) = 2.718 if the move is along a single
parameter direction
This version is scale invariant. This means that under a change of
scale of the parameters the direction the optimizer chooses to move
in does not change. To achieve this, we don't use a Marquardt
parameter to impose a trust region but rather take the infinite trust
region step and just cut it back to the length given in the variable
trustradius.
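A rough sketch of the cut-back strategy described above, assuming the unconstrained (Gauss-Newton) step is obtained by solving J^T J * dp = -grad and then shortened to trustradius; this illustrates the idea rather than reproducing the module's exact implementation.

    import numpy as np

    def cut_back_step(jtj, grad, trustradius):
        """Take the unconstrained Gauss-Newton step and cut it back to trustradius.

        jtj  : J^T J matrix (the LM approximation to the Hessian)
        grad : gradient of the cost, J^T r
        """
        # Full step solves (J^T J) dp = -grad; lstsq tolerates a (near-)singular J^T J.
        dp, *_ = np.linalg.lstsq(jtj, -grad, rcond=None)
        length = np.linalg.norm(dp)
        if length > trustradius:
            # Only the length is reduced; the direction is kept, which is what
            # preserves the scale invariance described above.
            dp *= trustradius / length
        return dp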