
Module lmopt


Classes
  KeyedList
Functions
 
save(obj, filename)
Save an object to a file
 
approx_fprime(xk, f, epsilon, *args)
 
approx_fprime1(xk, f, epsilon, *args)
Centred-difference formula to approximate fprime
 
approx_fprime2(xk, f, epsilon, *args)
Centred-difference formula to approximate the Jacobian, given the residual function
 
check_grad(func, grad, x0, *args)
 
approx_fhess_p(x0, p, fprime, epsilon, *args)
 
fmin_lm(f, x0, fprime=None, args=(), avegtol=1e-5, epsilon=_epsilon, maxiter=None, full_output=0, disp=1, retall=0, lambdainit=None, jinit=None, trustradius=1.0)
Minimizer for a nonlinear least squares problem.
 
fmin_lmNoJ(fcost, x0, fjtj, args=(), avegtol=1e-5, epsilon=_epsilon, maxiter=None, full_output=0, disp=1, retall=0, trustradius=1.0)
Minimizer for a nonlinear least squares problem.
 
solve_lmsys(Lambda, s, g, rhsvect, currentcost, n)
 
fmin_lm_scale(f, x0, fprime=None, args=(), avegtol=1e-5, epsilon=_epsilon, maxiter=None, full_output=0, disp=1, retall=0, trustradius=1.0)
Minimizer for a nonlinear least squares problem.
Variables
  abs = absolute
  _epsilon = sqrt(scipy.finfo(scipy.float_).eps)

Imports: absolute, sqrt, asarray, zeros, mat, transpose, ones, dot, sum, scipy, copy, SloppyCell, KeyedList_mod


Function Details

approx_fprime2(xk, f, epsilon, *args)

Centred-difference formula to approximate the Jacobian, given the residual
function.
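
As an illustration of what a centred-difference Jacobian approximation computes, here is a minimal standalone sketch; this is not the module's own code, and the helper name central_diff_jacobian and the toy model are hypothetical:

    import numpy as np

    def central_diff_jacobian(xk, f, epsilon, *args):
        # J[i, j] = d residual_i / d parameter_j, by centred differences.
        xk = np.asarray(xk, dtype=float)
        f0 = np.asarray(f(xk, *args), dtype=float)
        J = np.zeros((f0.size, xk.size))
        for j in range(xk.size):
            step = np.zeros_like(xk)
            step[j] = epsilon
            J[:, j] = (np.asarray(f(xk + step, *args), dtype=float)
                       - np.asarray(f(xk - step, *args), dtype=float)) / (2.0 * epsilon)
        return J

    # Toy check: residuals of a 2-parameter exponential model at three time points.
    t = np.array([0.0, 1.0, 2.0])
    y = np.array([2.0, 1.2, 0.7])
    resid = lambda p: p[0] * np.exp(-p[1] * t) - y
    J = central_diff_jacobian([1.0, 1.0], resid, 1e-6)   # J has shape (3, 2)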

fmin_lm(f, x0, fprime=None, args=(), avegtol=1e-5, epsilon=_epsilon, maxiter=None, full_output=0, disp=1, retall=0, lambdainit=None, jinit=None, trustradius=1.0)

Minimizer for a nonlinear least squares problem. Allowed to
have more residuals than parameters or vice versa.
f : residual function (function of parameters)
fprime : derivative of residual function with respect to parameters.
         Should return a matrix (J) with dimensions (number of residuals)
         by (number of parameters).
x0 : initial parameter set
avegtol : convergence tolerance on the gradient vector
epsilon : size of steps to use for finite differencing of f (if fprime
          not passed in)
maxiter : maximum number of iterations 
full_output : 0 to return only the best-fit parameter set;
              1 to also return the lowest value of f, the number of
              function calls, the number of gradient calls, the
              convergence flag, the last Marquardt parameter used
              (lambda), and the last evaluation of fprime (the J matrix)
disp : 0 for no display, 1 to give cost at each iteration and convergence
       conditions at the end
retall : 0 for nothing extra to be returned, 1 for all the parameter 
         sets during the optimization to be returned 
lambdainit : initial value of the Marquardt parameter to use (useful if
             continuing from an old optimization run)
jinit : initial evaluation of the residual sensitivity matrix (J).
trustradius : set this to the maximum move you want to allow in a single
              parameter direction. 
              If you are using log parameters, then setting this
              to 1.0, for example, corresponds to a multiplicative 
              change of exp(1) = 2.718
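
A hedged usage sketch for fmin_lm, assuming SloppyCell is installed and that a plain Python list is acceptable for x0; the toy data and model below are made up for illustration:

    import numpy as np
    from SloppyCell import lmopt

    # Toy data and model (illustrative only): fit y ~ a * exp(-b * t).
    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([2.00, 1.21, 0.74, 0.45, 0.27])

    def resid(p):
        a, b = p
        return a * np.exp(-b * t) - y          # one residual per data point

    x0 = [1.0, 1.0]
    # full_output=0 returns only the best-fit parameter set; pass
    # full_output=1 to also get the extra outputs listed above.
    popt = lmopt.fmin_lm(resid, x0, maxiter=100, disp=0)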

fmin_lmNoJ(fcost, x0, fjtj, args=(), avegtol=1e-5, epsilon=_epsilon, maxiter=None, full_output=0, disp=1, retall=0, trustradius=1.0)

Minimizer for a nonlinear least squares problem. Allowed to
have more residuals than parameters or vice versa.
fcost : the cost function (*not* the residual function)
fjtj : this function must return an ordered pair: the first entry
       is the gradient of the cost and the second entry is the
       Levenberg-Marquardt (LM) approximation to the Hessian of the
       cost function.
       NOTE: If the cost function = 1/2 * sum(residuals**2), then
       the LM approximation is the matrix product J^t J,
       where J = derivative of the residual function with respect to
       parameters. However, if cost = k*sum(residuals**2) for some
       constant k, then the LM approximation is 2*k*J^t J, so beware
       of this factor!
x0 : initial parameter set
avegtol : convergence tolerance on the gradient vector
epsilon : size of steps to use for finite differencing of f (if fprime
          not passed in)
maxiter : maximum number of iterations 
full_output : 0 to return only the best-fit parameter set;
              1 to also return the lowest value of f, the number of
              function calls, the number of gradient calls, the
              convergence flag, the last Marquardt parameter used
              (lambda), and the last evaluation of fprime (the J matrix)
disp : 0 for no display, 1 to give cost at each iteration and convergence
       conditions at the end
retall : 0 for nothing extra to be returned, 1 for all the parameter 
         sets during the optimization to be returned 
trustradius : set this to the maximum move you want to allow in a single
              parameter direction. 
              If you are using log parameters, then setting this
              to 1.0, for example, corresponds to a multiplicative 
              change of exp(1) = 2.718


This version requires fjtj to pass back an ordered pair containing
the gradient of the cost and JtJ, but not a function for J.
This is important in problems where there are many residuals and J is
too cumbersome to compute and pass around, but JtJ is much "slimmer".
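
Because the convention for fjtj matters (the gradient and J^t J must correspond to the 1/2 * sum(residuals**2) cost), here is a hedged sketch of how fcost and fjtj might be assembled from a residual function and its analytic Jacobian; the model, data, and helper names below are illustrative assumptions, not part of the module:

    import numpy as np
    from SloppyCell import lmopt

    # Illustrative residuals and analytic Jacobian for y ~ a * exp(-b * t).
    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([2.00, 1.21, 0.74, 0.45, 0.27])

    def resid(p):
        a, b = p
        return a * np.exp(-b * t) - y

    def jac(p):
        a, b = p
        e = np.exp(-b * t)
        return np.column_stack((e, -a * t * e))      # (n_residuals, n_params)

    def fcost(p):
        r = resid(p)
        return 0.5 * np.dot(r, r)                    # cost = 1/2 * sum(residuals**2)

    def fjtj(p):
        r = resid(p)
        J = jac(p)
        return np.dot(J.T, r), np.dot(J.T, J)        # (gradient of cost, J^t J)

    popt = lmopt.fmin_lmNoJ(fcost, [1.0, 1.0], fjtj, disp=0)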

fmin_lm_scale(f, x0, fprime=None, args=(), avegtol=1e-5, epsilon=_epsilon, maxiter=None, full_output=0, disp=1, retall=0, trustradius=1.0)


Minimizer for a nonlinear least squares problem. Allowed to
have more residuals than parameters or vice versa. 

f : residual function (function of parameters)
fprime : derivative of residual function with respect to parameters.
         Should return a matrix (J) with dimensions (number of residuals)
         by (number of parameters).
x0 : initial parameter set
avegtol : convergence tolerance on the gradient vector
epsilon : size of steps to use for finite differencing of f (if fprime
          not passed in)
maxiter : maximum number of iterations 
full_output : 0 to return only the best-fit parameter set;
              1 to also return the lowest value of f, the number of
              function calls, the number of gradient calls, the
              convergence flag, the last Marquardt parameter used
              (lambda), and the last evaluation of fprime (the J matrix)
disp : 0 for no display, 1 to give cost at each iteration and convergence
       conditions at the end
retall : 0 for nothing extra to be returned, 1 for all the parameter 
         sets during the optimization to be returned 
trustradius : set this to the maximum step length you want to allow.
              If you are using log parameters, then setting this
              to 1.0, for example, corresponds to a multiplicative 
              change of exp(1) = 2.718 if the move is along a single
              parameter direction

This version is scale invariant: under a change of scale of the
parameters, the direction the optimizer chooses to move in does not
change. To achieve this, we don't use a Marquardt parameter to impose
a trust region but rather take the full (infinite-trust-region) step
and simply cut it back to the length given by trustradius.
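
To make that step-length control concrete, the following sketch mirrors the idea described above (take the full least-squares step, then cut it back to trustradius); it is an illustrative assumption, not the module's actual implementation, and clipped_full_step is a hypothetical name:

    import numpy as np

    def clipped_full_step(J, r, trustradius):
        # Full (unconstrained) least-squares step: solve J dx ~= -r.
        dx = np.linalg.lstsq(np.asarray(J), -np.asarray(r), rcond=None)[0]
        length = np.linalg.norm(dx)
        if length > trustradius:
            dx = dx * (trustradius / length)     # cut the step back to trustradius
        return dx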