lmfit - C/C++ library for Levenberg-Marquardt least-squares minimization and curve fitting
#include <lmmin.h>
void lmmin( int n_par, double *par, int m_dat, const void *data, void (*evaluate)(...), const lm_control_struct *control, lm_status_struct *status, void (*printout)(...) );
The user must supply the following callback routines:
void (*evaluate)( const double *par, int m_dat, const void *data, double *fvec, int *info );
void (*printout)( int n_par, const double *par, int m_dat, const void *data, const double *fvec, int printflags, int iflag, int iter, int nfev );
For the latter, a default implementation is available:
void lm_printout_std(...);
Control parameters must be given in the form of a record
typedef struct { double ftol; double xtol; double gtol; double epsilon; double stepbound; int maxcall; int printflags; } lm_control_struct;
Default values:
extern const lm_control_struct lm_control_float, lm_control_double;
Finally, the user must provide a record to receive status information
typedef struct { double fnorm; int nfev; int info; } lm_status_struct;
Status messages, indexed by status.info, are supplied in
extern const char *lm_infmsg[];
#include <lmcurve.h>
void lmcurve_fit( int n_par, double *par, int m_dat, const double *t, const double *y, double (*f)(...), const lm_control_struct *control, lm_status_struct *status);
The user must supply the function
double f( double t, const double *par );
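For orientation, here is a minimal, hypothetical usage sketch of lmcurve_fit; the straight-line model, the data arrays, and the starting values are invented for illustration and are not part of the distribution (the authoritative example is demo/curve1.c):

    #include <stdio.h>
    #include <lmmin.h>
    #include <lmcurve.h>

    /* model function: a straight line y = p0 + p1*t (chosen only for illustration) */
    double f( double t, const double *par )
    {
        return par[0] + par[1]*t;
    }

    int main( void )
    {
        /* made-up data set */
        double t[] = { 1, 2, 3, 4, 5 };
        double y[] = { 2.1, 3.9, 6.2, 7.8, 10.1 };
        int m_dat = 5;

        int n_par = 2;
        double par[2] = { 0, 1 };   /* starting guess */

        /* always initialize the control record from a provided default */
        lm_control_struct control = lm_control_double;
        lm_status_struct status;

        lmcurve_fit( n_par, par, m_dat, t, y, f, &control, &status );

        printf( "status: %s\n", lm_infmsg[status.info] );
        printf( "par: %g %g  norm: %g  evaluations: %d\n",
                par[0], par[1], status.fnorm, status.nfev );
        return 0;
    }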
With version 3.0, the application programming interface has changed substantially.
Generic minimization and one-dimensional curve fitting have been separated more clearly, and the interface for the latter has been greatly simplified (see lmcurve.h and demo/curve1.c). Furthermore, control (input) parameters and monitoring (output) variables are now stored in two different records. For better readability of the code, even the innermost function declarations of evaluate and printout have been changed.
Determine a parameter vector par (of dimension n_par) that minimizes the Euclidean norm of a vectorial function fvec(par) (of dimension m_dat >= n_par).
The most important application is curve fitting: to approximate data y(t) by a function f(t;par), minimize the norm of the residual vector fvec = y(t) - f(t;par).
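Written out, this means determining par such that the sum of squares

    ||fvec||^2 = sum over i=1..m_dat of ( y(t_i) - f(t_i;par) )^2

becomes minimal.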
The Levenberg-Marquardt minimization starts with a steepest-descent exploration of the parameter space, and achieves rapid convergence by crossing over into the Gauss-Newton method. For more background, see Moré 1978, Madsen et al. 2004, and comments in the source code.
The minimization routine lmmin requires the following arguments (a usage sketch follows this list):

n_par: dimension of the parameter vector par;

par: parameter vector. On input, it must contain a reasonable guess; on output, it contains the solution found to minimize ||fvec||;

m_dat: dimension of the residue vector fvec;

data: lmmin does not care about this pointer; it just forwards it to evaluate and printout;

evaluate: a routine that calculates the residue vector fvec for a given parameter vector par; setting *info to a negative value causes lmmin to terminate;

control: a record holding numeric limits and other auxiliary parameters; it should always be initialized from the provided default records to ensure upward compatibility; available default records: lm_control_float, lm_control_double (for fvec computed to single or double precision, respectively). Its fields are:
ftol: relative error desired in the sum of squares;
xtol: relative error between last two approximations;
gtol: orthogonality desired between fvec and its derivatives;
epsilon: step used to calculate the Jacobian;
stepbound: initial bound to steps in the outer loop;
maxcall: maximum number of iterations;
printflags: OR'ed bits to print (1) status, (2) parameters and norm, (4) residues at end of fit, (8) residues at every step;

status: a record describing the status of the minimization process. Its fields are:
fnorm: norm of the residue vector fvec;
nfev: actual number of iterations;
info: status of the minimization; for an explanation, print lm_infmsg[status.info];

printout: a routine that can be used to inform about the progress of the minimization (iflag: location of call within lmmin, iter: outer loop counter, nfev: number of calls to evaluate); if no monitoring is desired, lmmin may be called with printout=NULL or with control.printflags=0.
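The following hypothetical sketch shows how these arguments fit together for a one-dimensional curve-fitting problem solved through lmmin directly; the exponential model, the data set, and the data_struct wrapper are invented for illustration (see also the application samples referenced below):

    #include <stdio.h>
    #include <math.h>
    #include <lmmin.h>

    /* user data, forwarded untouched through the data pointer (illustrative layout) */
    typedef struct { const double *t; const double *y; } data_struct;

    /* model: exponential decay y = p0 * exp(-p1*t), made up for this sketch */
    static double model( double t, const double *par )
    {
        return par[0] * exp( -par[1] * t );
    }

    /* residue vector fvec = y(t) - f(t;par) */
    static void evaluate( const double *par, int m_dat, const void *data,
                          double *fvec, int *info )
    {
        const data_struct *D = (const data_struct*) data;
        int i;
        for ( i = 0; i < m_dat; ++i )
            fvec[i] = D->y[i] - model( D->t[i], par );
        (void) info;  /* set *info to a negative value to make lmmin terminate early */
    }

    int main( void )
    {
        double t[] = { 0, 1, 2, 3, 4 };
        double y[] = { 5.0, 3.1, 1.8, 1.1, 0.7 };
        data_struct data = { t, y };

        double par[2] = { 4, 0.5 };                 /* starting guess */
        lm_control_struct control = lm_control_double;
        lm_status_struct status;
        control.printflags = 3;  /* monitor status (1) and parameters and norm (2) */

        lmmin( 2, par, 5, &data, evaluate, &control, &status, lm_printout_std );

        printf( "%s\n", lm_infmsg[status.info] );
        printf( "par: %g %g  norm: %g\n", par[0], par[1], status.fnorm );
        return 0;
    }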
See the application samples demo/curve1.c (one-dimensional curve fitting) and demo/surface1.c (generic minimization).
Several application samples are provided; they also serve as a test suite to ascertain that the fit algorithm overcomes well-known numerical problems:
demo/morobropro.c: m=3, n=2, modified Rosenbrock problem, testing robustness for widely different vectorial components.
demo/powell.c: m=2, n=2, Powell 1970, with singular Jacobian at the solution par=0.
demo/hat.c: m=2, n=1, asymmetric Mexican hat function ||F(p)||. The fit result depends on the starting value - lmfit does not strive to overcome the limitation to local optimization.
lmfit is ready for use with C or C++ code. The implementation is self-contained; it does not require external libraries.
Main web site: http://joachimwuttke.de/lmfit/
Download location: http://joachimwuttke.de/src/
Installation follows the usual sequence (./configure; make; sudo make install). By default, installation goes to /usr/local. Make sure LD_LIBRARY_PATH contains /usr/local/lib. In some Linux distributions, it is necessary to run sudo ldconfig to register the library.
After installation, this documentation is available through man lmfit.
There is no mechanism to impose constraints within the Levenberg-Marquardt algorithm.
According to my experience, no such mechanism is needed. Constraints can be imposed by a variable transform or by adding a penalty to the sum of squares. The variable transform seems to be the better solution. For example, a parameter p0 can be restricted to non-negative values by fitting p0^2 instead of p0, and a parameter p1 can be restricted to the interval (-10,10) by fitting 10*tanh(p1) instead of p1.
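As a hypothetical illustration (the model and parameter names are made up): the minimizer varies unconstrained internal parameters q, and the constrained physical parameters are recovered inside the fit function:

    #include <math.h>

    /* Hypothetical fit function with built-in constraints;
       lmfit varies the unconstrained internal parameters q[0], q[1]. */
    double f( double t, const double *q )
    {
        double p0 = q[0] * q[0];         /* p0 = q0^2         =>  p0 >= 0       */
        double p1 = 10.0 * tanh( q[1] ); /* p1 = 10*tanh(q1)  =>  -10 < p1 < 10 */
        return p0 + p1 * t;              /* some model f(t;p0,p1), made up      */
    }

After the fit, the physical parameters are obtained by applying the same transform to the fitted values of q.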
If you think your problem cannot be handled in such a way, I would be interested to learn why. Please send me one data set (plain ASCII, two columns, blank separated) along with the fit function and a brief explanation of the application context.
The problem is only well posed if the covariance matrix of the input data is known. In this case, the error propagation towards the output parameters can be calculated in linear approximation (http://en.wikipedia.org/wiki/Linear_least_squares). Note that fit parameters are correlated with each other even if the input covariance matrix is diagonal.
In linear approximation, the output covariance matrix depends mainly on the Jacobian of the fit function (evaluated for all data points) with respect to the fit parameters (at their optimum values). It does not seem advisable to use the Jacobian fjac that is calculated in the beginning of the main iteration in lm_lmdif(...), as it is only returned after some transformations.
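For orientation, the textbook formula for this linear approximation (not computed by lmfit) reads

    cov(par) ~= sigma^2 * (J^T J)^(-1),   with   sigma^2 = ||fvec||^2 / (m_dat - n_par),

where J is the m_dat x n_par Jacobian J_ij = d f(t_i;par) / d par_j evaluated at the fitted parameters, and the input data are assumed to carry independent errors of equal (unknown) variance; for a known, non-trivial input covariance matrix the weighted generalization applies.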
I would be glad to include code for the calculation of parameter covariances in this distribution; contributions would be highly welcome.
If fit results are robust, it does not matter by which implementation they have been obtained. If the results are not robust, they should not be published anyway. Therefore, in publishing fit results obtained with lmfit it is generally not necessary to cite the software.
However, in methodological publications that involve usage of lmfit, a citation might be in order. The preferred form is:
Joachim Wuttke: lmfit - a C/C++ routine for Levenberg-Marquardt minimization with wrapper for least-squares curve fitting, based on work by B. S. Garbow, K. E. Hillstrom, J. J. Moré, and S. Moshier. Version <..>, retrieved on <..> from http://joachimwuttke.de/lmfit/.
K Levenberg: A method for the solution of certain nonlinear problems in least squares. Quart. Appl. Math. 2, 164-168 (1944).
D W Marquardt: An algorithm for least squares estimation of nonlinear parameters. SIAM J. Appl. Math. 11, 431-441 (1963).
J J Moré: The Levenberg-Marquardt algorithm: Implementation and theory. Lect. Notes Math. 630, 105-116 (1978).
K Madsen, H B Nielsen, O Tingleff: Methods for non-linear least squares problems. http://www.imm.dtu.dk/pubdb/views/edoc_download.php/3215/pdf/imm3215.pdf (2004).
Joachim Wuttke <j.wuttke@fz-juelich.de>
Copyright (C) 2009-13 Joachim Wuttke.
Software: FreeBSD License
Documentation: Creative Commons Attribution Share Alike.