This library is a C port of the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) implementation written by Jorge Nocedal. The original FORTRAN source code is available at: http://www.ece.northwestern.edu/~nocedal/lbfgs.html

The L-BFGS method solves the unconstrained minimization problem,

minimize F(x), x = (x1, x2, ..., xN),

given that the objective function F(x) and its gradient G(x) are computable. The well-known Newton's method requires computing the inverse of the Hessian matrix of the objective function. However, computing the inverse Hessian is expensive, especially when the objective function has a large number of variables. The L-BFGS method instead iteratively finds a minimizer by approximating the inverse Hessian matrix using information from the last m iterations. This saves both memory and computation time drastically for large-scale problems.

Among the various ports of L-BFGS, this library provides several features:

- **Optimization with L1-norm (Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method)**: In addition to standard minimization problems, the library can minimize a function F(x) combined with the L1-norm |x| of the variables, {F(x) + C |x|}, where C is a constant scalar parameter. This feature is useful for estimating parameters of sparse log-linear models (e.g., logistic regression and maximum entropy) with L1-regularization (or a Laplacian prior).
- **Clean C code**: Unlike C code generated automatically by f2c (a Fortran 77 to C converter), this port includes changes based on my interpretations, improvements, optimizations, and clean-ups so that the ported code is well-suited to C. In addition to the comments inherited from the original code, a number of comments were added based on my interpretations.
- **Callback interface**: The library receives function and gradient values via a callback interface. The library also reports the progress of the optimization by invoking a callback function. In the original implementation, a user had to set function and gradient values every time the routine returned in order to obtain updated values.
- **Thread safety**: The library is thread-safe, a secondary benefit of the callback interface.
- **Cross platform**: The source code can be compiled on Microsoft Visual Studio 2010, GNU C Compiler (gcc), etc.
- **Configurable precision**: A user can choose single-precision (float) or double-precision (double) accuracy by changing the LBFGS_FLOAT macro.
- **SSE/SSE2 optimization**: This library includes SSE/SSE2 optimizations (written with compiler intrinsics) for vector arithmetic operations on Intel/AMD processors. The library uses SSE for float values and SSE2 for double values. The SSE/SSE2 optimization routines are disabled by default.

This library is used by:

- CRFsuite: A fast implementation of Conditional Random Fields (CRFs)
- Classias: A collection of machine-learning algorithms for classification
- mlegp: an R package for maximum likelihood estimates for Gaussian processes
- imaging2: the imaging2 class library
- Algorithm::LBFGS - Perl extension for L-BFGS
- YAP-LBFGS (an interface to call libLBFGS from YAP Prolog)

libLBFGS is distributed under the terms of the MIT license.

- Version 1.10 (2010-12-22):
- Fixed compiling errors on Mac OS X; this patch was kindly submitted by Nic Schraudolph.
- Reduced compiling warnings on Mac OS X; this patch was kindly submitted by Tamas Nepusz.
- Replaced memalign() with posix_memalign().
- Updated solution and project files for Microsoft Visual Studio 2010.

- Version 1.9 (2010-01-29):
- Fixed a mistake in checking the validity of the parameters "ftol" and "wolfe"; this was discovered by Kevin S. Van Horn.

- Version 1.8 (2009-07-13):
- Accepted the patch submitted by Takashi Imamichi; the backtracking method now has three criteria for choosing the step length:
- LBFGS_LINESEARCH_BACKTRACKING_ARMIJO: sufficient decrease (Armijo) condition only
- LBFGS_LINESEARCH_BACKTRACKING_WOLFE: regular Wolfe condition (sufficient decrease condition + curvature condition)
- LBFGS_LINESEARCH_BACKTRACKING_STRONG_WOLFE: strong Wolfe condition

- Updated the documentation to explain the above three criteria.

- Version 1.7 (2009-02-28):
- Improved OWL-QN routines for stability.
- Removed support for the OWL-QN method in the MoreThuente algorithm because it accidentally fails in early stages of iterations for some objectives. Because of this change, **the OWL-QN method must be used with the backtracking algorithm (LBFGS_LINESEARCH_BACKTRACKING)**, or the library returns LBFGSERR_INVALID_LINESEARCH.
- Renamed the line search algorithms as follows:
- LBFGS_LINESEARCH_BACKTRACKING: regular Wolfe condition.
- LBFGS_LINESEARCH_BACKTRACKING_LOOSE: regular Wolfe condition.
- LBFGS_LINESEARCH_BACKTRACKING_STRONG: strong Wolfe condition.

- Source code clean-up.

- Version 1.6 (2008-11-02):
- Improved line-search algorithm with strong Wolfe condition, contributed by Takashi Imamichi. This routine is now the default for LBFGS_LINESEARCH_BACKTRACKING. The previous line search algorithm with the regular Wolfe condition is still available as LBFGS_LINESEARCH_BACKTRACKING_LOOSE.
- Configurable stopping index for the L1-norm computation. A member variable lbfgs_parameter_t::orthantwise_end was added to specify the index at which the library stops computing the L1 norm of the variables. This is useful to prevent some variables from being regularized by the OWL-QN method.
- A sample program written in C++ (sample/sample.cpp).

- Version 1.5 (2008-07-10):
- Configurable starting index for L1-norm computation. A member variable lbfgs_parameter_t::orthantwise_start was added to specify the index number from which the library computes the L1 norm of the variables. This is useful to prevent some variables from being regularized by the OWL-QN method.
- Fixed a zero-division error when the initial variables are already a minimizer (reported by Takashi Imamichi). In this case, the library returns the LBFGS_ALREADY_MINIMIZED status code.
- Defined LBFGS_SUCCESS status code as zero; removed unused constants, LBFGSFALSE and LBFGSTRUE.
- Fixed a compile error in an implicit down-cast.

- Version 1.4 (2008-04-25):
- Configurable line search algorithms. A member variable lbfgs_parameter_t::linesearch was added to choose either MoreThuente method (LBFGS_LINESEARCH_MORETHUENTE) or backtracking algorithm (LBFGS_LINESEARCH_BACKTRACKING).
- Fixed a bug: the previous version did not compute pseudo-gradients properly in the line search routines for OWL-QN. This bug could terminate the iteration process too early when the OWL-QN routine was activated (0 < lbfgs_parameter_t::orthantwise_c).
- Configure script for POSIX environments.
- SSE/SSE2 optimizations with GCC.
- New functions lbfgs_malloc and lbfgs_free to use SSE/SSE2 routines transparently. It is unnecessary to use these functions for libLBFGS built without SSE/SSE2 routines; you can still use any memory allocator if SSE/SSE2 routines are disabled in libLBFGS.

- Version 1.3 (2007-12-16):
- An API change. An argument was added to the lbfgs() function to receive the final value of the objective function. This argument can be set to `NULL` if the final value is unnecessary.
- Fixed a null-pointer bug in the sample code (reported by Takashi Imamichi).
- Added build scripts for Microsoft Visual Studio 2005 and GCC.
- Added README file.
- Version 1.2 (2007-12-13):
- Fixed a serious bug in orthant-wise L-BFGS. An important variable was used without initialization.

- Version 1.1 (2007-12-01):
- Implemented orthant-wise L-BFGS.
- Implemented lbfgs_parameter_init() function.
- Fixed several bugs.
- API documentation.

- Version 1.0 (2007-09-20):
- Initial release.

The following sample program minimizes a variant of the Rosenbrock function:

```c
#include <stdio.h>
#include <lbfgs.h>

static lbfgsfloatval_t evaluate(
    void *instance,
    const lbfgsfloatval_t *x,
    lbfgsfloatval_t *g,
    const int n,
    const lbfgsfloatval_t step
    )
{
    int i;
    lbfgsfloatval_t fx = 0.0;

    for (i = 0; i < n; i += 2) {
        lbfgsfloatval_t t1 = 1.0 - x[i];
        lbfgsfloatval_t t2 = 10.0 * (x[i+1] - x[i] * x[i]);
        g[i+1] = 20.0 * t2;
        g[i] = -2.0 * (x[i] * g[i+1] + t1);
        fx += t1 * t1 + t2 * t2;
    }
    return fx;
}

static int progress(
    void *instance,
    const lbfgsfloatval_t *x,
    const lbfgsfloatval_t *g,
    const lbfgsfloatval_t fx,
    const lbfgsfloatval_t xnorm,
    const lbfgsfloatval_t gnorm,
    const lbfgsfloatval_t step,
    int n,
    int k,
    int ls
    )
{
    printf("Iteration %d:\n", k);
    printf("  fx = %f, x[0] = %f, x[1] = %f\n", fx, x[0], x[1]);
    printf("  xnorm = %f, gnorm = %f, step = %f\n", xnorm, gnorm, step);
    printf("\n");
    return 0;
}

#define N 100

int main(int argc, char *argv[])
{
    int i, ret = 0;
    lbfgsfloatval_t fx;
    lbfgsfloatval_t *x = lbfgs_malloc(N);
    lbfgs_parameter_t param;

    if (x == NULL) {
        printf("ERROR: Failed to allocate a memory block for variables.\n");
        return 1;
    }

    /* Initialize the variables. */
    for (i = 0; i < N; i += 2) {
        x[i] = -1.2;
        x[i+1] = 1.0;
    }

    /* Initialize the parameters for the L-BFGS optimization. */
    lbfgs_parameter_init(&param);
    /*param.linesearch = LBFGS_LINESEARCH_BACKTRACKING;*/

    /*
        Start the L-BFGS optimization; this will invoke the callback
        functions evaluate() and progress() when necessary.
     */
    ret = lbfgs(N, x, &fx, evaluate, progress, NULL, &param);

    /* Report the result. */
    printf("L-BFGS optimization terminated with status code = %d\n", ret);
    printf("  fx = %f, x[0] = %f, x[1] = %f\n", fx, x[0], x[1]);

    lbfgs_free(x);
    return 0;
}
```

The L-BFGS algorithm is described in:

- Jorge Nocedal. Updating Quasi-Newton Matrices with Limited Storage. *Mathematics of Computation*, Vol. 35, No. 151, pp. 773-782, 1980.
- Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. *Mathematical Programming B*, Vol. 45, No. 3, pp. 503-528, 1989.

The line search algorithms used in this implementation are described in:

- John E. Dennis and Robert B. Schnabel. *Numerical Methods for Unconstrained Optimization and Nonlinear Equations*, Englewood Cliffs, 1983.
- Jorge J. More and David J. Thuente. Line search algorithm with guaranteed sufficient decrease. *ACM Transactions on Mathematical Software (TOMS)*, Vol. 20, No. 3, pp. 286-307, 1994.

This library also implements the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method presented in:

- Galen Andrew and Jianfeng Gao. Scalable training of L1-regularized log-linear models. In *Proceedings of the 24th International Conference on Machine Learning (ICML 2007)*, pp. 33-40, 2007.

Special thanks go to:

- Yoshimasa Tsuruoka and Daisuke Okanohara for technical information about OWL-QN
- Takashi Imamichi for the useful enhancements of the backtracking method
- Kevin S. Van Horn, Nic Schraudolph, and Tamas Nepusz for bug fixes

Finally, I would like to thank the original author, Jorge Nocedal, who has been distributing his efficient and well-documented implementation under an open-source license.

- L-BFGS by Jorge Nocedal.
- Orthant-Wise Limited-memory Quasi-Newton Optimizer for L1-regularized Objectives by Galen Andrew.
- C port (via f2c) by Taku Kudo.
- C#/C++/Delphi/VisualBasic6 port in ALGLIB.
- Computational Crystallography Toolbox includes scitbx::lbfgs.

Copyright (c) 2002-2010 by Naoaki Okazaki

*Tue Dec 21 01:26:48 2010*