lsqlin (Optimization Toolbox)

Solve the constrained linear least-squares problem

    min_x  (1/2)*||C*x - d||_2^2

    subject to:  A*x <= b,  Aeq*x = beq,  lb <= x <= ub

where C, A, and Aeq are matrices and d, b, beq, lb, ub, and x are vectors.



x = lsqlin(C,d,A,b) solves the linear system C*x=d in the least-squares sense subject to A*x<=b, where C is m-by-n.

x = lsqlin(C,d,A,b,Aeq,beq) solves the problem above while additionally satisfying the equality constraints Aeq*x = beq. Set A=[] and b=[] if no inequalities exist.

x = lsqlin(C,d,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables, x, so that the solution is always in the range lb <= x <= ub. Set Aeq=[] and beq=[] if no equalities exist.
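For illustration outside MATLAB, the bounds-only case has a direct analog in SciPy (an assumption of this sketch; scipy.optimize.lsq_linear is not part of the Optimization Toolbox, and the data below is made up):

```python
# Hedged sketch: a Python/SciPy analog of the bounds-only call
#   x = lsqlin(C, d, [], [], [], [], lb, ub)
# using scipy.optimize.lsq_linear. Illustrative data only.
import numpy as np
from scipy.optimize import lsq_linear

C = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
d = np.array([7.0, 8.0, 9.0])
lb = np.array([0.0, 0.0])
ub = np.array([1.0, 1.0])

res = lsq_linear(C, d, bounds=(lb, ub))
x = res.x                            # solution satisfying lb <= x <= ub
resnorm = np.sum((C @ x - d) ** 2)   # squared 2-norm of the residual
```

Here the unconstrained minimizer lies outside the box, so the returned solution sits on the bound boundary.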

x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0) sets the starting point to x0. Set lb=[] and ub=[] if no bounds exist.

x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options) minimizes with the optimization parameters specified in the structure options. Use optimset to set these parameters.

x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options,p1,p2,...) passes the problem-dependent parameters p1,p2,... directly to the Jacobian multiply function if it exists. Specify the Jacobian multiply function using the JacobMult options parameter.

[x,resnorm] = lsqlin(...) returns the value of the squared 2-norm of the residual, norm(C*x-d)^2.

[x,resnorm,residual] = lsqlin(...) returns the residual, C*x-d.

[x,resnorm,residual,exitflag] = lsqlin(...) returns a value exitflag that describes the exit condition.

[x,resnorm,residual,exitflag,output] = lsqlin(...) returns a structure output that contains information about the optimization.

[x,resnorm,residual,exitflag,output,lambda] = lsqlin(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.

Input Arguments

Function Arguments contains general descriptions of arguments passed in to lsqlin. Options provides the function-specific details for the options parameters.

Output Arguments

Function Arguments contains general descriptions of arguments returned by lsqlin. This section provides function-specific details for exitflag, lambda, and output:


exitflag
Describes the exit condition:

  > 0    The function converged to a solution x.
    0    The maximum number of function evaluations or iterations was exceeded.
  < 0    The function did not converge to a solution.

lambda
Structure containing the Lagrange multipliers at the solution x (separated by constraint type). The fields are:

  lower      Lower bounds lb
  upper      Upper bounds ub
  ineqlin    Linear inequalities
  eqlin      Linear equalities
output
Structure containing information about the optimization. The fields are:

  iterations      Number of iterations taken
  algorithm       Algorithm used
  cgiterations    Number of PCG iterations (large-scale algorithm only)
  firstorderopt   Measure of first-order optimality (large-scale algorithm only)

For large-scale bound-constrained problems, the first-order optimality is the infinity norm of v.*g, where v is defined as in Box Constraints, and g is the gradient g = C^T*C*x - C^T*d (see Nonlinear Least-Squares).
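As a hedged sketch of this measure (the helper name is invented here, and the scaling vector v follows the Coleman-Li box-constraint convention rather than actual toolbox code):

```python
# Sketch: first-order optimality measure ||v.*g||_inf for a
# box-constrained least-squares problem. The function name and the
# exact definition of v are assumptions based on the Coleman-Li scaling.
import numpy as np

def first_order_opt(x, C, d, lb, ub):
    g = C.T @ (C @ x - d)          # gradient of 0.5*||C*x - d||^2
    v = np.empty_like(x)
    for i in range(x.size):
        if g[i] < 0:
            # descent pushes x[i] up: distance to the upper bound
            v[i] = x[i] - ub[i] if np.isfinite(ub[i]) else -1.0
        else:
            # descent pushes x[i] down: distance to the lower bound
            v[i] = x[i] - lb[i] if np.isfinite(lb[i]) else 1.0
    return np.max(np.abs(v * g))
```

At an interior stationary point the gradient vanishes, and at an active bound v vanishes, so the measure goes to zero at a solution.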


Optimization options parameters used by lsqlin. You can set or change the values of these parameters using the optimset function. Some parameters apply to all algorithms, some are only relevant when using the large-scale algorithm, and others are only relevant when using the medium-scale algorithm. See Optimization Parameters for detailed information.

We start by describing the LargeScale option since it states a preference for which algorithm to use. It is only a preference since certain conditions must be met to use the large-scale algorithm. For lsqlin, when the problem has only upper and lower bounds, i.e., no linear inequalities or equalities are specified, the default algorithm is the large-scale method. Otherwise the medium-scale algorithm is used:

LargeScale
    Use the large-scale algorithm if possible when set to 'on'. Use the medium-scale algorithm when set to 'off'.

Medium-Scale and Large-Scale Algorithms.   These parameters are used by both the medium-scale and large-scale algorithms:

Diagnostics
    Print diagnostic information about the function to be minimized.
Display
    Level of display. 'off' displays no output; 'iter' displays output at each iteration; 'final' (default) displays just the final output.
MaxIter
    Maximum number of iterations allowed.

Large-Scale Algorithm Only.   These parameters are used only by the large-scale algorithm:

JacobMult
    Function handle for Jacobian multiply function. For large-scale structured problems, this function computes the Jacobian matrix products J*Y, J'*Y, or J'*(J*Y) without actually forming J. The function is of the form
  • W = jmfun(Jinfo,Y,flag,p1,p2,...)
where Jinfo and the additional parameters p1,p2,... contain the matrices used to compute J*Y (or J'*Y, or J'*(J*Y)). Jinfo is the same as the first argument of lsqlin and p1,p2,... are the same additional parameters that are passed to lsqlin.
  • lsqlin(Jinfo,...,options,p1,p2,...)
Y is a matrix that has the same number of rows as there are dimensions in the problem. flag determines which product to compute. If flag == 0 then W = J'*(J*Y). If flag > 0 then W = J*Y. If flag < 0 then W = J'*Y. In each case, J is not formed explicitly. lsqlin uses Jinfo to compute the preconditioner.
See Quadratic Minimization with a Dense but Structured Hessian for a related example.
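The flag dispatch can be sketched in Python (assumption: Jinfo here is simply the dense matrix J itself, purely for illustration; in a real structured problem Jinfo would encode J implicitly):

```python
# Sketch of the JacobMult flag convention: select which Jacobian
# product to compute without (conceptually) forming J.
import numpy as np

def jmfun(Jinfo, Y, flag):
    J = Jinfo  # assumption: Jinfo is the matrix itself in this toy sketch
    if flag == 0:
        return J.T @ (J @ Y)   # W = J'*(J*Y)
    elif flag > 0:
        return J @ Y           # W = J*Y
    else:
        return J.T @ Y         # W = J'*Y
```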
MaxPCGIter
    Maximum number of PCG (preconditioned conjugate gradient) iterations (see the Algorithm section below).
PrecondBandWidth
    Upper bandwidth of preconditioner for PCG. By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations.
TolFun
    Termination tolerance on the function value.
TolPCG
    Termination tolerance on the PCG iteration.
TypicalX
    Typical x values.


Examples

Find the least-squares solution to the over-determined system C*x = d subject to A*x <= b and lb <= x <= ub.

First, enter the coefficient matrices and the lower and upper bounds.

Next, call the constrained linear least-squares routine.

Entering x, lambda.ineqlin, lambda.lower, and lambda.upper displays the solution and the Lagrange multiplier vectors.

Nonzero elements of the vectors in the fields of lambda indicate active constraints at the solution. In this case, the second inequality constraint (in lambda.ineqlin) and the first and second lower bound constraints (in lambda.lower) are active constraints (i.e., the solution is on their constraint boundaries).
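Since the example's matrices are not reproduced here, the following is a hedged Python analog with made-up data (SciPy's SLSQP solver stands in for lsqlin; none of these arrays come from the toolbox example):

```python
# Hedged sketch: solve min 0.5*||C*x - d||^2  s.t.  A*x <= b, lb <= x <= ub
# with scipy.optimize.minimize (SLSQP). Illustrative data only.
import numpy as np
from scipy.optimize import minimize

C = np.array([[0.9, 0.7, 0.6, 0.4],
              [0.2, 0.4, 0.8, 0.9],
              [0.6, 0.0, 0.9, 0.9],
              [0.5, 0.8, 0.7, 0.4],
              [0.9, 0.4, 0.2, 0.9]])
d = np.array([0.1, 0.4, 0.8, 0.0, 0.1])
A = np.array([[0.2, 0.3, 0.7, 0.5],
              [0.2, 0.2, 0.4, 0.4],
              [0.6, 0.0, 0.9, 0.8]])
b = np.array([0.5, 0.2, 0.7])
lb = -0.1 * np.ones(4)
ub = 2.0 * np.ones(4)

res = minimize(lambda x: 0.5 * np.sum((C @ x - d) ** 2),
               np.zeros(4),
               jac=lambda x: C.T @ (C @ x - d),
               method='SLSQP',
               bounds=list(zip(lb, ub)),
               constraints=[{'type': 'ineq',
                             'fun': lambda x: b - A @ x,  # >= 0 when feasible
                             'jac': lambda x: -A}])

# Active constraints are those with (b - A @ res.x) near 0, or components
# of res.x sitting on lb or ub, mirroring the nonzero lambda fields above.
```

SLSQP reports multipliers less directly than lsqlin, so active constraints are identified here from the constraint residuals instead.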


Notes

For problems with no constraints, \ (matrix left division) should be used. For example, x = A\b.
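In Python terms (an analog, not toolbox code), the unconstrained case corresponds to numpy.linalg.lstsq:

```python
import numpy as np

# Unconstrained least squares: the analog of MATLAB's x = A\b for an
# over-determined system (fitting a line through three points here).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
# x[0] is the intercept, x[1] the slope of the fitted line
```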

Since the problem being solved is always convex, lsqlin will find a global, although not necessarily unique, solution.

Better numerical results are likely if you specify equalities explicitly using Aeq and beq, instead of implicitly using lb and ub.

Large-Scale Optimization.   If x0 is not strictly feasible, lsqlin chooses a new strictly feasible (centered) starting point.

If components of x have no upper (or lower) bounds, then lsqlin prefers that the corresponding components of ub (or lb) be set to Inf (or -Inf for lb) as opposed to an arbitrary but very large positive (or negative in the case of lower bounds) number.


Algorithm

Large-Scale Optimization.   When the problem given to lsqlin has only upper and lower bounds, i.e., no linear inequalities or equalities are specified, and the matrix C has at least as many rows as columns, the default algorithm is the large-scale method. This method is a subspace trust-region method based on the interior-reflective Newton method described in [1]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region Methods for Nonlinear Minimization and Preconditioned Conjugate Gradients.

Medium-Scale Optimization.   When the LargeScale parameter is set to 'off' with optimset, or when linear inequalities or equalities are given, lsqlin is based on quadprog, which uses an active set method similar to that described in [2]. It finds an initial feasible solution by first solving a linear programming problem. See Quadratic Programming in the "Introduction to Algorithms" section.
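The connection to quadprog can be checked directly: expanding 0.5*||C*x - d||^2 gives the quadratic objective 0.5*x'*H*x + f'*x + const with H = C'*C and f = -C'*d. A small numerical sketch with illustrative data:

```python
import numpy as np

# Expanding 0.5*||C*x - d||^2 yields the quadratic form
#   0.5*x'*H*x + f'*x + 0.5*d'*d,  with H = C'*C and f = -C'*d,
# which is the problem handed to a QP solver such as quadprog.
C = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
d = np.array([1.0, 0.0, 1.0])
H = C.T @ C
f = -C.T @ d

x = np.array([0.3, -0.7])   # arbitrary test point
lsq = 0.5 * np.sum((C @ x - d) ** 2)
qp = 0.5 * x @ H @ x + f @ x + 0.5 * d @ d
assert np.isclose(lsq, qp)
```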


Diagnostics

Large-Scale Optimization.   The large-scale code does not allow equal upper and lower bounds. For example, if lb(2) == ub(2), then lsqlin gives an error.

At this time, the medium-scale algorithm must be used to solve equality constrained problems.

Medium-Scale Optimization.   If the matrices C, A, or Aeq are sparse, and the problem formulation is not solvable using the large-scale code, lsqlin warns that the matrices are converted to full.

lsqlin gives a warning when the solution is infeasible.

In this case, lsqlin produces a result that minimizes the worst case constraint violation.

When the equality constraints are inconsistent, lsqlin issues a warning.


At this time, the only levels of display, using the Display parameter in options, are 'off' and 'final'; iterative output using 'iter' is not available.

See Also

\, lsqnonneg, quadprog


References

[1]  Coleman, T.F. and Y. Li, "A Reflective Newton Method for Minimizing a Quadratic Function Subject to Bounds on Some of the Variables," SIAM Journal on Optimization, Vol. 6, No. 4, pp. 1040-1058, 1996.

[2]  Gill, P.E., W. Murray, and M.H. Wright, Practical Optimization, Academic Press, London, UK, 1981.
