Optimization Toolbox 
Solve the constrained linear least-squares problem

    min over x:  (1/2)*norm(C*x - d)^2
    subject to:  A*x <= b,  Aeq*x = beq,  lb <= x <= ub

where C, A, and Aeq are matrices and d, b, beq, lb, ub, and x are vectors.
Syntax
x = lsqlin(C,d,A,b)
x = lsqlin(C,d,A,b,Aeq,beq)
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub)
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0)
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options)
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options,p1,p2,...)
[x,resnorm] = lsqlin(...)
[x,resnorm,residual] = lsqlin(...)
[x,resnorm,residual,exitflag] = lsqlin(...)
[x,resnorm,residual,exitflag,output] = lsqlin(...)
[x,resnorm,residual,exitflag,output,lambda] = lsqlin(...)
Description
x = lsqlin(C,d,A,b) solves the linear system C*x = d in the least-squares sense, subject to A*x <= b, where C is m-by-n.
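As a minimal illustration of the unconstrained least-squares piece (a plain-Python sketch, not MATLAB; lsqlin itself also handles the constraints), the solution of C*x ~= d minimizes norm(C*x - d)^2 and, for full-rank C, satisfies the normal equations (C'*C)*x = C'*d:

```python
# Plain-Python sketch (illustrative only): solve a tiny overdetermined
# system C*x ~= d in the least-squares sense via the normal equations
# (C'*C)*x = C'*d.  A 3-by-2 system, so the 2x2 normal equations can be
# solved directly by Cramer's rule.

C = [[1.0, 1.0],
     [1.0, 2.0],
     [1.0, 3.0]]          # 3-by-2: more rows than columns
d = [1.0, 2.0, 2.0]

# Form C'*C (2x2) and C'*d (2-vector)
CtC = [[sum(C[k][i] * C[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Ctd = [sum(C[k][i] * d[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 normal equations by Cramer's rule
det = CtC[0][0] * CtC[1][1] - CtC[0][1] * CtC[1][0]
x = [(Ctd[0] * CtC[1][1] - CtC[0][1] * Ctd[1]) / det,
     (CtC[0][0] * Ctd[1] - Ctd[0] * CtC[1][0]) / det]
print(x)   # least-squares fit of a line through three points
```

Forming C'*C explicitly is fine at this toy scale; for ill-conditioned C, factorization-based solvers (as lsqlin uses internally) are numerically preferable.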
x = lsqlin(C,d,A,b,Aeq,beq) solves the problem above while additionally satisfying the equality constraints Aeq*x = beq. Set A = [] and b = [] if no inequalities exist.
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables, x, so that the solution is always in the range lb <= x <= ub. Set Aeq = [] and beq = [] if no equalities exist.
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0) sets the starting point to x0. Set lb = [] and ub = [] if no bounds exist.
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options) minimizes with the optimization parameters specified in the structure options. Use optimset to set these parameters.
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options,p1,p2,...) passes the problem-dependent parameters p1,p2,... directly to the Jacobian multiply function, if it exists. Specify the Jacobian multiply function using the JacobMult options parameter.
[x,resnorm] = lsqlin(...) returns the value of the squared 2-norm of the residual, norm(C*x-d)^2.
[x,resnorm,residual] = lsqlin(...) returns the residual, C*x-d.
[x,resnorm,residual,exitflag] = lsqlin(...) returns a value exitflag that describes the exit condition.
[x,resnorm,residual,exitflag,output] = lsqlin(...) returns a structure output that contains information about the optimization.
[x,resnorm,residual,exitflag,output,lambda] = lsqlin(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.
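As a plain-Python illustration (not MATLAB) of what the residual and resnorm outputs correspond to: residual is the vector C*x - d at the returned x, and resnorm is its squared 2-norm.

```python
# Illustrative sketch of the residual/resnorm outputs (plain Python):
# residual = C*x - d  and  resnorm = norm(C*x - d)^2 (squared 2-norm).
C = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
d = [1.0, 2.0, 4.0]
x = [1.0, 2.0]       # some candidate solution (not necessarily optimal)

residual = [sum(Ci[j] * x[j] for j in range(len(x))) - di
            for Ci, di in zip(C, d)]
resnorm = sum(r * r for r in residual)
print(residual, resnorm)   # residual = [0.0, 0.0, -1.0], resnorm = 1.0
```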
Input Arguments
Function Arguments contains general descriptions of arguments passed in to lsqlin. Options provides the function-specific details of the options parameters.
Output Arguments
Function Arguments contains general descriptions of arguments returned by lsqlin. This section provides function-specific details for exitflag, lambda, and output:
exitflag
  > 0    The function converged to a solution x.
  0      The maximum number of function evaluations or iterations was exceeded.
  < 0    The function did not converge to a solution.

lambda
  Structure containing the Lagrange multipliers at the solution x:
  lower      Lower bounds lb
  upper      Upper bounds ub
  ineqlin    Linear inequalities
  eqlin      Linear equalities
output
  Structure containing information about the optimization. The fields are:
  iterations     Number of iterations taken
  algorithm      Algorithm used
  cgiterations   Number of PCG iterations (large-scale algorithm only)
  firstorderopt  Measure of first-order optimality (large-scale algorithm
                 only). For large-scale bound constrained problems, the
                 first-order optimality is the infinity norm of v.*g, where
                 v is defined as in Box Constraints, and g is the gradient
                 g = C^T*C*x - C^T*d (see Nonlinear Least-Squares).
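The first-order optimality measure just described can be sketched in plain Python (illustrative only, not the toolbox implementation): the gradient of (1/2)*norm(C*x - d)^2 is g = C'*(C*x - d), and the measure is the infinity norm of the elementwise product v.*g. Here v is simplified to all ones; in the toolbox it is the Box Constraints scaling vector.

```python
# Hedged sketch: gradient of 0.5*||C*x - d||^2 is g = C'*(C*x - d);
# the bound-constrained first-order optimality measure is ||v.*g||_inf.
# v is a scaling vector (taken as all ones here purely for illustration).
C = [[1.0, 2.0],
     [3.0, 4.0]]
d = [1.0, 1.0]
x = [0.0, 0.0]
m, n = 2, 2

r = [sum(C[i][j] * x[j] for j in range(n)) - d[i] for i in range(m)]   # C*x - d
g = [sum(C[i][j] * r[i] for i in range(m)) for j in range(n)]          # C'*r
v = [1.0, 1.0]                                 # placeholder scaling vector
firstorderopt = max(abs(vj * gj) for vj, gj in zip(v, g))
print(g, firstorderopt)
```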
Options
Optimization options parameters used by lsqlin. You can set or change the values of these parameters using the optimset function. Some parameters apply to all algorithms, some are only relevant when using the large-scale algorithm, and others are only relevant when using the medium-scale algorithm. See Optimization Parameters for detailed information.
We start by describing the LargeScale option since it states a preference for which algorithm to use. It is only a preference because certain conditions must be met to use the large-scale algorithm. For lsqlin, when the problem has only upper and lower bounds, i.e., no linear inequalities or equalities are specified, the default algorithm is the large-scale method. Otherwise the medium-scale algorithm is used:

LargeScale
Use the large-scale algorithm if possible when set to 'on'. Use the medium-scale algorithm when set to 'off'.
Medium-Scale and Large-Scale Algorithms. These parameters are used by both the medium-scale and large-scale algorithms:
Large-Scale Algorithm Only. These parameters are used only by the large-scale algorithm:
JacobMult
Function handle for Jacobian multiply function. For large-scale structured problems, this function computes the Jacobian matrix products J*Y, J'*Y, or J'*(J*Y) without actually forming J. The function is of the form

    W = jmfun(Jinfo,Y,flag,p1,p2,...)

where Jinfo and the additional parameters p1,p2,... contain the matrices used to compute J*Y (or J'*Y, or J'*(J*Y)). Jinfo is the same as the first argument of lsqlin, and p1,p2,... are the same additional parameters that are passed to lsqlin. Y is a matrix that has the same number of rows as there are dimensions in the problem. flag determines which product to compute: if flag == 0 then W = J'*(J*Y); if flag > 0 then W = J*Y; if flag < 0 then W = J'*Y. In each case, J is not formed explicitly. lsqlin uses Jinfo to compute the preconditioner. See Quadratic Minimization with a Dense but Structured Hessian for a related example.
MaxPCGIter 
Maximum number of PCG (preconditioned conjugate gradient) iterations (see the Algorithm section below). 
PrecondBandWidth 
Upper bandwidth of preconditioner for PCG. By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. 
TolFun 
Termination tolerance on the function value. 
TolPCG 
Termination tolerance on the PCG iteration. 
TypicalX 
Typical x values. 
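The flag convention in the JacobMult entry above can be sketched in plain Python (illustrative only; jmfun and the dense matrix standing in for Jinfo are simplifications, since the whole point of JacobMult is to avoid forming J explicitly):

```python
# Illustrative sketch of the JacobMult flag convention:
# return J'*(J*Y) when flag == 0, J*Y when flag > 0, J'*Y when flag < 0.
# Here J is an explicit small matrix; in real use Jinfo would encode it
# implicitly (e.g., as factors) and no dense J would ever be built.

def matvec(M, y):
    """Dense matrix-vector product M*y."""
    return [sum(Mi[j] * y[j] for j in range(len(y))) for Mi in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def jmfun(J, y, flag):
    # J stands in for whatever Jinfo encodes.
    if flag == 0:
        return matvec(transpose(J), matvec(J, y))    # J'*(J*y)
    elif flag > 0:
        return matvec(J, y)                          # J*y
    else:
        return matvec(transpose(J), y)               # J'*y

J = [[1.0, 2.0],
     [0.0, 1.0]]
print(jmfun(J, [1.0, 1.0], 1))    # J*y = [3.0, 1.0]
```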
Examples
Find the least-squares solution to the overdetermined system C*x = d subject to A*x <= b and lb <= x <= ub.
First, enter the coefficient matrices and the lower and upper bounds.
C = [ 0.9501   0.7620   0.6153   0.4057
      0.2311   0.4564   0.7919   0.9354
      0.6068   0.0185   0.9218   0.9169
      0.4859   0.8214   0.7382   0.4102
      0.8912   0.4447   0.1762   0.8936];
d = [ 0.0578
      0.3528
      0.8131
      0.0098
      0.1388];
A = [ 0.2027   0.2721   0.7467   0.4659
      0.1987   0.1988   0.4450   0.4186
      0.6037   0.0152   0.9318   0.8462];
b = [ 0.5251
      0.2026
      0.6721];
lb = 0.1*ones(4,1);
ub = 2*ones(4,1);
Next, call the constrained linear least-squares routine.

[x,resnorm,residual,exitflag,output,lambda] = lsqlin(C,d,A,b,[],[],lb,ub);
Entering x, lambda.ineqlin, lambda.lower, and lambda.upper gets

x =
    0.1000
    0.1000
    0.2152
    0.3502
lambda.ineqlin =
         0
    0.2392
         0
lambda.lower =
    0.0409
    0.2784
         0
         0
lambda.upper =
         0
         0
         0
         0
Nonzero elements of the vectors in the fields of lambda indicate active constraints at the solution. In this case, the second inequality constraint (in lambda.ineqlin) and the first and second lower bound constraints (in lambda.lower) are active (i.e., the solution is on their constraint boundaries).
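The active-constraint reading above can be sketched in plain Python (illustrative only; the multiplier values are copied from the example output, and a small tolerance stands in for "nonzero"):

```python
# Hedged sketch: identify active constraints from Lagrange multipliers.
# Nonzero entries (up to a tolerance) mark active constraints, mirroring
# the lambda output in the example above.
lam_ineqlin = [0.0, 0.2392, 0.0]          # lambda.ineqlin from the example
lam_lower   = [0.0409, 0.2784, 0.0, 0.0]  # lambda.lower from the example

TOL = 1e-12
active_ineq  = [i for i, v in enumerate(lam_ineqlin) if v > TOL]
active_lower = [i for i, v in enumerate(lam_lower) if v > TOL]
print(active_ineq, active_lower)   # [1] and [0, 1] (0-based indices)
```

Note that Python indices are 0-based, so [1] corresponds to the second inequality constraint in the MATLAB example.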
Notes
For problems with no constraints, \ should be used instead. For example, x = A\b.
Since the problem being solved is always convex, lsqlin will find a global, although not necessarily unique, solution.
Better numerical results are likely if you specify equalities explicitly using Aeq and beq, instead of implicitly using lb and ub.
Large-Scale Optimization. If x0 is not strictly feasible, lsqlin chooses a new strictly feasible (centered) starting point.
If components of x have no upper (or lower) bounds, then lsqlin prefers that the corresponding components of ub (or lb) be set to Inf (or -Inf for lb), as opposed to an arbitrary but very large positive (or negative, in the case of lower bounds) number.
Algorithm
Large-Scale Optimization. When the problem given to lsqlin has only upper and lower bounds, i.e., no linear inequalities or equalities are specified, and the matrix C has at least as many rows as columns, the default algorithm is the large-scale method. This method is a subspace trust-region method based on the interior-reflective Newton method described in [1]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region Methods for Nonlinear Minimization and Preconditioned Conjugate Gradients.
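The algorithm-selection rule just described can be sketched as a small predicate (plain Python; the function name and signature are hypothetical, purely for illustration):

```python
# Hedged sketch of the default algorithm choice described above:
# large-scale when only bound constraints are present, C has at least as
# many rows as columns, and the LargeScale preference is on;
# medium-scale otherwise.  default_algorithm is a made-up helper name.

def default_algorithm(m, n, has_ineq, has_eq, large_scale_pref=True):
    """m, n: dimensions of C; has_ineq/has_eq: linear constraints present."""
    only_bounds = not (has_ineq or has_eq)
    if large_scale_pref and only_bounds and m >= n:
        return 'large-scale'
    return 'medium-scale'

# The example problem above has linear inequalities, so it falls back
# to the medium-scale algorithm:
print(default_algorithm(5, 4, has_ineq=True, has_eq=False))
```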
Medium-Scale Optimization. lsqlin, with the LargeScale parameter set to 'off' with optimset, or when linear inequalities or equalities are given, is based on quadprog, which uses an active set method similar to that described in [2]. It finds an initial feasible solution by first solving a linear programming problem. See Quadratic Programming in the "Introduction to Algorithms" section.
Diagnostics
Large-Scale Optimization. The large-scale code does not allow equal upper and lower bounds. For example, if lb(2) == ub(2), then lsqlin gives the error

Equal upper and lower bounds not permitted in this large-scale method. Use equality constraints and the medium-scale method instead.

At this time, the medium-scale algorithm must be used to solve equality constrained problems.
Medium-Scale Optimization. If the matrices C, A, or Aeq are sparse, and the problem formulation is not solvable using the large-scale code, lsqlin warns that the matrices are converted to full:

Warning: This problem formulation not yet available for sparse matrices. Converting to full to solve.
lsqlin gives a warning when the solution is infeasible. In this case, lsqlin produces a result that minimizes the worst case constraint violation. When the equality constraints are inconsistent, lsqlin likewise issues a warning.
Limitations
At this time, the only levels of display, using the Display parameter in options, are 'off' and 'final'; iterative output using 'iter' is not available.
See Also
lsqcurvefit, lsqnonlin

References
[1] Coleman, T.F. and Y. Li, "A Reflective Newton Method for Minimizing a Quadratic Function Subject to Bounds on Some of the Variables," SIAM Journal on Optimization, Vol. 6, No. 4, pp. 1040-1058, 1996.
[2] Gill, P.E., W. Murray, and M.H. Wright, Practical Optimization, Academic Press, London, UK, 1981.