Solve the constrained linear least-squares problem

    min  (1/2)*norm(C*x - d)^2
     x
    subject to   A*x <= b
                 Aeq*x = beq
                 lb <= x <= ub

where C, A, and Aeq are matrices and d, b, beq, lb, ub, and x are vectors.
x = lsqlin(C,d,A,b)
x = lsqlin(C,d,A,b,Aeq,beq)
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub)
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0)
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options)
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options,p1,p2,...)
[x,resnorm] = lsqlin(...)
[x,resnorm,residual] = lsqlin(...)
[x,resnorm,residual,exitflag] = lsqlin(...)
[x,resnorm,residual,exitflag,output] = lsqlin(...)
[x,resnorm,residual,exitflag,output,lambda] = lsqlin(...)
x = lsqlin(C,d,A,b)
solves the linear system C*x = d in the least-squares sense subject to A*x <= b.
x = lsqlin(C,d,A,b,Aeq,beq)
solves the problem above while additionally satisfying the equality constraints Aeq*x = beq. Set A = [] and b = [] if no inequalities exist.
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub)
defines a set of lower and upper bounds on the design variables, x, so that the solution is always in the range lb <= x <= ub. Set Aeq = [] and beq = [] if no equalities exist.
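As a minimal sketch of a bounds-only call (the matrix C, vector d, and bound values below are made-up placeholders, not from this reference page):

```matlab
% Hypothetical 3-by-2 system: fit C*x to d with each component of x in [0, 1].
C  = [1 2; 3 4; 5 6];
d  = [7; 8; 9];
lb = zeros(2,1);        % lower bounds
ub = ones(2,1);         % upper bounds
x  = lsqlin(C,d,[],[],[],[],lb,ub);   % no inequality or equality constraints
```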
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0)
sets the starting point to x0. Set lb = [] and ub = [] if no bounds exist.
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options)
minimizes with the optimization parameters specified in the structure options. Use optimset to set these parameters.
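A short sketch of passing an options structure (Display and MaxIter are standard optimset parameters; the problem data C, d, A, b, lb, and ub are assumed to be already defined):

```matlab
% Build an options structure with optimset and pass it in the tenth argument.
options = optimset('Display','final','MaxIter',200);
x = lsqlin(C,d,A,b,[],[],lb,ub,[],options);  % [] for unused Aeq, beq, x0
```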
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options,p1,p2,...)
passes the problem-dependent parameters p1,p2,... directly to the Jacobian multiply function if it exists. Specify the Jacobian multiply function using the JacobMult options parameter.
[x,resnorm] = lsqlin(...)
returns the value of the squared 2-norm of the residual, norm(C*x-d)^2.
[x,resnorm,residual] = lsqlin(...)
returns the residual, C*x-d.
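The two outputs are related by resnorm = norm(residual)^2, which can be checked directly (C, d, A, and b are assumed to be already defined):

```matlab
% Verify the relationship between the residual outputs.
[x,resnorm,residual] = lsqlin(C,d,A,b);
abs(resnorm - norm(residual)^2)   % near zero, up to round-off
```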
[x,resnorm,residual,exitflag] = lsqlin(...)
returns a value
exitflag that describes the exit condition.
[x,resnorm,residual,exitflag,output] = lsqlin(...)
returns a structure
output that contains information about the optimization.
[x,resnorm,residual,exitflag,output,lambda] = lsqlin(...)
returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.
Function Arguments contains general descriptions of arguments passed in to lsqlin. Options provides function-specific details for the options parameters.
Function Arguments contains general descriptions of arguments returned by lsqlin. This section provides function-specific details for exitflag and output:
exitflag    Describes the exit condition:
            > 0    The function converged to a solution x.
            0      The maximum number of function evaluations or iterations was exceeded.
            < 0    The function did not converge to a solution.

output      Structure containing information about the optimization. The fields are:
            iterations       Number of iterations taken
            cgiterations     Number of PCG iterations (large-scale algorithm only)
            firstorderopt    Measure of first-order optimality (large-scale algorithm only)
For large-scale bound constrained problems, the first-order optimality is the infinity norm of v.*g, where v is defined as in Box Constraints and g is the gradient.
Optimization options parameters used by
lsqlin. You can set or change the values of these parameters using the
optimset function. Some parameters apply to all algorithms, some are only relevant when using the large-scale algorithm, and others are only relevant when using the medium-scale algorithm. See Optimization Parameters for detailed information.
We start by describing the
LargeScale option since it states a preference for which algorithm to use. It is only a preference since certain conditions must be met to use the large-scale algorithm. For
lsqlin, when the problem has only upper and lower bounds, i.e., no linear inequalities or equalities are specified, the default algorithm is the large-scale method. Otherwise the medium-scale algorithm is used:
LargeScale    Use large-scale algorithm if possible when set to 'on'. Use medium-scale algorithm when set to 'off'.
Medium-Scale and Large-Scale Algorithms. These parameters are used by both the medium-scale and large-scale algorithms:
Diagnostics    Print diagnostic information about the function to be minimized.
Display        Level of display.
MaxIter        Maximum number of iterations allowed.
Large-Scale Algorithm Only. These parameters are used only by the large-scale algorithm:
JacobMult    Function handle for Jacobian multiply function. For large-scale structured problems, this function computes Jacobian matrix products without actually forming C.
See Quadratic Minimization with a Dense but Structured Hessian for a related example.
MaxPCGIter          Maximum number of PCG (preconditioned conjugate gradient) iterations (see the Algorithm section below).
PrecondBandWidth    Upper bandwidth of preconditioner for PCG. By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations.
TolFun              Termination tolerance on the function value.
TolPCG              Termination tolerance on the PCG iteration.
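The large-scale parameters above are set the same way as any others, through optimset; a sketch (the tolerance and bandwidth values are illustrative placeholders, not recommended defaults):

```matlab
% Widen the preconditioner band and loosen/tighten the tolerances.
options = optimset('PrecondBandWidth',2,'TolPCG',1e-2,'TolFun',1e-10);
% Bounds-only problem, so the large-scale algorithm can be used.
x = lsqlin(C,d,[],[],[],[],lb,ub,[],options);
```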
Find the least-squares solution to the overdetermined system C*x = d subject to A*x <= b and lb <= x <= ub.
First, enter the coefficient matrices and the lower and upper bounds.
C = [
    0.9501    0.7620    0.6153    0.4057
    0.2311    0.4564    0.7919    0.9354
    0.6068    0.0185    0.9218    0.9169
    0.4859    0.8214    0.7382    0.4102
    0.8912    0.4447    0.1762    0.8936];
d = [
    0.0578
    0.3528
    0.8131
    0.0098
    0.1388];
A = [
    0.2027    0.2721    0.7467    0.4659
    0.1987    0.1988    0.4450    0.4186
    0.6037    0.0152    0.9318    0.8462];
b = [
    0.5251
    0.2026
    0.6721];
lb = -0.1*ones(4,1);
ub = 2*ones(4,1);
Next, call the constrained linear least-squares routine.
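Given the data above and the discussion of lambda that follows, the call would request all six outputs and pass [] for the absent equality constraints:

```matlab
% No equality constraints, so Aeq and beq are empty.
[x,resnorm,residual,exitflag,output,lambda] = lsqlin(C,d,A,b,[],[],lb,ub);
```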
Nonzero elements of the vectors in the fields of lambda indicate active constraints at the solution. In this case, the second inequality constraint (in lambda.ineqlin) and the first and second lower bound constraints (in lambda.lower) are active (i.e., the solution is on their constraint boundaries).
For problems with no constraints, use \ (matrix left division) instead. For example, x = C\d computes the unconstrained least-squares solution.
Since the problem being solved is always convex,
lsqlin will find a global, although not necessarily unique, solution.
Better numerical results are likely if you specify equalities explicitly using Aeq and beq, instead of implicitly using lb and ub.
Large-Scale Optimization. If
x0 is not strictly feasible,
lsqlin chooses a new strictly feasible (centered) starting point.
If components of x have no upper (or lower) bounds, then lsqlin prefers that the corresponding components of ub (or lb) be set to Inf (or -Inf in the case of lower bounds), as opposed to an arbitrary but very large positive (or negative) number.
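A sketch of this convention (the problem size and the choice of which components are bounded are placeholders):

```matlab
% Prefer explicit infinite bounds over large finite surrogates like -1e30.
n  = 4;                    % number of design variables (hypothetical)
lb = [-Inf; -Inf; 0; 0];   % first two components unbounded below
ub =  Inf(n,1);            % no upper bounds at all
```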
Large-Scale Optimization. When the problem given to
lsqlin has only upper and lower bounds, i.e., no linear inequalities or equalities are specified, and the matrix
C has at least as many rows as columns, the default algorithm is the large-scale method. This method is a subspace trust-region method based on the interior-reflective Newton method described in [1]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region Methods for Nonlinear Minimization and Preconditioned Conjugate Gradients.
lsqlin, with the LargeScale parameter set to 'off' with optimset, or when linear inequalities or equalities are given, is based on quadprog, which uses an active set method similar to that described in [2]. It finds an initial feasible solution by first solving a linear programming problem. See Quadratic Programming in the "Introduction to Algorithms" section.
Large-Scale Optimization. The large-scale code does not allow equal upper and lower bounds. For example, if lb(2) == ub(2), then lsqlin returns an error.
At this time, the medium-scale algorithm must be used to solve equality constrained problems.
Medium-Scale Optimization. If the matrices C, A, or Aeq are sparse, and the problem formulation is not solvable using the large-scale code, lsqlin warns that the matrices are converted to full.
lsqlin gives a warning when the solution is infeasible.
In this case,
lsqlin produces a result that minimizes the worst case constraint violation.
When the equality constraints are inconsistent, lsqlin issues a similar warning that no feasible solution exists.
At this time, the only levels of display available using the Display parameter in options are 'off' and 'final'; iterative output using 'iter' is not available.
[1] Coleman, T.F., and Y. Li, "A Reflective Newton Method for Minimizing a Quadratic Function Subject to Bounds on Some of the Variables," SIAM Journal on Optimization, Vol. 6, Number 4, pp. 1040-1058, 1996.

[2] Gill, P.E., W. Murray, and M.H. Wright, Practical Optimization, Academic Press, London, UK, 1981.