Linear Programming Examples in Scilab
The goal of this page is to gather various examples of linear programming in Scilab. We focus here on continuous variables and do not cover integer programming. We consider dense matrices and do not cover sparse linear programs. We use the following functions:
- karmarkar, from Scilab v5.3.2,
- linpro, from the quapro module.
These examples must be used under the terms of the CeCILL.
The karmarkar function
The karmarkar function is a linear programming solver. It is able to solve the linear program either in standard form:

Minimize c'*x
such that:
    Aeq*x = beq
    x >= 0

or in the more general form with linear inequalities and bounds:

Minimize c'*x
such that:
    Aeq*x = beq
    A*x <= b
    lb <= x <= ub
The help of the karmarkar function is provided in Scilab:
http://www.scilab.org/product/man/karmarkar.html
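For example, here is a minimal sketch of a standard-form call (the data below is ours, chosen for illustration):

// Minimize x1 + 2*x2 such that x1 + x2 = 1, x >= 0.
// The optimum is x = [1;0], with cost 1.
Aeq = [1 1];
beq = 1;
c = [1;2];
xopt = karmarkar(Aeq,beq,c)
// xopt should be close to [1;0]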
Despite its name, the function is not based on the Karmarkar algorithm, but on a primal affine-scaling interior point algorithm. This algorithm was discovered by Dikin in 1967, and then re-discovered by Barnes and by Vanderbei et al. in 1986.
Changes in Scilab 5.3.1
In version 5.3.1 of Scilab, we improved the karmarkar linear optimization solver. The goal was to improve its usability and to provide a much more flexible solver. The detailed changes are the following:
- The user can configure an output function.
- The number of iterations can be put as an output argument.
- x0 can be put as an optional argument.
- Management of inequality constraints.
- Management of bounds.
- Computation of the exitflag.
- Computation of the Lagrange multipliers.
- Documentation provided for the rtolf and gam parameters.
- Documentation provided for the stopping rule.
- More examples provided.
Several bugs have also been fixed:
- Bug # 7193 fixed - The karmarkar help page did not document the eps, gamma, and crit arguments.
- Bug # 8719 fixed - The karmarkar function printed unwanted messages.
- Bug # 8720 fixed - The karmarkar function stopped too early in the iterations.
- Bug # 8726 fixed - The karmarkar function produced a division-by-zero error.
- Bug # 8727 fixed - The karmarkar function required the initial guess x0.
- Bug # 8775 fixed - The karmarkar function did not detect unbounded problems.
The quapro module
http://atoms.scilab.org/toolboxes/quapro
The quapro module defines linear and quadratic programming solvers. The matrices defining the cost and the constraints must be full, but the quadratic term matrix is not required to be full rank.
The features of the quapro module are:
- linpro : linear programming solver
- mps2linpro : converts a linear program given in MPS format to the linpro format
- quapro : linear and quadratic programming solver
This module was created by Eduardo Casas Renteria and Cecilia Pola Mendez (Universidad de Cantabria), improved by Serge Steer (INRIA), and maintained by Allan Cornet and Michaël Baudin (Consortium Scilab - DIGITEO).
To install the quapro module, run:

atomsInstall("quapro");

and then restart Scilab.
In this page, we focus on the linpro function.
The linpro function can solve linear programs in general form:
Minimize c'*x
such that:
    A*x <= b
    Aeq*x = beq
    lb <= x <= ub
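As a minimal sketch (again with made-up data), the calling sequence looks like this:

// Minimize -2*x1 - x2 such that x1 + x2 <= 1, 0 <= x <= 1.
// The optimum is x = [1;0], with cost -2.
c = [-2;-1];
A = [1 1];
b = 1;
ci = [0;0];
cs = [1;1];
[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)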
Example #1
The following problem is extracted from "Practical Optimization", Antoniou, Lu, 2007, Chapter 11, "Linear Programming Part I: The Simplex Method", Example 11.9, p. 361. This problem is solved by the primal affine-scaling algorithm in Chapter 12, "Linear Programming Part II: Interior Point Methods", Example 12.2, p. 382.
The program is the following.
Minimize 2.x1 + 9.x2 + 3.x3
such that:
    2.x1 - 2.x2 - x3 <= -1
    -x1 - 4.x2 + x3 <= -1
    x >= 0
The solution is
xstar = [0 1/3 1/3]';
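As a quick sanity check (our own verification, not from the book), this point is feasible and gives the cost 4:

xstar = [0;1/3;1/3];
A = [ 2 -2 -1; -1 -4 1 ];
disp(A*xstar)         // both components equal -1: the constraints are active
disp([2 9 3]*xstar)   // expected optimal cost: 4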
With karmarkar
The following script solves the problem with the karmarkar function.
A = [
    2 -2 -1
   -1 -4  1
];
b = [-1;-1];
c = [2;9;3];
lb = [0;0;0];
[xopt,fopt,exitflag,iter,yopt] = karmarkar([],[],c,[],[],[],[],[],A,b,lb)
yopt.ineqlin
yopt.lower
This produces the following output.
-->[xopt,fopt,exitflag,iter,yopt] = karmarkar([],[],c,[],[],[],[],[],A,b,lb)
 yopt  =
   ineqlin: [2x1 constant]
   eqlin: [0x0 constant]
   lower: [3x1 constant]
   upper: [3x1 constant]
 iter  =
    70.
 exitflag  =
    1.
 fopt  =
    4.00004
 xopt  =
    0.0000016
    0.3333387
    0.3333296
-->yopt.ineqlin
 ans  =
    3.5
    0.5
-->yopt.lower
 ans  =
    8.5
    2.560D-09
  - 1.790D-09
With linpro
The following script solves the problem with the linpro function.
A = [
    2 -2 -1
   -1 -4  1
];
b = [-1;-1];
c = [2;9;3];
ci = [0;0;0];
cs = [%inf;%inf;%inf];
[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
The previous script produces the following output.
-->[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
 fopt  =
    4.
 lagr  =
  - 8.5
    0.
    0.
    3.5
    0.5
 xopt  =
    0.
    0.3333333
    0.3333333
Example #2
This example is extracted from "Operations Research: Applications and Algorithms" by Wayne L. Winston, Section 5.2, "The Computer and Sensitivity Analysis", in the "Degeneracy and Sensitivity Analysis" subsection.
The linear program is:
Maximize 6*x1 + 4*x2 + 3*x3 + 2*x4
such that:
    2*x1 + 3*x2 + x3 + 2*x4 <= 400
    x1 + x2 + 2*x3 + x4 <= 150
    2*x1 + x2 + x3 + 0.5*x4 <= 200
    3*x1 + x2 + x4 <= 250
    x >= 0
The solution is
xstar = [50;100;0;0]
With karmarkar
The following script solves the problem with the karmarkar function.
c = [-6 -4 -3 -2]';
A = [
    2 3 1 2
    1 1 2 1
    2 1 1 0.5
    3 1 0 1
];
b = [400 150 200 250]';
lb = [0;0;0;0];
[xopt,fopt,exitflag,iter,yopt] = karmarkar([],[],c,[],[],[],[],[],A,b,lb)
The previous script produces the following output.
-->[xopt,fopt,exitflag,iter,yopt] = karmarkar([],[],c,[],[],[],[],[],A,b,lb)
 yopt  =
   ineqlin: [4x1 constant]
   eqlin: [0x0 constant]
   lower: [4x1 constant]
   upper: [4x1 constant]
 iter  =
    69.
 exitflag  =
    1.
 fopt  =
  - 699.99624
 xopt  =
    50.000068
    99.998479
    0.0003436
    0.0004409
With linpro
The following script solves the problem with the linpro function.
c = [-6 -4 -3 -2]';
A = [
    2 3 1 2
    1 1 2 1
    2 1 1 0.5
    3 1 0 1
];
b = [400 150 200 250]';
ci = [0 0 0 0]';
cs = [%inf %inf %inf %inf]';
[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
This produces:
-->[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
 fopt  =
  - 700.
 lagr  =
    0.
    0.
    0.
  - 1.5
    0.5
    1.25
    0.
    1.25
 xopt  =
    50.
    100.
    2.842D-14
    0.
Example #3
This example is extracted from "Operations Research: Applications and Algorithms" by Wayne L. Winston, Section 5.2, "The Computer and Sensitivity Analysis", "Problems Group A, 1.".
We want to solve the linear program:
Maximize 150.x1 + 200.x2 - 10.x3
such that:
    x1 + x2 <= 45
    6.x1 + 10.x2 - x3 <= 0
    5.x1 <= 140
    4.x2 <= 120
    x3 <= 350
    x >= 0
The solution is:
xstar = [25,20,350]
With karmarkar
The following script solves the problem with the karmarkar function.
c = [-150 -200 +10]';
A = [
    1  1  0
    6 10 -1
    5  0  0
    0  4  0
    0  0  1
];
b = [45 0 140 120 350]';
lb = [0;0;0];
[xopt,fopt,exitflag,iter,yopt] = karmarkar([],[],c,[],[],[],[],[],A,b,lb)
The previous script produces the following output.
-->[xopt,fopt,exitflag,iter,yopt] = karmarkar([],[],c,[],[],[],[],[],A,b,lb)
 yopt  =
   ineqlin: [5x1 constant]
   eqlin: [0x0 constant]
   lower: [3x1 constant]
   upper: [3x1 constant]
 iter  =
    89.
 exitflag  =
    1.
 fopt  =
  - 4249.9602
 xopt  =
    25.00115
    19.998673
    349.99469
With linpro
The following script solves the problem with the linpro function.
c = [-150 -200 +10]';
A = [
    1  1  0
    6 10 -1
    5  0  0
    0  4  0
    0  0  1
];
b = [45 0 140 120 350]';
ci = [0 0 0]';
cs = [%inf %inf %inf]';
[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
This produces:
-->[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
 fopt  =
  - 4250.
 lagr  =
    0.
    0.
    0.
    75.
    12.5
    0.
    0.
    2.5
 xopt  =
    25.
    20.
    350.
Example #4
Consider the linear program:
Minimize -20.x1 - 24.x2
such that:
    3.x1 + 6.x2 <= 60
    4.x1 + 2.x2 <= 32
    x >= 0
The solution is
xstar = [4;8];
With karmarkar
The following script solves the problem with the karmarkar function.
c = [-20 -24]';
A = [
    3 6
    4 2
];
b = [60 32]';
lb = [0;0];
[xopt,fopt,exitflag,iter,yopt] = karmarkar([],[],c,[],[],[],[],[],A,b,lb)
The previous script produces the following output.
-->[xopt,fopt,exitflag,iter,yopt] = karmarkar([],[],c,[],[],[],[],[],A,b,lb)
 yopt  =
   ineqlin: [2x1 constant]
   eqlin: [0x0 constant]
   lower: [2x1 constant]
   upper: [2x1 constant]
 iter  =
    66.
 exitflag  =
    1.
 fopt  =
  - 271.99746
 xopt  =
    3.9998866
    7.9999887
With linpro
The following script solves the problem with the linpro function.
c = [-20;-24];
A = [
    3 6
    4 2
];
b = [60;32];
ci = [0;0];
cs = [%inf;%inf];
[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
This produces:
-->[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
 fopt  =
  - 272.
 lagr  =
    0.
    0.
    3.1111111
    2.6666667
 xopt  =
    4.
    8.
Example #5
This example is from the Lipsol toolbox, by Yin Zhang, as ported into Scilab by Rubio Scola in example0.sce.
Minimize 2.x1 + 5.x2 - 2.5.x3
such that:
    x1 + S4.x3 <= 5
    E2.x1 - x2 - x3 <= 0
    -2 <= x1 <= 2
    1 <= x2 <= %inf
    0 <= x3 <= 3

where S4 = sin(%pi/4)/4 and E2 = exp(2).
The solution is
xstar = [-2;1;3];
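As a quick check (ours, not from the toolbox), this point satisfies both general inequalities and attains the cost -6.5:

S4 = sin(%pi/4)/4;
E2 = exp(2);
xstar = [-2;1;3];
disp([1 0 S4; E2 -1 -1]*xstar)   // -1.4696699 <= 5 and -18.778112 <= 0: feasible
disp([2 5 -2.5]*xstar)           // expected optimal cost: -6.5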
With karmarkar
The following script solves the problem with the karmarkar function.
S4 = sin(%pi/4)/4;
E2 = exp(2);
c = [2; 5; -2.5];
A = [
    1   0  S4
    E2 -1  -1
    1   0   0
    0   0   1
   -1   0   0
    0  -1   0
    0   0  -1
];
b = [5; 0; 2; 3; 2; -1; 0];
lb = [-2;1;0];
ub = [2;%inf;3];
[xopt,fopt,exitflag,iter,yopt] = karmarkar([],[],c,[],[],[],[],[],A,b,lb,ub)
The previous script produces the following output.
-->[xopt,fopt,exitflag,iter,yopt] = karmarkar([],[],c,[],[],[],[],[],A,b,lb,ub)
 yopt  =
   ineqlin: [7x1 constant]
   eqlin: [0x0 constant]
   lower: [3x1 constant]
   upper: [3x1 constant]
 iter  =
    75.
 exitflag  =
    1.
 fopt  =
  - 6.4999498
 xopt  =
  - 1.9999916
    1.0000033
    2.9999933
With linpro
The following script solves the problem with the linpro function.
S4 = sin(%pi/4)/4;
E2 = exp(2);
c = [2; 5; -2.5];
A = [
    1   0  S4
    E2 -1  -1
];
b = [5; 0];
ci = [-2;1;0];
cs = [2;%inf;3];
[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
The previous script produces the following output.
-->[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
 fopt  =
  - 6.5
 lagr  =
  - 2.
  - 5.
    2.5
    0.
    0.
 xopt  =
  - 2.
    1.
    3.
Example #6
Consider the linear program:

Minimize 600*x1 + 600*x2 + 600*x3 + 600*x4 + 600*x5 + 600*x6 + 600*x7
         + 200*x8 + 200*x9 + 200*x10 + 200*x11 + 200*x12 + 200*x13 + 200*x14
such that:
    8*x1 + 8*x4 + 8*x5 + 8*x6 + 8*x7 + 4*x8 + 4*x11 + 4*x12 + 4*x13 + 4*x14 >= 88
    8*x1 + 8*x2 + 8*x5 + 8*x6 + 8*x7 + 4*x8 + 4*x9 + 4*x12 + 4*x13 + 4*x14 >= 136
    8*x1 + 8*x2 + 8*x3 + 8*x6 + 8*x7 + 4*x8 + 4*x9 + 4*x10 + 4*x13 + 4*x14 >= 104
    8*x1 + 8*x2 + 8*x3 + 8*x4 + 8*x7 + 4*x8 + 4*x9 + 4*x10 + 4*x11 + 4*x14 >= 120
    8*x1 + 8*x2 + 8*x3 + 8*x4 + 8*x5 + 4*x8 + 4*x9 + 4*x10 + 4*x11 + 4*x12 >= 152
    8*x2 + 8*x3 + 8*x4 + 8*x5 + 8*x6 + 4*x9 + 4*x10 + 4*x11 + 4*x12 + 4*x13 >= 112
    8*x3 + 8*x4 + 8*x5 + 8*x6 + 8*x7 + 4*x10 + 4*x11 + 4*x12 + 4*x13 + 4*x14 >= 128
    -20*x8 - 20*x9 - 20*x10 - 20*x11 - 20*x12 - 20*x13 - 20*x14 >= -840
    xi >= 0 (i = 1,...,14)
There are several solutions to this problem, but all are associated with the objective function value:
fstar = 9200
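As a sanity check (ours), the optimal point returned by linpro below attains this value:

xstar = [0 0 0 0 0 0 4/3 0 38/3 10 2/3 44/3 0 4]';
c = [600*ones(7,1); 200*ones(7,1)];
disp(c'*xstar)   // expected: 9200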
With karmarkar
The following script solves the problem with the karmarkar function.
c = [600 600 600 600 600 600 600 200 200 200 200 200 200 200]';
A = [
    8 0 0 8 8 8 8   4   0   0   4   4   4   4
    8 8 0 0 8 8 8   4   4   0   0   4   4   4
    8 8 8 0 0 8 8   4   4   4   0   0   4   4
    8 8 8 8 0 0 8   4   4   4   4   0   0   4
    8 8 8 8 8 0 0   4   4   4   4   4   0   0
    0 8 8 8 8 8 0   0   4   4   4   4   4   0
    0 0 8 8 8 8 8   0   0   4   4   4   4   4
    0 0 0 0 0 0 0 -20 -20 -20 -20 -20 -20 -20
];
b = [88 136 104 120 152 112 128 -840]';
// karmarkar expects A*x <= b: flip the signs of the >= constraints
A = -A;
b = -b;
lb = zeros(14,1);
[xopt,fopt,exitflag,iter,yopt] = karmarkar([],[],c,[],[],[],[],[],A,b,lb)
The previous script produces the following output.
-->[xopt,fopt,exitflag,iter,yopt] = karmarkar([],[],c,[],[],[],[],[],A,b,lb)
 yopt  =
   ineqlin: [8x1 constant]
   eqlin: [0x0 constant]
   lower: [14x1 constant]
   upper: [14x1 constant]
 iter  =
    68.
 exitflag  =
    1.
 fopt  =
    9200.0482
 xopt  =
    0.2130791
    0.2373697
    0.2444532
    0.1395045
    0.2749702
    0.0000344
    0.2240255
    4.7906317
    6.9752061
    8.1870869
    1.7117333
    14.116657
    0.0000689
    6.2185468
With linpro
The following script solves the problem with the linpro function.

c = [600 600 600 600 600 600 600 200 200 200 200 200 200 200]';
A = [
    8 0 0 8 8 8 8   4   0   0   4   4   4   4
    8 8 0 0 8 8 8   4   4   0   0   4   4   4
    8 8 8 0 0 8 8   4   4   4   0   0   4   4
    8 8 8 8 0 0 8   4   4   4   4   0   0   4
    8 8 8 8 8 0 0   4   4   4   4   4   0   0
    0 8 8 8 8 8 0   0   4   4   4   4   4   0
    0 0 8 8 8 8 8   0   0   4   4   4   4   4
    0 0 0 0 0 0 0 -20 -20 -20 -20 -20 -20 -20
];
// linpro also expects A*x <= b: flip the signs of the >= constraints
A = -A;
b = [88 136 104 120 152 112 128 -840]';
b = -b;
[n,p] = size(A);
ci = zeros(p,1);
cs = %inf*ones(p,1);
[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
This produces:
-->[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
 fopt  =
    9200.
 lagr  =
    0.
  - 6.519D-15
  - 6.355D-14
    3.411D-13
  - 5.084D-13
  - 200.
    0.
    1.990D-13
    0.
    0.
    0.
    0.
  - 100.
    0.
    3.038D-14
    25.
    0.
    25.
    25.
    0.
    25.
    5.
 xopt  =
    1.475D-15
    1.872D-15
  - 9.103D-16
    0.
  - 2.483D-16
    1.448D-15
    1.3333333
    0.
    12.666667
    10.
    0.6666667
    14.666667
    0.
    4.
Acknowledgments
We thank Reinaldo Golmia Dante for providing the initial material for this document.
Appendix
Limitations of karmarkar before Scilab 5.3.1
Before Scilab 5.3.1, the karmarkar function required a strictly feasible initial guess. This is why the resolution required two steps:
- phase 1: search for a feasible initial guess, based on the resolution of a modified problem,
- phase 2: resolution of the original problem.
Moreover, the only constraints taken into account were the linear equality constraints. This is why we had to introduce slack variables if we wanted to take linear inequalities such as A*x <= b into account.
The feasibility problem (phase 1) and the introduction of slack variables are described in the next two sections. These algorithms are now directly provided by Scilab 5.3.1, so that the user now uses them implicitly, without having to call these functions explicitly.
Feasibility problem
In this section, we describe a method to find an initial guess x0 which is strictly feasible for a problem in standard form. This method is used by the karmarkar function since Scilab 5.3.1. The method that we present is classical and is presented for example in "A Primal-Dual Exterior Point Algorithm For Linear Programming Problems" by Samaras, Sifaleras, Triantafyllidis. Another version of the same idea was presented in "A modification of karmarkar's linear programming algorithm" by Vanderbei et al.
Assume that we must solve the linear program:
Minimize c'*x
such that:
    Aeq*x = beq
    x >= 0
In order to find a feasible point, we solve the following problem:

Minimize z(p+1)
such that:
    AAeq*z = bbeq
    z >= 0
where
z=[x(1) x(2) ... x(p) x(p+1)].
The matrix AAeq has the same number of rows as Aeq, but one more column. The initial guess for the modified problem is

z0 = [1 1 ... 1]' = e(p+1),

where e(p+1) represents a column vector of p+1 ones. The last column of AAeq is

beq - Aeq*e(p),
so that the initial guess z0 satisfies the equation
AAeq*z0=beq.
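Indeed, AAeq*z0 = Aeq*e(p) + (beq - Aeq*e(p)) = beq. The following minimal sketch (our own check, on a small made-up system) verifies the construction numerically:

Aeq = [ 2 -2 -1 1 0; -1 -4 1 0 1 ];
beq = [-1;-1];
[n,p] = size(Aeq);
AAeq = [Aeq, beq - Aeq*ones(p,1)];   // append the column beq - Aeq*e(p)
z0 = ones(p+1,1);
disp(AAeq*z0 - beq)                  // expected: zeros(2,1)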
Then we solve the modified linear program. There are three main cases.
- If the feasibility problem has no solution, then the original problem has no solution.
- If the feasibility problem has a solution with z(p+1)>0, then no strictly feasible point x0 can be found. In this case, the value of z(p+1) measures the smallest offset that must be added to the constraints to create a feasible linear program.
- If the feasibility problem has a solution with z(p+1)=0, we have an initial guess x0 and can move on to phase 2.
The following function uses the previous method to find a feasible initial guess x0.
function x0 = karmarkar_searchx0 ( Aeq , beq , c , eps , gam )
    // Search a strictly feasible initial guess for the linear program
    //
    //   Minimize c'*x
    //   Aeq*x = beq
    //   x >= 0
    //
    // Copyright (C) 2010 - DIGITEO - Michael Baudin
    // Must be used under the terms of the CeCILL V2: http://www.cecill.info
    [n,p] = size(Aeq)
    cc = [zeros(p,1);1]
    AAeq = [Aeq, beq-Aeq*ones(p,1)]
    bbeq = beq
    z0 = ones(p+1,1)
    zopt = karmarkar(AAeq,bbeq,cc,z0,eps,gam)
    x0 = zopt(1:p)
endfunction
Slack and positive variables
In this section, we show how to change the formulation of an optimization problem in order to manage bounds or linear inequalities. This method is used by the karmarkar function since Scilab 5.3.1.
In practice, we may have to solve linear programs which may not be in standard form. Instead, we may have to solve programs in the form:
Minimize c'*x
    A*x <= b
where A is an n-by-p matrix, b is an n-by-1 column vector and c is a p-by-1 column vector. We can introduce the slack variable s, an n-by-1 column vector which satisfies the equality:
A*x + s = b.
We must have
s >= 0.
The equality constraint can be written as
[A I]*[x;s] = b.
From there, there are two cases, depending on the constraint on x:
- either x must be non-negative,
- or x is unrestricted.
In the case where the original problem has the additional constraint
x >= 0,
we are done. It now suffices to consider the variable z=[x;s] and to modify the linear cost to cc=[c;0], where the zero column vector has n entries.
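A minimal sketch of this case (our own illustration, reusing the data of Example #4):

A = [ 3 6; 4 2 ];
b = [60;32];
c = [-20;-24];
[n,p] = size(A);
Aeq = [A, eye(n,n)];    // one slack variable per inequality row
cc = [c; zeros(n,1)];
// z = [x;s] solves: Minimize cc'*z such that Aeq*z = b, z >= 0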
In the case where the variable x is unrestricted, we can write it as the difference of two non-negative variables:
x = xp - xn
where xp and xn are both non-negative. This leads to the linear program:
Minimize c'*(xp-xn)
    A*(xp-xn) + s = b
    xp, xn, s >= 0
The previous program can be written in matrix form:
Minimize [c;-c;0]'*[xp;xn;s]
    [A,-A,I]*[xp;xn;s] = b
    xp, xn, s >= 0
The previous program is a linear program in standard form:

Minimize cc'*z
    Aeq*z = b
    z >= 0
where
cc = [c;-c;0]
Aeq = [A,-A,I]
The following function applies the previous method to preprocess a linear program in general form.
function [Aeq,beq,cc] = karmarkar_preprocess ( A , b , c )
    // Converts the linear program
    //
    //   Minimize c'*x
    //   A*x <= b
    //
    // into
    //
    //   Minimize cc'*z
    //   Aeq*z = beq
    //   z >= 0
    //
    // where x = z(1:p) - z(p+1:2*p).
    //
    // Copyright (C) 2010 - DIGITEO - Michael Baudin
    // Must be used under the terms of the CeCILL V2: http://www.cecill.info
    [n,p] = size(A)
    Aeq = [A -A eye(n,n)]
    beq = b
    cc = [c;-c;zeros(n,1)]
endfunction
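As a usage sketch (our own illustration), we convert the inequality constraints of Example #1, this time treating x as unrestricted:

A = [ 2 -2 -1; -1 -4 1 ];
b = [-1;-1];
c = [2;9;3];
[Aeq,beq,cc] = karmarkar_preprocess(A,b,c)
// Aeq is the 2-by-8 matrix [A, -A, eye(2,2)] and cc = [c;-c;0;0]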
An example of karmarkar for Scilab <= 5.3.0
In this section, we present the resolution of a linear program with Scilab versions older than 5.3.1. This example is the example #1.
We introduce slack variables and get:
Minimize 2.x1 + 9.x2 + 3.x3
    2.x1 - 2.x2 - x3 + x4 = -1
    -x1 - 4.x2 + x3 + x5 = -1
    x >= 0
The following script solves the problem. Here, the initial guess x0 is given.
Aeq = [
    2 -2 -1 1 0
   -1 -4  1 0 1
];
beq = [-1;-1];
c = [2;9;3;0;0];
x0 = [0.2;0.7;1;1;1];
xopt = karmarkar(Aeq,beq,c,x0)
This produces the following output.
-->xopt = karmarkar(Aeq,beq,c,x0)
     1.    0.856E+01    0.117E+00
     2.    0.765E+01    0.107E+00
     3.    0.690E+01    0.977E-01
     4.    0.629E+01    0.878E-01
     5.    0.580E+01    0.776E-01
     6.    0.541E+01    0.672E-01
     7.    0.510E+01    0.571E-01
     8.    0.486E+01    0.477E-01
     9.    0.467E+01    0.394E-01
    10.    0.452E+01    0.321E-01
[...]
    36.    0.400E+01    0.278E-04
    37.    0.400E+01    0.209E-04
    38.    0.400E+01    0.156E-04
    39.    0.400E+01    0.117E-04
    40.    0.400E+01    0.880E-05
 xopt  =
    0.0000041
    0.3333474
    0.3333235
    0.0000101
    0.0000704
Now, assume that the initial guess x0 is unknown.
We solve the feasibility problem by introducing one extra variable x(p+1). The following script solves the problem.
-->x0 = karmarkar_searchx0(Aeq,beq,c,0,0.999)
     1.    0.100E-02    0.999E+00
     2.    0.100E-05    0.999E-03
     3.    0.100E-08    0.999E-06
     4.    0.100E-11    0.999E-09
     5.    0.100E-14    0.999E-12
     6.    0.100E-17    0.999E-15
     7.    0.100E-20    0.999E-18
     8.    0.100E-23    0.999E-21
     9.    0.100E-26    0.999E-24
    10.    0.100E-29    0.999E-27
    11.    0.100E-32    0.999E-30
    12.    0.100E-35    0.999E-33
    13.    0.100E-38    0.999E-36
    14.    0.100E-41    0.999E-39
[...]
    53.    0.100-158    0.999-156
    54.    0.100-161    0.999-159
    55.    0.100-161    0.000E+00
 x0  =
    0.497293
    0.7455296
    1.3277695
    0.8242427
    1.1516418
The previous script produces the initial guess:
x0 = [0.497293;0.7455296;1.3277695;0.8242427;1.1516418]
We now plug the initial guess x0 into the original problem and get:
-->xopt = karmarkar(Aeq,beq,c,x0,1.e-14,0.999)
     1.    0.537E+01    0.541E+00
     2.    0.417E+01    0.222E+00
     3.    0.401E+01    0.402E-01
     4.    0.400E+01    0.108E-02
     5.    0.400E+01    0.195E-03
     6.    0.400E+01    0.316E-04
     7.    0.400E+01    0.101E-05
     8.    0.400E+01    0.156E-06
     9.    0.400E+01    0.244E-07
    10.    0.400E+01    0.957E-09
    11.    0.400E+01    0.130E-09
    12.    0.400E+01    0.193E-10
    13.    0.400E+01    0.899E-12
    14.    0.400E+01    0.110E-12
    15.    0.403E+01    0.155E-13
    16.    0.403E+01    0.832E-15
 xopt  =
    3.885D-19
    0.3361047
    0.3362025
    1.059D-16
    1.218D-16
References
- "Iterative solution of problems of linear and quadratic programming", Dikin, Doklady Akademii Nausk SSSR, Vol. 174, pp. 747-748, 1967
- "A New Polynomial Time Algorithm for Linear Programming", Narendra Karmarkar, Combinatorica, Vol 4, nr. 4, p. 373–395, 1984.
- "A variation on Karmarkar’s algorithm for solving linear programming problems, Earl R. Barnes, Mathematical Programming, Volume 36, Number 2, 174-182, 1986.
- "A modification of karmarkar's linear programming algorithm", Robert J. Vanderbei, Marc S. Meketon and Barry A. Freedman, Algorithmica, Volume 1, Numbers 1-4, 395-407, 1986.
- "Practical Optimization: Algorithms and Engineering Applications", Andreas Antoniou, Wu-Sheng Lu, Springer, 2007, Chapter 12, "Linear Programming Part II: Interior Point Methods".
- "Global Convergence of a Long-Step Affine Scaling Algorithm for Degenerate Linear Programming Problems", Takashi Tsuchiya and Masakazu Muramatsu, SIAM J. Optim. Volume 5, Issue 3, pp. 525-551 (August 1995)
- "The convergence of dual variables", Dikin, Tech. report, Siberian Energy Institute, Russia, 1991
- "The Affine Scaling Algorithm Fails for Stepsize 0.999", Walter F. Mascarenhas, SIAM J. Optim. Volume 7, Issue 1, pp. 34-46 (1997)
- "A Primal-Dual Exterior Point Algorithm For Linear Programming Problems", Nikolaos Samaras, Angelo ifaleras, Charalampos Triantafyllidis, Yugoslav Journal of Operations Research, Vol 19 (2009), Number 1, 123-132
- "Operations Research: applications and algorithms", Wayne L. Winstons.