big M method

[‚big ′em ‚meth·əd]
(computer science)
A technique for solving linear programming problems in which artificial variables are assigned a very large cost coefficient M, say M = 10^35.
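A minimal sketch of the idea, using a hypothetical example problem and SciPy's `linprog` solver: one artificial variable is added per equality constraint, each with cost M, so that any feasible solution of the original problem drives the artificials to zero. (A smaller M such as 10^6 is used here, since 10^35 would cause floating-point trouble in practice.)

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical example problem: minimize 2*x1 + 3*x2
# subject to x1 + x2 = 10, x1 - x2 = 2, with x1, x2 >= 0.
c = np.array([2.0, 3.0])
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
b = np.array([10.0, 2.0])

M = 1e6  # the "big M" penalty on artificial variables
m, n = A.shape

# Extended problem: minimize c^T x + M * sum(a)
# subject to A x + I a = b, x >= 0, a >= 0.
c_ext = np.concatenate([c, M * np.ones(m)])
A_ext = np.hstack([A, np.eye(m)])

res = linprog(c_ext, A_eq=A_ext, b_eq=b,
              bounds=[(0, None)] * (n + m))
x = res.x[:n]            # solution of the original variables
artificials = res.x[n:]  # near zero iff the original problem is feasible
```

If the artificial variables remain positive at the optimum, the original problem has no feasible solution; otherwise `x` solves it (here x1 = 6, x2 = 4).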