big M method

[‚big ′em ‚meth·əd]
(computer science)
A technique for solving linear programming problems in which artificial variables are assigned cost coefficients which are a very large number M, say, M = 10³⁵.
McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.
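The definition can be made concrete with a small worked example. Below is a minimal sketch in Python, assuming NumPy and SciPy are available; the particular LP, the variable names, and the choice M = 10⁶ are illustrative assumptions, not part of the dictionary entry. (A literal M = 10³⁵ would overwhelm floating-point arithmetic; in exact arithmetic any sufficiently large M works.) The augmented problem is handed to scipy.optimize.linprog rather than to a hand-rolled simplex routine: the point illustrated is the big-M formulation itself, not the solver.

import numpy as np
from scipy.optimize import linprog

# Original problem: minimize 3*x1 + 5*x2
# subject to   x1 + 2*x2 = 8
#             3*x1 + 2*x2 = 12,   x >= 0
c = np.array([3.0, 5.0])
A = np.array([[1.0, 2.0],
              [3.0, 2.0]])
b = np.array([8.0, 12.0])

# Big-M augmentation: add one artificial variable per equality
# constraint and charge it a very large cost M in the objective.
M = 1e6                                     # illustrative stand-in for 10**35
n, m = c.size, b.size
c_aug = np.concatenate([c, M * np.ones(m)]) # cost M on each artificial
A_aug = np.hstack([A, np.eye(m)])           # A @ x + a = b

res = linprog(c_aug, A_eq=A_aug, b_eq=b, bounds=[(0, None)] * (n + m))

x, a = res.x[:n], res.x[n:]
print("x =", x)            # optimal x of the original problem
print("artificials =", a)  # driven to (numerically) zero by the M penalty

Because any positive artificial variable adds a penalty of at least M to the objective, the optimizer drives the artificials to zero whenever the original constraints are feasible, so the x-part of the augmented solution is optimal for the original problem.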