OR-Tools  8.2
revised_simplex.h
// Copyright 2010-2018 Google LLC
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Solves a Linear Programming problem using the Revised Simplex algorithm
// as described by G.B. Dantzig.
// The general form is:
//   min c.x where c and x are n-vectors,
//   subject to Ax = b where A is an mxn-matrix, b an m-vector,
//   with l <= x <= u, i.e.
//   l_i <= x_i <= u_i for all i in {1 .. n}.
//
// c.x is called the objective function.
// Each row a_i of A is an n-vector, and a_i.x = b_i is a linear constraint.
// A is called the constraint matrix.
// b is called the right hand side (rhs) of the problem.
// The constraints l_i <= x_i <= u_i are called the generalized bounds
// of the problem (most introductory textbooks only deal with x_i >= 0, as
// did the first version of the Simplex algorithm). Note that l_i and u_i
// can be -infinity and +infinity, respectively.
//
// To simplify the entry of data, this code actually handles problems in the
// form:
//   min c.x where c and x are n-vectors,
//   subject to:
//     A1 x <= b1
//     A2 x >= b2
//     A3 x  = b3
//     l <= x <= u
//
// It transforms the above problem into
//   min c.x where c and x are n-vectors,
//   subject to:
//     A1 x + s1 = b1
//     A2 x - s2 = b2
//     A3 x      = b3
//     l <= x <= u
//     s1 >= 0, s2 >= 0
// where the variable vector is extended to (x, s1, s2),
// s1 is an m1-vector of slack variables (m1 being the height of A1), and
// s2 is an m2-vector of slack variables (m2 being the height of A2).
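//
// For illustration, a tiny instance of the form above can be built with the
// LinearProgram API from ortools/lp_data/lp_data.h. This is only a sketch;
// the exact calls below are assumptions and not part of this header:
//
//   LinearProgram lp;
//   const ColIndex x = lp.CreateNewVariable();
//   const ColIndex y = lp.CreateNewVariable();
//   lp.SetVariableBounds(x, 0.0, 3.0);              // 0 <= x <= 3
//   lp.SetVariableBounds(y, 0.0, 3.0);              // 0 <= y <= 3
//   lp.SetObjectiveCoefficient(x, -1.0);            // min -x - y
//   lp.SetObjectiveCoefficient(y, -1.0);
//   const RowIndex c1 = lp.CreateNewConstraint();   // x + 2y <= 4  (A1 block)
//   lp.SetConstraintBounds(c1, -kInfinity, 4.0);
//   lp.SetCoefficient(c1, x, 1.0);
//   lp.SetCoefficient(c1, y, 2.0);
//   const RowIndex c2 = lp.CreateNewConstraint();   // x + y >= 1   (A2 block)
//   lp.SetConstraintBounds(c2, 1.0, kInfinity);
//   lp.SetCoefficient(c2, x, 1.0);
//   lp.SetCoefficient(c2, y, 1.0);
//
// Note that RevisedSimplex::Solve() additionally expects the slack variables
// to have been added, see LinearProgram::AddSlackVariablesForAllRows() below.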
//
// The following are very good references for terminology, data structures,
// and algorithms. They all contain a wealth of references.
//
// Vasek Chvátal, "Linear Programming," W.H. Freeman, 1983.
// ISBN 978-0716715870. http://www.amazon.com/dp/0716715872
//
// Robert J. Vanderbei, "Linear Programming: Foundations and Extensions,"
// Springer, 2010. ISBN-13: 978-1441944979.
// http://www.amazon.com/dp/1441944974
//
// Istvan Maros, "Computational Techniques of the Simplex Method," Springer,
// 2002. ISBN 978-1402073328. http://www.amazon.com/dp/1402073321
//
// ===============================================
// Short description of the dual simplex algorithm.
//
// The dual simplex algorithm uses the same data structures as the primal, but
// progresses towards the optimal solution in a different way:
// * It tries to keep the dual values dual-feasible at all times, which means
//   that the reduced costs have the correct sign depending on the bounds of
//   the non-basic variables. As a consequence, the values of the basic
//   variables are out of their bounds until the optimum is reached.
// * A basic leaving variable is selected first (dual pricing) and then a
//   corresponding entering variable is selected. This is done in such a way
//   that the dual objective value increases (it is a lower bound on the
//   optimal solution).
// * Once the basis pivot is chosen, the variable values and the reduced costs
//   are updated the same way as in the primal algorithm.
//
// Good references on the dual simplex algorithm are:
//
// Robert Fourer, "Notes on the Dual Simplex Method," March 14, 1994.
// http://users.iems.northwestern.edu/~4er/WRITINGS/dual.pdf
//
// Achim Koberstein, "The dual simplex method, techniques for a fast and stable
// implementation," PhD thesis, Paderborn University, 2005.
// http://digital.ub.uni-paderborn.de/hs/download/pdf/3885?originalFilename=true
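//
// The dual simplex can be selected through the GlopParameters proto. A minimal
// sketch (assuming the use_dual_simplex field declared in parameters.proto):
//
//   GlopParameters params;
//   params.set_use_dual_simplex(true);
//   RevisedSimplex simplex;
//   simplex.SetParameters(params);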

#ifndef OR_TOOLS_GLOP_REVISED_SIMPLEX_H_
#define OR_TOOLS_GLOP_REVISED_SIMPLEX_H_

#include <string>
#include <vector>

#include "ortools/base/integral_types.h"
#include "ortools/base/logging.h"
#include "ortools/base/macros.h"
#include "ortools/glop/basis_representation.h"
#include "ortools/glop/dual_edge_norms.h"
#include "ortools/glop/entering_variable.h"
#include "ortools/glop/parameters.pb.h"
#include "ortools/glop/primal_edge_norms.h"
#include "ortools/glop/reduced_costs.h"
#include "ortools/glop/status.h"
#include "ortools/glop/update_row.h"
#include "ortools/glop/variable_values.h"
#include "ortools/glop/variables_info.h"
#include "ortools/lp_data/lp_data.h"
#include "ortools/lp_data/scattered_vector.h"
#include "ortools/lp_data/sparse_row.h"
#include "ortools/util/random_engine.h"
#include "ortools/util/stats.h"
#include "ortools/util/time_limit.h"

namespace operations_research {
namespace glop {

// Holds the statuses of all the variables, including slack variables. There
// is no point storing constraint statuses since internally all constraints are
// always fixed to zero.
//
// Note that this is the minimal amount of information needed to perform a
// "warm start". Using this information and the original linear program, the
// basis can be refactorized and all the needed quantities derived.
//
// TODO(user): Introduce another state class to store a complete state of the
// solver. Using this state and the original linear program, the solver can be
// restarted with as little time overhead as possible. This is especially
// useful for strong branching in a MIP context.
struct BasisState {
  // TODO(user): A MIP solver will potentially store a lot of BasisStates so
  // memory usage is important. It is possible to use only 2 bits for one
  // VariableStatus enum. To achieve this, the FIXED_VALUE status can be
  // converted to either AT_LOWER_BOUND or AT_UPPER_BOUND and decoded properly
  // later since this will be used with a given linear program. This way we can
  // even encode more information by using the reduced cost sign to choose to
  // which bound the fixed status corresponds.
  VariableStatusRow statuses;

  // Returns true if this state is empty.
  bool IsEmpty() const { return statuses.empty(); }
};
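
// A typical warm-start pattern with the RevisedSimplex class declared below
// (a sketch; "simplex", "modified_lp" and "time_limit" are illustrative
// names):
//
//   BasisState state = simplex.GetState();      // after a successful Solve()
//   // ... modify the bounds or the objective of the linear program ...
//   simplex.LoadStateForNextSolve(state);
//   const Status status = simplex.Solve(modified_lp, time_limit);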

// Entry point of the revised simplex algorithm implementation.
class RevisedSimplex {
 public:
  RevisedSimplex();

  // Sets or gets the algorithm parameters to be used on the next Solve().
  void SetParameters(const GlopParameters& parameters);
  const GlopParameters& GetParameters() const { return parameters_; }

  // Solves the given linear program.
  //
  // Expects that the linear program is in the equations form Ax = 0 created by
  // LinearProgram::AddSlackVariablesForAllRows(), i.e. the rightmost square
  // submatrix of A is an identity matrix, all of its columns have been marked
  // as slack variables, and the bounds of all constraints have been set to
  // [0, 0]. Returns ERROR_INVALID_PROBLEM if these assumptions are violated.
  //
  // By default, the algorithm tries to exploit the computation done during the
  // last Solve() call. It will analyze the difference with the new linear
  // program and try to use the previously computed solution as a warm-start.
  // To disable this behavior or to give explicit warm-start data, use one of
  // the State*() functions below.
  ABSL_MUST_USE_RESULT Status Solve(const LinearProgram& lp,
                                    TimeLimit* time_limit);

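  // A minimal usage sketch (the names below are illustrative; TimeLimit comes
  // from ortools/util/time_limit.h):
  //
  //   RevisedSimplex simplex;
  //   simplex.SetParameters(params);
  //   std::unique_ptr<TimeLimit> time_limit = TimeLimit::Infinite();
  //   const Status status = simplex.Solve(lp, time_limit.get());
  //   // After checking that status reports no error:
  //   for (ColIndex col(0); col < simplex.GetProblemNumCols(); ++col) {
  //     const Fractional value = simplex.GetVariableValue(col);
  //     const Fractional reduced_cost = simplex.GetReducedCost(col);
  //     ...
  //   }
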
  // Do not use the current solution as a warm-start for the next Solve(). The
  // next Solve() will behave as if the class just got created.
  void ClearStateForNextSolve();

  // Uses the given state as a warm-start for the next Solve() call.
  void LoadStateForNextSolve(const BasisState& state);

  // Advanced usage. Tells the next Solve() that the matrix inside the linear
  // program will not change compared to the one used the last time Solve() was
  // called. This makes it possible to bypass the somewhat costly check of
  // comparing both matrices. Note that this call will be ignored if Solve()
  // was never called or if ClearStateForNextSolve() was called.
  void NotifyThatMatrixIsUnchangedForNextSolve();

  // Getters to retrieve all the information computed by the last Solve().
  RowIndex GetProblemNumRows() const;
  ColIndex GetProblemNumCols() const;
  Fractional GetVariableValue(ColIndex col) const;
  Fractional GetReducedCost(ColIndex col) const;
  const DenseRow& GetReducedCosts() const;
  Fractional GetDualValue(RowIndex row) const;
  Fractional GetConstraintActivity(RowIndex row) const;
  VariableStatus GetVariableStatus(ColIndex col) const;
  ConstraintStatus GetConstraintStatus(RowIndex row) const;
  const BasisState& GetState() const;
  double DeterministicTime() const;
  bool objective_limit_reached() const { return objective_limit_reached_; }

  // If the problem status is PRIMAL_UNBOUNDED (respectively DUAL_UNBOUNDED),
  // then the solver has a corresponding primal (respectively dual) ray to show
  // the unboundedness. From a primal (respectively dual) feasible solution,
  // any positive multiple of this ray can be added to the solution while
  // keeping it feasible. Moreover, by doing so, the objective of the problem
  // improves and its magnitude goes to infinity.
  //
  // Note that when the problem is DUAL_UNBOUNDED, the dual ray is also known
  // as the Farkas proof of infeasibility of the problem.
  const DenseRow& GetPrimalRay() const;
  const DenseColumn& GetDualRay() const;
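  // For example (a sketch), if the last Solve() finished with problem status
  // PRIMAL_UNBOUNDED:
  //
  //   const DenseRow& ray = simplex.GetPrimalRay();
  //   // For any t >= 0, adding t * ray to the current solution keeps it
  //   // feasible and improves the objective without bound.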

  // This is the "dual ray" linear combination of the matrix rows.
  const DenseRow& GetDualRayRowCombination() const;

  // Returns the index of the column in the basis at the given row. Note that
  // the order of the columns in the basis is important since it is the one
  // used by the various solve functions provided by the BasisFactorization
  // class.
  ColIndex GetBasis(RowIndex row) const;

  const ScatteredRow& GetUnitRowLeftInverse(RowIndex row) {
    return update_row_.ComputeAndGetUnitRowLeftInverse(row);
  }

  // Returns a copy of the basis_ vector for outside applications (like cuts)
  // to have the correspondence between rows and columns of the dictionary.
  RowToColMapping GetBasisVector() const { return basis_; }
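  // For example (a sketch), to enumerate the basic variable of each row:
  //
  //   for (RowIndex row(0); row < simplex.GetProblemNumRows(); ++row) {
  //     const ColIndex basic_col = simplex.GetBasis(row);
  //     ...
  //   }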

  const BasisFactorization& GetBasisFactorization() const;

  // Returns statistics about this class as a string.
  std::string StatString();

  // Computes the dictionary B^-1*N on-the-fly row by row. Returns the
  // resulting matrix as a vector of sparse rows so that it is easy to use it
  // on the left side in a matrix multiplication. Runs in
  // O(num_non_zeros_in_matrix).
  // TODO(user): Use row scales as well.
  RowMajorSparseMatrix ComputeDictionary(const DenseRow* column_scales);

  // Initializes the matrix for the given 'linear_program' and 'state' and
  // computes the variable values for basic variables using the non-basic
  // variables.
  void ComputeBasicVariablesForState(const LinearProgram& linear_program,
                                     const BasisState& state);

  // This is used in a MIP context to polish the final basis. We assume that
  // the columns for which SetIntegralityScale() has been called correspond to
  // integral variables once multiplied by the given factor.
  void ClearIntegralityScales() { integrality_scale_.clear(); }
  void SetIntegralityScale(ColIndex col, Fractional scale);
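  // For example (a sketch): if the variable of column col is known to take
  // values that are integer multiples of 0.5, then 2.0 * value is integral and
  // one would call:
  //
  //   simplex.SetIntegralityScale(col, 2.0);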

 private:
  // Propagates parameters_ to all the other classes that need it.
  //
  // TODO(user): Maybe a better design is for them to have a reference to a
  // unique parameters object? It will clutter a bit more these classes'
  // constructor though.
  void PropagateParameters();

  // Returns a string containing the same information as with GetSolverStats,
  // but in a much more human-readable format. For example:
  //     Problem status                          : Optimal
  //     Solving time                            : 1.843
  //     Number of iterations                    : 12345
  //     Time for solvability (first phase)      : 1.343
  //     Number of iterations for solvability    : 10000
  //     Time for optimization                   : 0.5
  //     Number of iterations for optimization   : 2345
  //     Maximum time allowed in seconds         : 6000
  //     Maximum number of iterations            : 1000000
  //     Stop after first basis                  : 0
  std::string GetPrettySolverStats() const;

  // Returns a string containing formatted information about the variable
  // corresponding to column col.
  std::string SimpleVariableInfo(ColIndex col) const;

  // Displays a short string with the current iteration and objective value.
  void DisplayIterationInfo() const;

  // Displays the error bounds of the current solution.
  void DisplayErrors() const;

  // Displays the status of the variables.
  void DisplayInfoOnVariables() const;

  // Displays the bounds of the variables.
  void DisplayVariableBounds();

  // Displays the following information:
  //   * Linear Programming problem as a dictionary, taking into
  //     account the iterations that have been made;
  //   * Variable info;
  //   * Reduced costs;
  //   * Variable bounds.
  // A dictionary is in the form:
  //   xB = value + sum_{j in N} pa_ij x_j
  //   z = objective_value + sum_{i in N} rc_i x_i
  // where the pa's are the coefficients of the matrix after the pivotings
  // and the rc's are the reduced costs, i.e. the coefficients of the objective
  // after the pivotings.
  // Dictionaries are the modern way of presenting the result of an iteration
  // of the simplex algorithm in the literature.
  void DisplayRevisedSimplexDebugInfo();

  // Displays the Linear Programming problem as it was input.
  void DisplayProblem() const;

  // Returns the current objective value. This is just the sum of the current
  // variable values times their current cost.
  Fractional ComputeObjectiveValue() const;

  // Returns the current objective of the linear program given to Solve() using
  // the initial costs, maximization direction, objective offset and objective
  // scaling factor.
  Fractional ComputeInitialProblemObjectiveValue() const;

  // Assigns names to variables. Variables in the input will be named
  // x1..., slack variables will be s1... .
  void SetVariableNames();

  // Computes the initial variable status from its type. A constrained variable
  // is set to the one of its two bounds with the smallest absolute value.
  VariableStatus ComputeDefaultVariableStatus(ColIndex col) const;

  // Sets the variable status and derives the variable value according to the
  // exact status definition. This can only be called for non-basic variables
  // because the value of a basic variable is computed from the values of the
  // non-basic variables.
  void SetNonBasicVariableStatusAndDeriveValue(ColIndex col,
                                               VariableStatus status);

  // Checks if the basis_ and is_basic_ arrays are well formed. Also checks
  // that the variable statuses are consistent with this basis. Returns true if
  // this is the case. This is meant to be used in debug mode only.
  bool BasisIsConsistent() const;

  // Moves the column entering_col into the basis at position basis_row.
  // Removes the current basis column at position basis_row from the basis and
  // sets its status to leaving_variable_status.
  void UpdateBasis(ColIndex entering_col, RowIndex basis_row,
                   VariableStatus leaving_variable_status);

  // Initializes matrix-related internal data. Returns true if this data was
  // unchanged. If not, it also sets only_change_is_new_rows to true if,
  // compared to the current matrix, the only difference is that new rows have
  // been added (with their corresponding extra slack variables). Similarly, it
  // sets only_change_is_new_cols to true if the only difference is that new
  // columns have been added, in which case it also sets num_new_cols to the
  // number of new columns.
  bool InitializeMatrixAndTestIfUnchanged(const LinearProgram& lp,
                                          bool* only_change_is_new_rows,
                                          bool* only_change_is_new_cols,
                                          ColIndex* num_new_cols);

  // Initializes bound-related internal data. Returns true if unchanged.
  bool InitializeBoundsAndTestIfUnchanged(const LinearProgram& lp);

  // Checks if the only change to the bounds is the addition of new columns,
  // and that the new columns have at least one bound equal to zero.
  bool OldBoundsAreUnchangedAndNewVariablesHaveOneBoundAtZero(
      const LinearProgram& lp, ColIndex num_new_cols);

  // Initializes objective-related internal data. Returns true if unchanged.
  bool InitializeObjectiveAndTestIfUnchanged(const LinearProgram& lp);

  // Computes the stopping criterion on the problem objective value.
  void InitializeObjectiveLimit(const LinearProgram& lp);

  // Initializes the variable statuses using a warm-start basis.
  void InitializeVariableStatusesForWarmStart(const BasisState& state,
                                              ColIndex num_new_cols);

  // Initializes the starting basis. In most cases it starts with the all-slack
  // basis and tries to apply some heuristics to replace fixed variables.
  ABSL_MUST_USE_RESULT Status CreateInitialBasis();

  // Sets the initial basis to the given columns, tries to factorize it and
  // recomputes the basic variable values.
  ABSL_MUST_USE_RESULT Status
  InitializeFirstBasis(const RowToColMapping& initial_basis);

  // Entry point for the solver initialization.
  ABSL_MUST_USE_RESULT Status Initialize(const LinearProgram& lp);

  // Saves the current variable statuses in solution_state_.
  void SaveState();

  // Displays statistics on what kinds of variables are in the current basis.
  void DisplayBasicVariableStatistics();

  // Tries to reduce the initial infeasibility (stored in error_) by using the
  // singleton columns present in the problem. A singleton column is a column
  // with only one non-zero. This is used by CreateInitialBasis().
  void UseSingletonColumnInInitialBasis(RowToColMapping* basis);

  // Returns the number of empty rows in the matrix, i.e. rows where all
  // the coefficients are zero.
  RowIndex ComputeNumberOfEmptyRows();

  // Returns the number of empty columns in the matrix, i.e. columns where all
  // the coefficients are zero.
  ColIndex ComputeNumberOfEmptyColumns();

  // This method transforms a basis for the first phase, with the optimal
  // value at zero, into a feasible basis for the initial problem, thus
  // preparing the execution of phase II of the algorithm.
  void CleanUpBasis();

  // If the primal maximum residual is too large, recomputes the basic variable
  // values from the non-basic ones. This function also perturbs the bounds
  // during the primal simplex if too many iterations are degenerate.
  //
  // Only call this on a refactorized basis to have the best precision.
  void CorrectErrorsOnVariableValues();

  // Computes b - A.x in error_.
  void ComputeVariableValuesError();

  // Solves the system B.d = a where a is the entering column (given by col).
  // Known as FTRAN (forward transformation) in FORTRAN codes.
  // See Chvatal's book for more detail (Chapter 7).
  void ComputeDirection(ColIndex col);

  // Computes a - B.d in error_ and returns the maximum std::abs() of its
  // coefficients.
  Fractional ComputeDirectionError(ColIndex col);

  // Computes the ratio of the basic variable corresponding to 'row'. A target
  // bound (upper or lower) is chosen depending on the sign of the entering
  // reduced cost and the sign of the direction 'd_[row]'. The ratio is such
  // that adding 'ratio * d_[row]' to the variable value changes it to its
  // target bound.
  template <bool is_entering_reduced_cost_positive>
  Fractional GetRatio(RowIndex row) const;
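  // Equivalently (a sketch of the formula implied by the comment above, where
  // 'value' denotes the current value of the basic variable of 'row'):
  //   ratio = (target_bound - value) / d_[row]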

  // First pass of the Harris ratio test. Returns the Harris ratio value which
  // is an upper bound on the ratio value that the leaving variable can take.
  // Fills leaving_candidates with the ratio and row index of a super-set of
  // the columns with a ratio <= harris_ratio.
  template <bool is_entering_reduced_cost_positive>
  Fractional ComputeHarrisRatioAndLeavingCandidates(
      Fractional bound_flip_ratio, SparseColumn* leaving_candidates) const;

  // Chooses the leaving variable, considering the entering column and its
  // associated reduced cost. If there was a precision issue and the basis is
  // not refactorized, sets refactorize to true. Otherwise, the row number of
  // the leaving variable is written in *leaving_row, and the step length
  // is written in *step_length.
  Status ChooseLeavingVariableRow(ColIndex entering_col,
                                  Fractional reduced_cost, bool* refactorize,
                                  RowIndex* leaving_row,
                                  Fractional* step_length,
                                  Fractional* target_bound);

  // Chooses the leaving variable for the primal phase-I algorithm. The
  // algorithm follows more or less what is described in Istvan Maros's book in
  // chapter 9.6 and what is done for the dual phase-I algorithm which was
  // derived from Koberstein's PhD. Both references can be found at the top of
  // this file.
  void PrimalPhaseIChooseLeavingVariableRow(ColIndex entering_col,
                                            Fractional reduced_cost,
                                            bool* refactorize,
                                            RowIndex* leaving_row,
                                            Fractional* step_length,
                                            Fractional* target_bound) const;

  // Chooses an infeasible basic variable. The returned values are:
  // - leaving_row: the basic index of the infeasible leaving variable
  //   or kNoLeavingVariable if no such row exists: the dual simplex algorithm
  //   has terminated and the optimum has been reached.
  // - cost_variation: how much we improve the objective by moving one unit
  //   along this dual edge.
  // - target_bound: the bound at which the leaving variable should go when
  //   leaving the basis.
  ABSL_MUST_USE_RESULT Status DualChooseLeavingVariableRow(
      RowIndex* leaving_row, Fractional* cost_variation,
      Fractional* target_bound);

  // Updates the prices used by DualChooseLeavingVariableRow() after a simplex
  // iteration by using direction_. The prices are stored in
  // dual_pricing_vector_. Note that this function only takes care of the
  // entering and leaving column dual feasibility status change and that other
  // changes will be dealt with by DualPhaseIUpdatePriceOnReducedCostChange().
  void DualPhaseIUpdatePrice(RowIndex leaving_row, ColIndex entering_col);

  // Updates the prices used by DualChooseLeavingVariableRow() when the reduced
  // costs of the given columns have changed.
  template <typename Cols>
  void DualPhaseIUpdatePriceOnReducedCostChange(const Cols& cols);

  // Same as DualChooseLeavingVariableRow() but for the phase I of the dual
  // simplex. Here the objective is not to minimize the primal infeasibility,
  // but the dual one, so the variable is not chosen in the same way. See
  // "Notes on the Dual Simplex Method" or Istvan Maros, "A Piecewise Linear
  // Dual Phase-1 Algorithm for the Simplex Method," Computational Optimization
  // and Applications, October 2003, Volume 26, Issue 1, pp 63-81.
  // http://rd.springer.com/article/10.1023%2FA%3A1025102305440
  ABSL_MUST_USE_RESULT Status DualPhaseIChooseLeavingVariableRow(
      RowIndex* leaving_row, Fractional* cost_variation,
      Fractional* target_bound);

  // Makes sure the boxed variables are dual-feasible by setting them to the
  // correct bound according to their reduced costs. This is called
  // dual feasibility correction in the literature.
  //
  // Note that this function is also used as a part of the bound flipping ratio
  // test by flipping the boxed dual-infeasible variables at each iteration.
  //
  // If update_basic_values is true, the basic variable values are updated.
  template <typename BoxedVariableCols>
  void MakeBoxedVariableDualFeasible(const BoxedVariableCols& cols,
                                     bool update_basic_values);

  // Computes the step needed to move the leaving_row basic variable to the
  // given target bound.
  Fractional ComputeStepToMoveBasicVariableToBound(RowIndex leaving_row,
                                                   Fractional target_bound);

  // Returns true if the basis obtained after the given pivot can be
  // factorized.
  bool TestPivot(ColIndex entering_col, RowIndex leaving_row);

  // Gets the current LU column permutation from basis_representation,
  // applies it to basis_ and then sets it to the identity permutation since
  // it will no longer be needed during solves. This function also updates all
  // the data that depends on the column order in basis_.
  void PermuteBasis();

  // Updates the system state according to the given basis pivot.
  // Returns an error if the update could not be done because of some precision
  // issue.
  ABSL_MUST_USE_RESULT Status UpdateAndPivot(ColIndex entering_col,
                                             RowIndex leaving_row,
                                             Fractional target_bound);

  // Displays all the timing stats related to the calling object.
  void DisplayAllStats();

  // Returns whether or not a basis refactorization is needed at the beginning
  // of the main loop in Minimize() or DualMinimize(). The idea is that if a
  // refactorization is going to be needed by one of the components, it is
  // better to do it as soon as possible so that every component can take
  // advantage of it.
  bool NeedsBasisRefactorization(bool refactorize);

  // Calls basis_factorization_.Refactorize() depending on the result of
  // NeedsBasisRefactorization(). Invalidates any data structure that depends
  // on the current factorization. Sets refactorize to false.
  Status RefactorizeBasisIfNeeded(bool* refactorize);

  // Minimizes the objective function, be it for satisfiability or for
  // optimization. Used by Solve().
  ABSL_MUST_USE_RESULT Status Minimize(TimeLimit* time_limit);

  // Same as Minimize() for the dual simplex algorithm.
  // TODO(user): remove duplicate code between the two functions.
  ABSL_MUST_USE_RESULT Status DualMinimize(TimeLimit* time_limit);

  // Experimental. This is useful in a MIP context. It performs a few
  // degenerate pivots to try to minimize the fractionality of the optimal
  // basis.
  //
  // We assume that the columns for which SetIntegralityScale() has been called
  // correspond to integral variables once scaled by the given factor.
  //
  // I could only find slides as a reference for this: "LP Solution Polishing
  // to improve MIP Performance," Matthias Miltenberger, Zuse Institute Berlin.
  ABSL_MUST_USE_RESULT Status Polish(TimeLimit* time_limit);

  // Utility function to return the current ColIndex of the slack column with
  // the given number. Note that currently, such columns are always present in
  // the internal representation of a linear program.
  ColIndex SlackColIndex(RowIndex row) const;

  // Advances the deterministic time in time_limit with the difference between
  // the current internal deterministic time and the internal deterministic
  // time during the last call to this method.
  // TODO(user): Update the internals of revised simplex so that the time
  // limit is updated at the source and remove this method.
  void AdvanceDeterministicTime(TimeLimit* time_limit);

  // Problem status.
  ProblemStatus problem_status_;

  // Current number of rows in the problem.
  RowIndex num_rows_;

  // Current number of columns in the problem.
  ColIndex num_cols_;

  // Index of the first slack variable in the input problem. We assume that all
  // variables with an index greater than or equal to first_slack_col_ are
  // slack variables.
  ColIndex first_slack_col_;

  // We use vectors: after profiling and looking at the generated assembly,
  // they are as fast as std::unique_ptr as long as the size is properly
  // reserved beforehand.

  // Compact version of the matrix given to Solve().
  CompactSparseMatrix compact_matrix_;

  // The transpose of compact_matrix_; it may be empty if it is not needed.
  CompactSparseMatrix transposed_matrix_;

  // Stop the algorithm and report feasibility if:
  // - The primal simplex is used, the problem is primal-feasible and the
  //   current objective value is strictly lower than primal_objective_limit_.
  // - The dual simplex is used, the problem is dual-feasible and the current
  //   objective value is strictly greater than dual_objective_limit_.
  Fractional primal_objective_limit_;
  Fractional dual_objective_limit_;

  // Current objective (feasibility for Phase-I, user-provided for Phase-II).
  DenseRow current_objective_;

  // Array of coefficients for the user-defined objective.
  // Indexed by column number. Used in Phase-II.
  DenseRow objective_;

  // Objective offset and scaling factor of the linear program given to
  // Solve(). This is used to display the correct objective values in the logs
  // with ComputeInitialProblemObjectiveValue().
  Fractional objective_offset_;
  Fractional objective_scaling_factor_;

  // Arrays of values representing the variable bounds. Indexed by column
  // number.
  DenseRow lower_bound_;
  DenseRow upper_bound_;

  // Used in dual phase I to keep track of the non-basic dual-infeasible
  // columns and their sign of infeasibility (+1 or -1).
  DenseRow dual_infeasibility_improvement_direction_;
  int num_dual_infeasible_positions_;

  // Used in dual phase I to hold the price of each possible leaving choice
  // and the bitset of the possible leaving candidates.
  DenseColumn dual_pricing_vector_;
  DenseBitColumn is_dual_entering_candidate_;

  // A temporary scattered column that is always reset to all zero after use.
  ScatteredColumn initially_all_zero_scratchpad_;

  // Array of column indices, giving the column number corresponding
  // to a given basis row.
  RowToColMapping basis_;

  // Vector of strings containing the names of the variables.
  // Indexed by column number.
  StrictITIVector<ColIndex, std::string> variable_name_;

  // Information about the solution computed by the last Solve().
  Fractional solution_objective_value_;
  DenseColumn solution_dual_values_;
  DenseRow solution_reduced_costs_;
  DenseRow solution_primal_ray_;
  DenseColumn solution_dual_ray_;
  DenseRow solution_dual_ray_row_combination_;
  BasisState solution_state_;
  bool solution_state_has_been_set_externally_;

  // Flag set by NotifyThatMatrixIsUnchangedForNextSolve() that changes the
  // behavior of Initialize().
  bool notify_that_matrix_is_unchanged_ = false;

  // This is known as 'd' in the literature. During each pivot,
  // ComputeDirection() sets it to the solution of B.d = a, where a is the
  // entering column. ComputeDirection() also fills direction_.non_zeros with
  // the positions of the non-zeros.
  ScatteredColumn direction_;
  Fractional direction_infinity_norm_;

  // Used to compute the error 'b - A.x' or 'a - B.d'.
  DenseColumn error_;

  // Representation of matrix B using eta matrices and an LU decomposition.
  BasisFactorization basis_factorization_;

  // Classes responsible for maintaining the data indicated by their names.
  VariablesInfo variables_info_;
  VariableValues variable_values_;
  DualEdgeNorms dual_edge_norms_;
  PrimalEdgeNorms primal_edge_norms_;
  UpdateRow update_row_;
  ReducedCosts reduced_costs_;

  // Class holding the algorithms to choose the entering column during a
  // simplex pivot.
  EnteringVariable entering_variable_;

  // Temporary memory used by DualMinimize().
  std::vector<ColIndex> bound_flip_candidates_;
  std::vector<std::pair<RowIndex, ColIndex>> pair_to_ignore_;

  // Total number of iterations performed.
  uint64 num_iterations_;

  // Number of iterations performed during the first (feasibility) phase.
  uint64 num_feasibility_iterations_;

  // Number of iterations performed during the second (optimization) phase.
  uint64 num_optimization_iterations_;

  // Total time spent in Solve().
  double total_time_;

  // Time spent in the first (feasibility) phase.
  double feasibility_time_;

  // Time spent in the second (optimization) phase.
  double optimization_time_;

  // The internal deterministic time during the most recent call to
  // RevisedSimplex::AdvanceDeterministicTime().
  double last_deterministic_time_update_;

  // Statistics about the iterations done by Minimize().
  struct IterationStats : public StatsGroup {
    IterationStats()
        : StatsGroup("IterationStats"),
          total("total", this),
          normal("normal", this),
          bound_flip("bound_flip", this),
          degenerate("degenerate", this),
          degenerate_run_size("degenerate_run_size", this) {}
    TimeDistribution total;
    TimeDistribution normal;
    TimeDistribution bound_flip;
    TimeDistribution degenerate;
    IntegerDistribution degenerate_run_size;
  };
  IterationStats iteration_stats_;

  struct RatioTestStats : public StatsGroup {
    RatioTestStats()
        : StatsGroup("RatioTestStats"),
          bound_shift("bound_shift", this),
          abs_used_pivot("abs_used_pivot", this),
          abs_tested_pivot("abs_tested_pivot", this),
          abs_skipped_pivot("abs_skipped_pivot", this),
          direction_density("direction_density", this),
          leaving_choices("leaving_choices", this),
          num_perfect_ties("num_perfect_ties", this) {}
    DoubleDistribution bound_shift;
    DoubleDistribution abs_used_pivot;
    DoubleDistribution abs_tested_pivot;
    DoubleDistribution abs_skipped_pivot;
    RatioDistribution direction_density;
    IntegerDistribution leaving_choices;
    IntegerDistribution num_perfect_ties;
  };
  mutable RatioTestStats ratio_test_stats_;

  // Placeholder for all the function timing stats.
  // Mutable because we time const functions like ChooseLeavingVariableRow().
  mutable StatsGroup function_stats_;

  // Proto holding all the parameters of this algorithm.
  //
  // Note that parameters_ may actually change during a solve as the solver may
  // dynamically adapt some values. This is why we store the argument of the
  // last SetParameters() call in initial_parameters_, so that the next Solve()
  // can reset it correctly.
  GlopParameters parameters_;
  GlopParameters initial_parameters_;

  // LuFactorization used to test if a pivot will cause the new basis to
  // not be factorizable.
  LuFactorization test_lu_;

  // Number of degenerate iterations made just before the current iteration.
  int num_consecutive_degenerate_iterations_;

  // Indicates whether we are in the feasibility phase (phase I) or not.
  bool feasibility_phase_;

  // Indicates whether the simplex ended due to the objective limit being
  // reached. Note that it's not enough to compare the final objective value
  // with the limit because of numerical issues (i.e., the limit which is
  // reached within a given tolerance on the internal objective may no longer
  // be reached when the objective scaling and offset are taken into account).
  bool objective_limit_reached_;

  // Temporary SparseColumn used by ChooseLeavingVariableRow().
  SparseColumn leaving_candidates_;

  // Temporary vector used to hold the best leaving column candidates that are
  // tied using the current choosing criteria. We actually only store the tied
  // candidates #2, #3, ...; the first tied candidate is remembered anyway.
  std::vector<RowIndex> equivalent_leaving_choices_;

  // A random number generator.
  random_engine_t random_;

  // This is used by Polish().
  DenseRow integrality_scale_;

  DISALLOW_COPY_AND_ASSIGN(RevisedSimplex);
};

// Hides the details of the dictionary matrix implementation. In the future,
// GLOP will support generating the dictionary one row at a time without having
// to store the whole matrix in memory.
class RevisedSimplexDictionary {
 public:
  typedef RowMajorSparseMatrix::const_iterator ConstIterator;

  // RevisedSimplex cannot be passed const because we have to call the
  // non-const method ComputeDictionary().
  // TODO(user): Overload this to take RevisedSimplex* alone when the
  // caller would normally pass a nullptr for col_scales so this and
  // ComputeDictionary() can take a const& argument.
  RevisedSimplexDictionary(const DenseRow* col_scales,
                           RevisedSimplex* revised_simplex)
      : dictionary_(
            ABSL_DIE_IF_NULL(revised_simplex)->ComputeDictionary(col_scales)),
        basis_vars_(ABSL_DIE_IF_NULL(revised_simplex)->GetBasisVector()) {}

  ConstIterator begin() const { return dictionary_.begin(); }
  ConstIterator end() const { return dictionary_.end(); }

  size_t NumRows() const { return dictionary_.size(); }

  // TODO(user): This function is a better fit for the future custom iterator.
  ColIndex GetBasicColumnForRow(RowIndex r) const { return basis_vars_[r]; }
  SparseRow GetRow(RowIndex r) const { return dictionary_[r]; }

 private:
  const RowMajorSparseMatrix dictionary_;
  const RowToColMapping basis_vars_;
  DISALLOW_COPY_AND_ASSIGN(RevisedSimplexDictionary);
};
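
// A usage sketch of the dictionary class above (names are illustrative; see
// RevisedSimplex::ComputeDictionary() for the col_scales argument):
//
//   RevisedSimplexDictionary dictionary(/*col_scales=*/nullptr, &simplex);
//   RowIndex row(0);
//   for (const SparseRow& dictionary_row : dictionary) {
//     const ColIndex basic_col = dictionary.GetBasicColumnForRow(row);
//     // dictionary_row holds the row of B^-1*N associated with basic_col.
//     ++row;
//   }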

// TODO(user): When a row-by-row generation of the dictionary is supported,
// implement a DictionaryIterator class that would call it inside operator*().

}  // namespace glop
}  // namespace operations_research

#endif  // OR_TOOLS_GLOP_REVISED_SIMPLEX_H_