3.7 Fast Backtracking and Restarts

Most clause learning algorithms use fast backtracking or conflict-directed backjumping, in which the conflict graph is used to undo not only the last branching decision but also all recent decisions that did not contribute to the current conflict (Stallman and Sussman, 1977). In particular, the SAT solver zChaff that we use in our experiments backtracks all the way to decision level zero when it learns a unit clause. This property influences the structure of the sequence generation algorithm presented in Section 6.1.1.

More precisely, the level that a clause learning algorithm employing this technique backtracks to is one less than the maximum of the decision levels of all decision variables (i.e., the sources of the conflict) present in the underlying conflict graph. Note that the current conflict might use clauses learned earlier as a result of branching on the apparently redundant variables. This implies that fast backtracking in general cannot be replaced by a ``good'' branching sequence that does not produce redundant branches. For the same reason, nor can fast backtracking be replaced by simply learning the decision scheme clause. However, the results we present in this paper are independent of whether or not fast backtracking is used.
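The level computation described above can be sketched as follows; this is an illustrative fragment, not zChaff's actual code, and the function name and interface are hypothetical:

```python
def fast_backtrack_level(source_levels):
    """Compute the level fast backtracking returns to.

    source_levels: the decision levels of the decision variables that
    appear as sources of the underlying conflict graph (a hypothetical
    interface).  Fast backtracking undoes every decision above one less
    than the maximum of these levels.
    """
    return max(source_levels) - 1
```

For instance, if the conflict's sources were decided at levels 2, 5, and 3, the solver jumps back to level 4; a conflict whose single source lies at level 1 sends the solver to decision level zero, mirroring the unit-clause behavior mentioned above.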

Restarts allow clause learning algorithms to arbitrarily restart their branching process from decision level zero. All clauses learned so far are retained, however, and are now treated as additional initial clauses (Baptista and Silva, 2000). As we will show, unlimited restarts, performed at the correct step, can make clause learning very powerful. In practice, this requires extending the strategy employed by the solver to include when and how often to restart. Unless otherwise stated, clause learning proofs will be assumed to allow no restarts.
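The interaction between restarts and retained learned clauses can be illustrated with a deliberately simplified toy solver. The sketch below is not a real clause learning procedure: the "search" merely enumerates assignments and learns a clause blocking each new conflict, and all function names are our own. What it does show is the restart policy itself: each restart begins again from decision level zero, while every clause learned so far is kept and treated as an additional initial clause that prunes the next run.

```python
from itertools import product

def satisfies(clause, assignment):
    """True iff the clause (a DIMACS-style list of literals) holds."""
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def search(clauses, learned, num_vars, conflict_budget):
    """Toy stand-in for one run of a clause-learning search: enumerate
    assignments, skipping any already pruned by learned clauses, and
    learn a clause blocking each new conflict.  Signals a restart once
    conflict_budget new conflicts have occurred."""
    conflicts = 0
    for bits in product([False, True], repeat=num_vars):
        a = dict(zip(range(1, num_vars + 1), bits))
        if not all(satisfies(c, a) for c in learned):
            continue                     # pruned by earlier learning
        if all(satisfies(c, a) for c in clauses):
            return a                     # model found
        learned.append([-v if a[v] else v for v in a])  # block conflict
        conflicts += 1
        if conflicts >= conflict_budget:
            return "restart"
    return None                          # no model: unsatisfiable

def solve_with_restarts(clauses, num_vars, conflict_budget=2):
    """Restart from decision level zero whenever the budget is hit,
    retaining all learned clauses across restarts."""
    learned = []
    while True:
        outcome = search(clauses, learned, num_vars, conflict_budget)
        if outcome != "restart":
            return outcome               # a model (dict) or None
```

Because every conflict blocks a distinct assignment, the retained clauses guarantee progress across restarts, so the loop terminates; this is the same reason unlimited restarts do not compromise completeness in practice.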


Journal of Artificial Intelligence Research 22 (2004). Copyright AI Access Foundation. All rights reserved.