We begin this section by showing that CL proofs, irrespective of the learning scheme, branching strategy, or restarts used, can be efficiently simulated by RES. In the reverse direction, we show that CL, with a slight variation and with unlimited restarts, can efficiently simulate RES in its full generality. The variation relates to the variables one is allowed to branch upon.
Proof. Given a CL proof π of F, a RES proof can be constructed by sequentially deriving all clauses that π learns, which includes the empty clause Λ. From Proposition 4, all these derivations are trivial and hence require at most n steps each. Consequently, the size of the resulting RES proof is at most n · size(π). Note that since we derive the clauses of π individually, restarts in π do not affect the construction.
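The trivial derivations invoked above can be made concrete. The following is a minimal sketch, not the paper's construction: clauses are frozensets of integer literals (a hypothetical encoding), and a learned clause is recovered by resolving the falsified clause against the reason clause of each implied literal, one resolution per variable, hence at most n steps.

```python
def resolve(c1, c2, var):
    """Resolve c1 (containing var) with c2 (containing -var)."""
    assert var in c1 and -var in c2
    return (c1 - {var}) | (c2 - {-var})

def trivial_derivation(falsified, implied):
    """Derive a learned clause by a trivial (linear) derivation:
    resolve the falsified clause with the reason clause of each
    implied literal, in reverse propagation order. Each step
    eliminates one variable, so there are at most n steps."""
    clause = falsified
    steps = []
    for lit, reason in implied:
        if -lit in clause:
            clause = resolve(reason, clause, lit)
            steps.append(clause)
    return clause, steps

# Example: decision -x1 makes (x1 v x2) imply x2, falsifying (x1 v -x2).
# One resolution step recovers the learned clause (x1).
learned, steps = trivial_derivation(
    frozenset({1, -2}),        # falsified clause (x1 v -x2)
    [(2, frozenset({1, 2}))],  # x2 implied, with reason clause (x1 v x2)
)
```

Concatenating one such derivation per learned clause yields the claimed RES proof of size at most n times the size of the CL proof.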
Of course, such a relaxation is useless in ordinary DPLL; there is no benefit in branching on a variable that doesn't even appear in the residual formula. However, with clause learning, such a branch can lead to an immediate conflict and allow one to learn a key conflict clause that would otherwise have not been learned. We will use this property to show that RES can be efficiently simulated by CL-- with enough restarts.
We first state a generalization of Lemma 3. CL-- can, by definition, do all that usual CL can, and is potentially stronger. The simulation of CL by RES can in fact be extended to CL-- as well. The proof goes exactly as the proof of Lemma 3 and uses the easy fact that Proposition 4 doesn't change even when one is allowed to branch on variables that are already set. This gives us:
Proof. Let π be a RES proof of F of size s. Assume without loss of generality, as in the proof of Lemma 1, that π does not contain a derived clause Ci whose strict subclause C'i can be derived by resolving two clauses occurring previously in π. The proof of this lemma is very similar to that of Lemma 1. However, since we do not have trace variables to allow us to simulate each resolution step individually and independently, we use explicit restarts.
Viewing π as a sequence of clauses, its last two elements must be a literal, say v, and Λ. Let S = π \ (F ∪ {v, Λ}). Let (C1, C2, ..., Ck) be the subsequence of π that has precisely the clauses in S. Note that Ci = ¬v for some i, 1 ≤ i ≤ k. For convenience, define an extended branching sequence to be a branching sequence in which certain places, instead of being literals, can be marked as restart points. Let σ be the extended branching sequence consisting of all literals of C1, followed by a restart point, followed by all literals of C2, followed by a second restart point, and so on up to Ck. We claim that σ induces a CL-- proof of F using any non-redundant learning scheme. To prove this, we show by induction that after the ith restart point in σ, the CL-- procedure has learned clauses C1, C2, ..., Ci and is at decision level zero.
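The extended branching sequence described above can be generated mechanically from the derived-clause sequence (C1, ..., Ck). A minimal sketch, assuming integer literals and a hypothetical "RESTART" string as the restart-point marker (branching on a literal here sets it to FALSE, matching the convention in the text):

```python
def extended_branching_sequence(derived_clauses):
    """Build the extended branching sequence: all literals of C1,
    then a restart point, then all literals of C2, then a restart
    point, and so on up to Ck."""
    sigma = []
    for clause in derived_clauses:
        sigma.extend(clause)     # branch on each literal of Ci in turn
        sigma.append("RESTART")  # hypothetical marker for a restart point
    return sigma

# Two derived clauses, C1 = (x1 v x2) and C2 = (x3):
sigma = extended_branching_sequence([[1, 2], [3]])
```

The sequence has one entry per literal occurrence plus k restart markers, which is where the size and restart bounds of the lemma come from.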
The base case for the induction, i = 0, is trivial: no clauses have been learned and the clause learning procedure is at decision level zero. Suppose the inductive claim holds after the (i-1)st restart point in σ. Let Ci = (x1 ∨ x2 ∨ ... ∨ xl). Ci must have been derived in π by resolving two clauses (A ∨ y) and (B ∨ ¬y) coming from F ∪ {C1, C2, ..., Ci-1}, where Ci = (A ∨ B). Continuing to branch according to σ until just before the ith restart point makes the CL-- procedure set all of x1, x2, ..., xl to FALSE. Note that when all literals appearing in A and B are distinct, the last branch, on xl, is on a variable that is already set because of unit propagation. CL--, however, allows this. At this stage, unit propagation using (A ∨ y) and (B ∨ ¬y) implies y as well as ¬y, leading to a conflict. The conflict graph consists of the ¬xj's, 1 ≤ j ≤ l, as the decision literals, y and ¬y as implied literals, and Λ. The only new conflict clause that can be learned from this very simple conflict graph is Ci. Thus, Ci is learned using any non-redundant learning scheme and the ith restart is executed, as dictated by σ. At this stage, the CL-- procedure has learned clauses C1, C2, ..., Ci, and is at decision level zero. This completes the inductive step.
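One inductive step can be replayed with a toy unit propagator (a sketch with a hypothetical clause encoding, not the paper's procedure): with (A ∨ y) and (B ∨ ¬y) known and the literals of Ci = (A ∨ B) branched to FALSE, propagation forces y and then falsifies (B ∨ ¬y), and the clause read off the decision literals of the conflict graph is exactly Ci.

```python
def unit_propagate(clauses, trues):
    """Exhaustive unit propagation. `trues` is the set of literals
    assigned TRUE; returns (assignment, falsified clause or None)."""
    assignment = set(trues)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                      # clause already satisfied
            open_lits = [l for l in clause if -l not in assignment]
            if not open_lits:
                return assignment, clause     # conflict: clause falsified
            if len(open_lits) == 1:
                assignment.add(open_lits[0])  # unit: force last literal
                changed = True
    return assignment, None

# Ci = (x1 v x2), derived from (x1 v y) and (x2 v -y); encode x1=1, x2=2, y=3.
known = [frozenset({1, 3}), frozenset({2, -3})]
decisions = {-1, -2}                         # branch x1 and x2 to FALSE
assignment, conflict = unit_propagate(known, decisions)
learned = frozenset(-d for d in decisions)   # clause learned from the
                                             # conflict graph: (x1 v x2) = Ci
```

Here (x1 ∨ y) propagates y, which would already have forced x2 true; the second branch is thus on an already-set variable, precisely the relaxation CL-- permits, and the resulting conflict yields Ci as the learned clause.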
The inductive proof above shows that when the CL-- procedure has finished with the kth restart in σ, it will have learned all clauses in S. Adding to these the initial clauses of F that are already known, the procedure will have as known clauses ¬v as well as the two unit or binary clauses used to derive v in π. These immediately generate Λ in the residual formula by unit propagation using variable v, leading to a conflict at decision level zero, thereby concluding the clause learning procedure and finishing the CL-- proof. The bounds on the size of this proof and the number of restarts needed follow immediately from the definition of σ.
Combining Lemma 4 with Proposition 5, we get
Note that Baptista and Silva (2000) showed that CL together with restarts is complete. Our theorem makes a much stronger claim about a slight variation of CL, namely CL--: with enough restarts, this variation can always find proofs that are as short as those of RES.