next up previous
Next: Learning and Relevance Up: Boolean Satisfiability Previous: Boolean Satisfiability


Unit Propagation: The Inner Loop

Figure 1: Fraction of CPU time spent in unit propagation

When the DPLL algorithm 2.2 is implemented and run on practical problems, the bulk of the running time is spent in unit propagation. As an example, Figure 1 shows the fraction of time spent by ZCHAFF on a variety of microprocessor test and verification examples made available by Velev (http://www.ece.cmu.edu/~mvelev). As the problems become more difficult, an increasing fraction of the computational resources is devoted to unit propagation. For this reason, much early work on improving the performance of DPLL focused on improving the speed of unit propagation.

Within the unit propagation procedure 2.3, the bulk of the time is spent identifying clauses that propagate; in other words, clauses that are not satisfied by the partial assignment and contain at most one unvalued literal:

Observation 2.4   Efficient implementations of DPLL typically spend the bulk of their effort searching for clauses satisfying the conditions required for unit propagation.
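As a deliberately naive illustration of the condition in question, the test can be written out directly; the function name and data layout below are our own conventions (literals as signed integers, in the style of the DIMACS format), not drawn from any particular solver:

```python
def clause_status(clause, assignment):
    """Classify a clause under a partial assignment.

    clause: list of nonzero ints; literal v is the integer v, and its
            negation is -v.
    assignment: dict mapping a variable (positive int) to True/False
            for the variables valued so far.
    Returns ('satisfied', None), ('unit', forced_literal),
            ('conflict', None), or ('open', None).
    """
    unvalued = []
    for lit in clause:
        var = abs(lit)
        if var not in assignment:
            unvalued.append(lit)
        elif assignment[var] == (lit > 0):
            return ('satisfied', None)   # some literal is already true
    if not unvalued:
        return ('conflict', None)        # every literal is false
    if len(unvalued) == 1:
        return ('unit', unvalued[0])     # the one unvalued literal is forced
    return ('open', None)
```

Rescanning every clause this way after each assignment is exactly the naive scheme that the techniques described below improve on.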

Before we go on to examine the techniques that have been used to speed unit propagation in practice, let us remark that other implementations of SAT solvers have similar properties. Nonsystematic solvers such as WSAT [Selman, Kautz, & Cohen, 1993], for example, spend the bulk of their time looking for clauses containing no satisfied or unvalued literals (or, equivalently, maintaining the data structures needed to make such search efficient). We can generalize Observation 2.4 to get:

Observation 2.5   Efficient implementations of SAT solvers typically spend the bulk of their effort searching for clauses satisfying specific syntactic conditions relative to a partial or complete truth assignment.

While the focus of our survey is on systematic methods, we remark that because of the similarity of the techniques used in DPLL and in WSAT, techniques that speed the inner loop of one are likely to speed the inner loop of the other as well.

That said, let us describe the series of ideas that have been employed in speeding the process of identifying clauses leading to unit propagations:

  1. After binding a variable $v$, examine each clause to determine whether or not it satisfies the conditions of Procedure 2.3.

  2. Slightly more sophisticated is to restrict the search for a suitable clause to those clauses $c\in C$ that include $\neg v$ as one of the disjuncts (assuming that $v$ has been assigned the value true). After all, if $v$ appears in $c$, $c$ is satisfied after $v$ is set to true; if $v$ is not mentioned in $c$ there can be no change in $c$'s ability to unit propagate when $v$'s value is set.

  3. When we set $v$ to true above, as we examine clauses containing $\neg v$, we have to walk each such clause to determine just which literals are satisfied (if any) and which are still unbound. It is more efficient to keep a record, for each clause $c$, of the number $s(c)$ of satisfied and the number $u(c)$ of unbound literals.

    In order to keep these counts current when we set a variable $v$ to true, we need to increment $s(c)$ and decrement $u(c)$ for each clause where $v$ appears, and to simply decrement $u(c)$ for each clause where $\neg v$ appears. If we backtrack and unset $v$, we need to reverse these adjustments.

    Compared to the previous approach, we need to examine four times as many clauses (those where $v$ appears with either sign, and both when $v$ is set and unset), but each examination takes constant time instead of time proportional to the clause length. If the average clause length is greater than four, this approach, due to Crawford and Auton [Crawford & Auton, 1996], will be more effective than its predecessor.

  4. Currently, the most efficient scheme for searching for unit propagations is the watched literals family of implementations [Moskewicz, Madigan, Zhao, Zhang, & Malik, 2001; Zhang & Stickel, 2000]. For each clause $c$, we identify two ``watched'' literals $l_1(c)$ and $l_2(c)$; the basic idea is that as long as these two literals are both either unvalued or satisfied, the clause cannot produce a unit propagation.

    It is only when one of the watched literals is set to false that the clause must be examined in detail. If there is another unset (and unwatched) literal, we watch it. If there is a satisfied literal in the clause, we need do nothing. If there is no satisfied literal, the clause is ready to unit propagate.

    If the average clause length is $l$, then when we set a variable $v$ (say to true), the probability is approximately $2/l$ that we will need to analyze a clause in which $\neg v$ appears, and the work involved is proportional to the length of the clause. So the expected amount of work involved is twice the number of clauses in which $\neg v$ appears, an improvement on the previous methods. In fact, the approach is somewhat more efficient than this cursory analysis suggests because the adjustment of the watched literals tends to favor watching those that are set deep in the search tree.
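To make the watched-literal bookkeeping concrete, here is a hedged sketch of the scheme. The class, its method names, and the convention that the watches occupy the first two positions of each clause are our own illustrative choices, not ZCHAFF's actual implementation; unit clauses, conflict handling, and backtracking are omitted:

```python
class WatchedClauses:
    """Toy two-watched-literals store. Literal v is the int v; -v its negation."""

    def __init__(self, clauses):
        # Invariant: positions 0 and 1 of every clause hold its watched literals.
        self.clauses = [list(c) for c in clauses]   # clauses of length >= 2
        self.watches = {}                           # literal -> clause indices
        for i, c in enumerate(self.clauses):
            for lit in c[:2]:
                self.watches.setdefault(lit, []).append(i)

    def _value(self, lit, assignment):
        var = abs(lit)
        if var not in assignment:
            return None                             # unvalued
        return assignment[var] == (lit > 0)

    def assign(self, var, value, assignment):
        """Record var = value; return the literals forced by unit propagation."""
        assignment[var] = value
        false_lit = -var if value else var          # the literal made false
        forced = []
        watchers = self.watches.get(false_lit, [])
        self.watches[false_lit] = []
        for i in watchers:
            c = self.clauses[i]
            if c[0] == false_lit:                   # keep the false watch at c[1]
                c[0], c[1] = c[1], c[0]
            if self._value(c[0], assignment) is True:
                self.watches[false_lit].append(i)   # satisfied; nothing to do
                continue
            for j in range(2, len(c)):              # look for a replacement watch
                if self._value(c[j], assignment) is not False:
                    c[1], c[j] = c[j], c[1]
                    self.watches.setdefault(c[1], []).append(i)
                    break
            else:                                   # none found: clause is unit
                self.watches[false_lit].append(i)
                if self._value(c[0], assignment) is None:
                    forced.append(c[0])             # (c[0] false would be a conflict)
        return forced
```

Note that unsetting a variable on backtrack requires no adjustment of the watch lists at all, which is one reason this scheme outperforms the counter-based approach above.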

Before we move on to discuss learning in Boolean satisfiability, let us remark briefly on the so-called ``pure literal'' rule. To understand the rule itself, suppose that we have a theory $T$ and some partial variable assignment $P$. Suppose also that while a literal $q$ appears in some of the clauses in $T$ that are not yet satisfied by $P$, the negation $\neg q$ does not appear in any such clauses. Now while $q$ may not be a consequence of the partial assignment $P$, we can clearly set $q$ to true without removing any solutions from the portion of the search space that remains.
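The check the rule requires can be sketched in a few lines. This naive scan is our own illustration: it recomputes from scratch, at a cost proportional to the size of the formula, what an incremental implementation would maintain as per-literal counts:

```python
def pure_literals(clauses, assignment):
    """Return the literals that are pure relative to the partial assignment.

    A literal q is pure if q occurs in some clause not yet satisfied by
    the assignment while -q occurs in no such clause.
    """
    seen = set()
    for clause in clauses:
        # ignore clauses the partial assignment already satisfies
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue
        for l in clause:
            if abs(l) not in assignment:
                seen.add(l)
    return {l for l in seen if -l not in seen}
```

Each literal so identified can then be set to true without pruning any solutions from the remaining search space.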

The pure literal rule is not generally included in implementations of either the DPLL or the unit propagation procedure because it is relatively expensive to work with. Counts of the number of unsatisfied clauses containing each literal and its negation must be maintained at all times and checked to see whether any literal has become pure. In addition, we will see in Section 2.3 that many branching heuristics obviate the need for the pure literal rule.


Matt Ginsberg 2004-02-19