

Branching Heuristics

Let us now turn to an examination of the progress that has been made in the development of effective heuristics for the selection of branch variables in DPLL. As discussed in the introduction, we focus not on specific heuristics that work in selected domains, but on general ideas that attempt to exploit the structure of the axiomatization directly.

Prior to the development of successful learning techniques, branching heuristics were the primary method used to reduce the size of the search space. It seems likely, therefore, that the role of branching heuristics will change significantly for algorithms that prune the search space using learning. While the heuristics used in ZCHAFF appear to be a first step in this direction, very little is known about this new role.

Initially, however, the primary criterion for the selection of a branch variable was to pick one that would enable a cascade of unit propagations; the result of such a cascade is a smaller and more tractable subproblem.

The first rule based on this idea is called the MOMS rule, which branches on the variable that has Maximum Occurrences in clauses of Minimum Size [Crawford & Auton, 1996; Dubois, Andre, Boufkhad, & Carlier, 1993; Hooker & Vinay, 1995; Jeroslow & Wang, 1990; Pretolani, 1993]. MOMS provides a rough but easily computed approximation to the number of unit propagations that a particular variable assignment might cause.
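As a concrete illustration, the following Python sketch computes one common variant of the MOMS score, assuming clauses are represented as lists of DIMACS-style signed integers; the exact weighting differs across the solvers cited above, and the constant k here is an illustrative choice rather than a prescribed one.

from collections import Counter

def moms_branch_variable(clauses, k=10.0):
    """Pick a branch variable by the MOMS rule: maximum occurrences in
    clauses of minimum size.  The weighting constant k is illustrative,
    not a value prescribed by the cited solvers."""
    min_size = min(len(c) for c in clauses)
    smallest = [c for c in clauses if len(c) == min_size]

    pos, neg = Counter(), Counter()
    for clause in smallest:
        for lit in clause:
            (pos if lit > 0 else neg)[abs(lit)] += 1

    # One common MOMS scoring function: favor variables that occur often,
    # and with both polarities, in the shortest clauses.
    def score(v):
        return (pos[v] + neg[v]) * 2 ** k + pos[v] * neg[v]

    return max(set(pos) | set(neg), key=score)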

Alternatively, one can use a ``unit propagation rule'' [Crawford & Auton, 1996; Freeman, 1995], and compute the exact number of propagations that would be caused by a branching choice. Given a branching candidate $v_{i}$, the variable is separately fixed to true and to false and the unit propagation procedure is executed for each choice. The precise number of unit propagations caused is then used to evaluate possible branching choices. Unlike the MOMS heuristic, this rule is exact in its measurement of the number of unit propagations caused by a potential variable assignment. Unfortunately, it is also considerably more expensive to compute, since unit propagation itself must be run for each candidate. This led to the adoption of composite approaches [Li & Anbulagan, 1997] where MOMS is used to identify a small number of branching candidates, each of which is then evaluated exactly using the more expensive unit propagation heuristic. On randomly generated problems, the composite technique outperforms either heuristic in isolation.
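A minimal sketch of this evaluation, under the same clause representation as above and with a naive (non-watched-literal) propagator, might look as follows; summing the two propagation counts is only one simple way of combining them, not the specific combination used by the cited solvers.

def unit_propagate(clauses, assignment):
    """Repeatedly assign forced literals; return (assignment, propagations),
    or (None, propagations) if a clause is falsified.  A naive sketch, not
    an efficient watched-literal implementation."""
    assignment = dict(assignment)
    propagations = 0
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                value = assignment.get(abs(lit))
                if value is None:
                    unassigned.append(lit)
                elif value == (lit > 0):
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:                 # every literal false: conflict
                return None, propagations
            if len(unassigned) == 1:           # unit clause: assignment is forced
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                propagations += 1
                changed = True
    return assignment, propagations

def unit_propagation_score(clauses, assignment, var):
    """Evaluate a branching candidate by the propagations it causes when
    fixed to true and when fixed to false."""
    _, if_true = unit_propagate(clauses, {**assignment, var: True})
    _, if_false = unit_propagate(clauses, {**assignment, var: False})
    return if_true + if_false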

Another strategy is to branch on variables that are likely to be backbone variables [Dubois & Dequen, 2001]. A backbone literal (also often referred to as a unary prime implicate of a theory) is one that must be true in all solutions to a given problem. Given a problem $C$ and a partial assignment $P$, the backbone heuristic attempts to branch on variables that are backbones of the subset of those clauses in $C$ that are satisfied by $P$; the likelihood that any particular variable is a backbone literal is approximated by counting the appearances of that literal in the satisfied clauses in $C$. This heuristic outperforms those discussed in the previous paragraphs.
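In rough outline, and again under the representation assumed above, the counting step of the backbone heuristic can be sketched as follows; the actual heuristic of Dubois and Dequen uses a more refined weighting than this simple count.

from collections import Counter

def backbone_branch_literal(clauses, assignment):
    """Approximate the likelihood that a literal is a backbone of the clauses
    already satisfied by the partial assignment, by counting its occurrences
    in those clauses, and branch on the most frequent one."""
    counts = Counter()
    for clause in clauses:
        if not any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                              # clause not yet satisfied
        for lit in clause:
            if abs(lit) not in assignment:        # only unvalued variables are candidates
                counts[lit] += 1
    return counts.most_common(1)[0][0] if counts else None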

The heuristics described thus far were developed when the community's research emphasis was focused on the solution of randomly generated satisfiability problems. The development of bounded learning methods enabled solvers to address the issue of thrashing, and caused a natural shift in focus toward more structured, realistic problems. There are no formal studies comparing the previously discussed heuristics on structured problems, and the value of the studies that do exist is reduced because all of the implementations were of DPLL in isolation and without the learning techniques that have since proved so important. Branching techniques and learning are deeply related, and the addition of learning to a DPLL implementation will have a significant effect on the effectiveness of any of these branching strategies. As new clauses are learned, the number of unit propagations an assignment will cause can be expected to vary; the reverse is also true in that the choice of branch variable can affect which clauses the algorithm learns. A formal comparison of branching techniques' performance on structured problems and in the presence of learning would be extremely useful.

Branching heuristics that are designed to function well in the context of a learning algorithm generally try to branch on variables about which things have been learned recently. This tends to allow the implementation to keep ``making progress'' on a single section of the search space as opposed to moving from one area to another; an additional benefit is that existing nogoods tend to remain relevant, avoiding the inefficiencies associated with losing the information present in nogoods that become irrelevant and are deleted. In ZCHAFF, for example, a count is maintained of the number of times each literal occurs in the theory being solved. When a new clause is added, the count associated with each literal in the clause is incremented. The branch heuristic then selects a variable that appears in as many clauses as possible. By periodically dividing all of the counts by a constant factor, a bias is introduced toward branching on variables that appear in recently learned clauses. Like the MOMS rule, this rule is inexpensive to calculate.
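The following sketch captures the counting scheme just described; the decay interval and divisor are illustrative values, not those actually used by ZCHAFF.

from collections import Counter

class ActivityHeuristic:
    """Per-literal counters initialized from the input theory, bumped when a
    clause is learned, and periodically divided by a constant so that
    recently learned clauses dominate the branching choice."""

    def __init__(self, clauses, decay_interval=256, decay_factor=2.0):
        self.count = Counter(lit for clause in clauses for lit in clause)
        self.decay_interval = decay_interval      # illustrative values, not
        self.decay_factor = decay_factor          # those used by ZCHAFF
        self.learned = 0

    def on_learned_clause(self, clause):
        for lit in clause:
            self.count[lit] += 1
        self.learned += 1
        if self.learned % self.decay_interval == 0:
            for lit in self.count:                # periodic decay biases the
                self.count[lit] /= self.decay_factor  # choice toward recent clauses

    def pick_branch_literal(self, assignment):
        unassigned = [l for l in self.count if abs(l) not in assignment]
        return max(unassigned, key=lambda l: self.count[l]) if unassigned else None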

The heuristic used in BERKMIN [Goldberg & Novikov, 2002] builds on this idea but responds more dynamically to recently learned clauses. The BERKMIN heuristic prefers to branch on variables that are unvalued in the most recently learned clause that is not yet satisfied, with a ZCHAFF-like heuristic used to break ties.
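Reusing the ActivityHeuristic sketch above for tie-breaking, the BERKMIN-style choice might be sketched as follows; falling back to the activity counts when every learned clause is satisfied is an assumption of this sketch rather than a detail taken from the cited paper.

def berkmin_branch_literal(learned_clauses, assignment, activity):
    """Scan learned clauses from most to least recent; in the first one that
    is not yet satisfied, branch on the unvalued literal with the highest
    activity count."""
    for clause in reversed(learned_clauses):      # most recently learned first
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                              # clause already satisfied
        unvalued = [l for l in clause if abs(l) not in assignment]
        if unvalued:
            return max(unvalued, key=lambda l: activity.count[l])
    return activity.pick_branch_literal(assignment)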

All told, there are many competing branching heuristics for satisfiability solvers, and there is still much to be done in evaluating their relative effectiveness. The most interesting experiments will be done using implementations that learn, and on realistic, structured problems as opposed to randomly generated ones.

