
Computation Time

 

We have now covered the key questions we set out to address: what are the relative effects of alternative search-control strategies on search-space size, and, in particular, how can we reconcile the apparently conflicting approaches of LCFR and ZLIFO? We concluded that LCFR-DSep combines the main search-space-reducing advantages of these two strategies: LCFR's least-cost selection mechanism, at least for forced flaws, and ZLIFO's delay of separable threats. A final question concerns the price one has to pay to use LCFR-DSep--or, for that matter, any of the alternative strategies. To achieve a reduction in search-space size, is it necessary to spend vastly more time in processing? Or do these strategies pay for themselves?
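To make the combination concrete, the following is a minimal sketch of LCFR-DSep's selection step, assuming each flaw records whether it is a separable threat and its current repair cost (the number of refinements that would resolve it). The field names and structure are illustrative assumptions, not the authors' implementation.

```python
def select_flaw(flaws):
    """LCFR-DSep sketch: delay separable threats until nothing else
    remains, then pick the flaw with the least repair cost."""
    # DSep component: separable threats are set aside if possible.
    non_separable = [f for f in flaws if not f["separable_threat"]]
    candidates = non_separable if non_separable else flaws
    # LCFR component: least repair cost wins, so forced flaws
    # (cost 1) and unresolvable flaws (cost 0) are handled first.
    return min(candidates, key=lambda f: f["repair_cost"])

flaws = [
    {"name": "open-cond-1", "separable_threat": False, "repair_cost": 3},
    {"name": "threat-sep",  "separable_threat": True,  "repair_cost": 1},
    {"name": "open-cond-2", "separable_threat": False, "repair_cost": 1},
]
print(select_flaw(flaws)["name"])  # the forced open condition, not the separable threat
```

Note that the repair costs themselves must be computed for every candidate flaw at every node, which is the overhead the timing experiments below are designed to measure.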

To answer these questions, we collected timing data on all our experiments. Figures gif and gif give this data for the basic problems, for both the experiments run with the node limit and those run with the time limit. (As detailed in Appendix A, the results for the experiments with the node limit and the time limit were very similar.) Because we saw little influence of precondition ordering on the basic problems, we analyze only the data for the default precondition ordering. We show one graph with all the strategies, and another that includes only the ``leading strategies'', to make it possible to see the distinctions among them.

  

Figure: Basic Problems: Aggregate Computation Time Performance for Leading Strategies

  

Figure: Basic Problems: Aggregate Computation Time Performance

The timing data show that LCFR-DSep does, by and large, pay for its own overhead on the basic problems by generating smaller search spaces (and therefore having to process fewer nodes). When run with a time limit, LCFR-DSep's time performance is almost identical with ZLIFO's, despite the fact that repair cost computations are more expensive than the stack-popping of a LIFO strategy. When run with a node limit, LCFR-DSep does show worse time performance than ZLIFO in aggregate, but still performs markedly better than most of the other strategies. The change in relative performance results from the cases in which both strategies fail at the node limit: LCFR-DSep takes longer to generate 10,000 nodes.

Another interesting observation is that DSep-LC has the best time performance of all on the basic problem set. This should perhaps not be a surprise, because DSep-LC closely approximates LCFR-DSep. It differs primarily in its preference for nonseparable threats, which in any case will tend to have low repair costs. Whenever a node includes a nonseparable threat, DSep-LC can quickly select that threat, without having to compute repair costs. This speed advantage outweighs the cost of processing the extra nodes it sometimes generates.
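The per-node saving can be sketched as follows. This is an illustrative sketch under the same assumed flaw representation as above (the field names and the cost-function interface are ours, not the authors'): when a nonseparable threat is present, DSep-LC returns it outright, and the repair-cost function is never called.

```python
def dsep_lc_select(flaws, compute_cost):
    """DSep-LC sketch: take any nonseparable threat immediately;
    otherwise fall back to least-cost selection, still delaying
    separable threats."""
    # Fast path: a nonseparable threat is selected without computing
    # any repair costs.
    for f in flaws:
        if f["is_threat"] and not f["separable"]:
            return f
    # Slow path: least repair cost among non-delayed flaws.
    non_sep = [f for f in flaws if not (f["is_threat"] and f["separable"])]
    candidates = non_sep if non_sep else flaws
    return min(candidates, key=compute_cost)

cost_calls = []
def compute_cost(f):
    cost_calls.append(f["name"])  # track how often costs are computed
    return f["cost"]

flaws = [
    {"name": "open-cond", "is_threat": False, "separable": False, "cost": 2},
    {"name": "ns-threat", "is_threat": True,  "separable": False, "cost": 5},
]
picked = dsep_lc_select(flaws, compute_cost)
print(picked["name"], len(cost_calls))  # nonseparable threat chosen, zero cost computations
```

Since nonseparable threats tend to have low repair costs anyway, this fast path usually agrees with what LCFR-DSep would have chosen, which is why DSep-LC approximates it so closely.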

Figures gif-gif provide the timing data for the Trains and Tileworld domains. Here there are no real surprises. The computation times closely parallel the sizes of the search spaces generated: the strategies that generate the smallest search spaces are also the fastest. With the Trains problems, we again see that DSep-LC can serve as a good approximation technique for LCFR-DSep. Although it generates more nodes than LCFR-DSep, it is somewhat faster.

  

Figure: Tileworld Problems: Aggregate Computation Time Performance for Leading Strategies

  

Figure: Tileworld Problems: Aggregate Computation Time Performance for Leading Strategies with Reversed Precondition Insertion

  

Figure: Trains Problems: Aggregate Computation Time Performance for Leading Strategies

  

Figure: Trains Problems: Aggregate Computation Time Performance for Leading Strategies with Reversed Precondition Insertion


