
Discussion

In summary, we have introduced a quantum search algorithm and evaluated its average behavior on a range of small search problems. It appears to concentrate amplitude into solution states exponentially more than directly evaluating and measuring a quantum superposition of potential solutions. Moreover, this method exhibits the same transition behavior, with its associated concentration of hard problems, as seen with many classical search methods, and thus extends the range of methods to which this phenomenon applies. More importantly, this indicates the algorithm effectively exploits the same structure of search problems as, say, classical backtrack methods to prune unproductive search directions. It is thus a major improvement over simple applications of quantum computing to search problems that behave essentially like classical generate-and-test, a method that completely ignores the possibility of pruning and hence does not exhibit the phase transition.

The transition behavior is readily understood because problems near the transition point have many large partial goods that do not lead to solutions [53]. Thus a relatively high proportion of paths through the lattice appear good for quite a while but eventually reach deadends. A choice of phases based on detecting nogoods cannot act on these paths until near the solution level, and hence has less opportunity to cancel their amplitude or move it to paths that do in fact lead to solutions. Hence problems with many large partial goods are likely to prove relatively difficult for any quantum algorithm that operates by distinguishing goods from nogoods of various sizes.
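To make the notion of a deadend concrete, the following minimal Python sketch (not from the paper; the toy items and nogoods are invented for illustration) enumerates the goods in a small set lattice and flags those with no good supersets:

    from itertools import combinations

    def is_good(s, nogoods):
        # A set is consistent ("good") if it contains no nogood as a subset.
        return not any(ng <= s for ng in nogoods)

    def deadends(items, nogoods, solution_size):
        # Goods below the solution level all of whose one-larger
        # supersets are inconsistent.
        result = []
        for size in range(solution_size):
            for s in map(frozenset, combinations(items, size)):
                if is_good(s, nogoods) and not any(
                        is_good(s | {x}, nogoods) for x in items - s):
                    result.append(s)
        return result

    # Toy instance: 4 items with two binary nogoods; the good {1, 3}
    # cannot be extended, while {0, 2, 3} is a solution.
    items = frozenset(range(4))
    nogoods = [frozenset(p) for p in [(0, 1), (1, 2)]]
    print(deadends(items, nogoods, solution_size=3))   # -> [frozenset({1, 3})]

Near the transition, such inextensible goods become common at large sizes, which is precisely where a nogood-detecting phase policy has little room left to act.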

There remain many open questions. In the algorithm, the division between a problem-independent mapping through the lattice and a simple problem-specific adjustment to phases allows for a range of policies for selecting the phases. It would be useful to understand the effect of different policies in the hope of improving the concentration of amplitude into solutions. For example, the phases serve two distinct jobs: first, to keep amplitude moving up along good sets rather than diffusing out to nogoods, and second, when a deadend is reached (i.e., a good set that has no good supersets), to send the amplitude at this deadend to a promising region of the search space, possibly very far from where the deadend occurred. These goals, of keeping amplitude concentrated on the one hand and sending it away on the other, are to some extent contradictory. Thus it may prove worthwhile to consider different phase choice policies for these two situations.

Furthermore, the mapping through the lattice is motivated by classical methods that incrementally build solutions by moving from sets to supersets in the lattice. Instead of using unitary maps at each step that are as close as possible to this classical behavior, other approaches could allow more significant spreading of the amplitude at intermediate levels in the lattice and only concentrate it into solutions in the last few steps. It may also prove fruitful to consider a different type of mapping based on local repair methods moving among neighbors of complete sets. In this case, sets are evaluated by the number of constraints they violate, so an appropriate phase selection policy could depend on this number rather than just on whether the set is inconsistent. These possibilities may also suggest new probabilistic classical algorithms that might be competitive with existing heuristic search methods.
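As one illustration of the latitude in phase selection, here is a sketch of two policies in Python: a binary policy that flips the phase of nogoods, and a graded policy suited to the local-repair variant, where the phase depends on the number of violated constraints. The specific graded function below is our own assumption, not a prescription from the paper:

    import cmath

    def phase_binary(is_nogood):
        # Binary policy: invert the phase of inconsistent sets,
        # leave goods untouched.
        return -1.0 if is_nogood else 1.0

    def phase_graded(violations, max_violations):
        # Graded policy: the phase varies smoothly with the number
        # of violated constraints (hypothetical functional form).
        return cmath.exp(1j * cmath.pi * violations / max_violations)

    def apply_policy(amplitudes, violations, max_violations):
        # Either policy acts as a diagonal unitary on the state vector.
        return [a * phase_graded(v, max_violations)
                for a, v in zip(amplitudes, violations)]

Note that the graded policy reduces to the binary one at the extremes: a set with no violations gets phase +1 and a maximally violated set gets -1.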

As a new example of a search method exhibiting the transition behavior, this work raises the same issues as prior studies of this phenomenon. For instance, to what extent does this behavior apply to more realistic classes of problems, such as those with the clustering inherent in situations involving localized interactions [26]? This will be difficult to check empirically due to the limitation to the small problems that are feasible for a classical simulation of this algorithm. However, the observation that this behavior persists across many classes of problems with other search methods suggests it will be widely applicable. It is also of interest to see whether other phase transition phenomena appear in these quantum search algorithms, such as those observed in optimization searches [7, 43, 55, 23]. There may also be transitions unique to quantum algorithms, for example in the required coherence time or in the sensitivity to environmental noise.

For the specific instances of the algorithm presented here, there are also some remaining issues. An important one is the cost of the mapping from one level to the next in terms of more basic operations that might be realized in hardware, although the simple structure of the matrices involved suggests this should not be too costly. The scaling behavior of the algorithm for larger cases is also of interest, and can perhaps be approached by examining the asymptotic behavior of the matrix coefficients of Eqs. 17 and 19.

An important practical question is the physical implementation of quantum computers in general [1, 49, 9], and the requirements imposed by the algorithm described here. Any implementation of a quantum computer will need to deal with two important difficulties [34]. First, there will be defects in the construction of the device. Thus, even if an ideal design exactly produces the desired mapping, occasional manufacturing defects and environmental noise will introduce errors, so we need to understand the sensitivity of the algorithm's behavior to errors in the mappings. Here the main difficulty is likely to lie in the problem-independent mapping from one level of the lattice to the next, since the choice of phases in the problem-specific part does not require high precision. In this context we should note that standard error correction methods cannot be used with quantum computers because all operations must be reversible. We also need to address the extent to which such errors can be minimized in the first place, thus placing less severe requirements on the algorithm. Particularly relevant in this respect is the possibility of drastically reducing defects in manufactured devices through atomically precise control of the hardware [16, 17, 42, 47]. There are also uniquely quantum mechanical approaches to controlling errors [5], based on partial measurements of the state, which could substantially extend the range of ideal quantum algorithms that can be implemented in practice.
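As a crude numerical probe of this sensitivity (not an analysis from the paper; the Gaussian phase-noise model and the parameters below are assumptions for illustration), one can perturb the phases of a state with small random errors and watch the overlap with the ideal state degrade:

    import numpy as np

    rng = np.random.default_rng(0)

    def with_phase_noise(state, sigma):
        # Multiply each amplitude by a random phase error of standard
        # deviation sigma radians, modeling imperfect gate operations.
        return state * np.exp(1j * rng.normal(0.0, sigma, size=state.shape))

    # Overlap of the ideal state with its noisy counterpart, averaged
    # over trials, as a rough measure of error sensitivity.
    ideal = np.full(16, 0.25)   # a uniform, normalized 16-dimensional state
    for sigma in (0.01, 0.1, 0.5):
        overlap = np.mean([abs(np.vdot(ideal, with_phase_noise(ideal, sigma))) ** 2
                           for _ in range(200)])
        print(sigma, overlap)

In this toy model the overlap decays steadily with the noise strength, giving a rough sense of the gate precision the problem-independent mapping would demand.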

The second major difficulty with constructing quantum computers is maintaining coherence of the superposition of states long enough to complete the computation. Environmental noise gradually couples to the state of the device, reducing the coherence and eventually limiting the time over which a superposition can perform useful computations [52, 8]. In effect, the coupling to the environment can be viewed as performing a measurement on the quantum system, destroying the superposition of states. This problem is particularly severe for proposed universal quantum computers that need to maintain superpositions for arbitrarily long times. In the method presented here, the number of steps is known in advance, so the algorithm could be implemented as a special-purpose search device (for problems of a given size) rather than as a program running on a universal computer. Thus a given achievable coherence time would translate into a limit on feasible problem size. To the extent that this limit can be made larger than is feasible for alternative classical search methods, the quantum search could be useful.
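The translation from coherence time to feasible problem size can be made concrete with a back-of-the-envelope sketch; the one-step-per-lattice-level count and the timing figures below are purely hypothetical:

    def max_problem_size(coherence_time, step_time, steps_per_level=1):
        # Largest problem size n whose n lattice levels complete within
        # the coherence time, assuming a fixed cost per level.
        return int(coherence_time // (step_time * steps_per_level))

    # E.g., a 1 ms coherence time with 1 us per mapping step, in nanoseconds:
    print(max_problem_size(1_000_000, 1_000))   # -> 1000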

The open question of greatest theoretical interest is whether this algorithm, or simple variants of it, can concentrate amplitude into solutions sufficiently to give a polynomial, rather than exponential, decrease in the probability of finding a solution for NP search problems with small constraints. This is especially interesting since this class of problems includes many well-studied NP-complete problems such as graph coloring and propositional satisfiability. Even if this is not so in the worst case, it may be so on average for some classes of otherwise difficult real-world problems. While it is by no means clear to what extent quantum coherence provides more powerful computational behavior than classical machines, a recent proposal for rapid factoring [48] is an encouraging indication of its capabilities.
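The practical stakes are easy to quantify: if a single run yields a solution with probability p(n), roughly 1/p(n) independent repetitions are expected before a measurement succeeds. A hypothetical comparison (the polynomial form is an assumption, not an established result for this algorithm):

    def expected_repetitions(p_solution):
        # Expected number of independent runs before a measurement
        # yields a solution, for per-run success probability p.
        return 1.0 / p_solution

    n = 30
    print(expected_repetitions(2.0 ** -n))   # exponential case: ~1.1e9 runs
    print(expected_repetitions(n ** -2.0))   # polynomial case: 900 runs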

A more subtle question along these lines is how the average scaling behaves away from the transition region of hard problems. In particular, can such quantum algorithms expand the range of polynomial scaling seen for highly underconstrained or overconstrained instances? If so, this would provide a class of problems of intermediate difficulty for which the quantum search is, on average, exponentially faster than classical methods. This highlights the importance of broadening theoretical discussions of quantum algorithms to include typical or average behaviors in addition to worst case analyses. More generally, are there any differences in the phase transition behaviors, or in their locations, compared with the usual classical methods? These questions, involving the precise location of transition points, are not well understood even for classical search algorithms. Thus a comparison with the behavior of this quantum algorithm may help shed light on the nature of the various phase transitions that seem to be associated with the intrinsic structure of the search problems rather than with specific search algorithms.

