Partial-Rule Management

This procedure (Figure 15) includes the generation of new partial rules and the removal of previously generated ones that proved to be useless.

In our implementation, we apply a heuristic that triggers the generation of new partial rules when the reward-prediction error exceeds $ \overline{e}$. In this way, we concentrate the effort to improve the categorization on those situations with larger errors in the reward prediction.

Every time a wrong prediction is made, at most $ \tau$ new partial rules are generated by combining pairs of rules from the set $ C'_{ant}(a)$. Recall that this set contains the rules that were active in the previous time step and in accordance with the executed action $ a$. Thus, these are the rules related to the situation-action pair whose reward prediction we need to improve.

The combination of two partial rules, $ w_{1} \oplus w_{2}$, is a new partial rule whose partial view includes all the features in the partial views of either $ w_{1}$ or $ w_{2}$ and whose partial command includes all the elementary actions in the partial commands of either $ w_{1}$ or $ w_{2}$. In other words, the feature set of $ w_{1} \oplus w_{2}$ is the union of the feature sets of $ w_{1}$ and $ w_{2}$, and its set of elementary actions is the union of those of $ w_{1}$ and $ w_{2}$. Note that, since both $ w_{1}$ and $ w_{2}$ are in $ C'_{ant}(a)$, they have been simultaneously active and are in accordance with the same action; thus, they cannot be incompatible (i.e., they cannot include inconsistent features or elementary actions).
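To make the operator concrete, the following minimal Python sketch implements $ \oplus$ as a set union. The `PartialRule` container, with frozen feature and action sets, is an illustrative representation of our own, not the exact data structure of the implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PartialRule:
    features: frozenset   # the partial view: a set of feature detectors
    actions: frozenset    # the partial command: a set of elementary actions

def combine(w1: PartialRule, w2: PartialRule) -> PartialRule:
    """The w1 (+) w2 operator: union of partial views and partial commands.

    Both arguments are assumed to come from C'_ant(a), so they were
    simultaneously active and agree on the executed action; hence no
    incompatibility check is needed here.
    """
    return PartialRule(w1.features | w2.features, w1.actions | w2.actions)
```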

Figure 15: Partial Rule Management procedure. The value of $ q$ is calculated in the Statistics Update procedure and $ a$ is the last executed action.

When creating partial rules, we bias the system toward combining those rules ($ w_{i}$) whose reward prediction ($ q_{w_{i}}$) is closest to the observed one ($ q$). Finally, the generation of rules lexicographically equivalent to already existing ones is not allowed.
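A possible rendering of this biased generation step, under the assumption that each rule's prediction $ q_{w}$ is available in a map `q_pred` and that set equality on `PartialRule` captures lexicographic equivalence:

```python
import itertools

def generate_rules(C_ant_prime, q_pred, q, tau, existing):
    """Sketch: combine pairs of rules from C'_ant(a) into at most tau
    new rules, favoring rules that predicted the observed reward best.

    q_pred maps each rule to its reward prediction q_w; `existing` is
    the current rule set, used to discard equivalent rules.
    """
    # Rank candidates so that rules predicting closest to the observed
    # reward q are combined first.
    ranked = sorted(C_ant_prime, key=lambda w: abs(q_pred[w] - q))
    new_rules = []
    for w1, w2 in itertools.combinations(ranked, 2):
        if len(new_rules) == tau:
            break
        w = combine(w1, w2)  # the (+) operator sketched above
        # Discard rules equivalent to already existing ones.
        if w not in existing and w not in new_rules:
            new_rules.append(w)
    return new_rules
```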

According to the categorizability assumption, only low-order partial rules are required to achieve the task at hand. For this reason, to improve efficiency, we limit the number of partial rules to a maximum of $ \mu$. However, our partial-rule generation procedure continually creates new rules (concentrating on the situations with larger error). Therefore, when new rules must be created and there is no room for them, we have to eliminate the least useful partial rules.

A partial rule can be removed if its reward prediction is too similar to that of some other rule active in the same situations.

The similarity between two rules can be measured using the normalized degree of intersection between their reward distributions and the number of times both rules are used simultaneously:

$\displaystyle similarity(w,w')=\frac{\Vert I_w \cap I_{w'}\Vert}{\max\{\Vert I_w\Vert,\Vert I_{w'}\Vert\}} \frac{U(w \oplus w')}{\min\{U(w),U(w')\}},$    

where $ U(w)$ indicates the number of times rule $ w$ is actually used.
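As a concrete reading of the formula, here is a hedged Python sketch. It assumes each reward distribution is summarized by an interval $ I_w=(low,high)$ with $ \Vert \cdot \Vert$ its length, and that the usage counts $ U$ are maintained elsewhere; these representational choices are ours, not necessarily the implementation's:

```python
def interval_overlap(i1, i2):
    """Length of the intersection of two intervals given as (low, high)."""
    return max(0.0, min(i1[1], i2[1]) - max(i1[0], i2[0]))

def similarity(I_w, I_wp, U_w, U_wp, U_joint):
    """The similarity measure above. U_w and U_wp count how often each
    rule is used; U_joint counts the time steps in which both are used
    simultaneously. Assumes non-degenerate intervals and non-zero counts.
    """
    length = lambda i: i[1] - i[0]
    overlap = interval_overlap(I_w, I_wp) / max(length(I_w), length(I_wp))
    co_usage = U_joint / min(U_w, U_wp)
    return overlap * co_usage
```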

Assessing the similarity of every pair of partial rules in the controller is too expensive; in general, it is sufficient to determine the similarity of each rule with respect to the rules from which it was generated (i.e., the rules we tried to refine when the new rule was created). Thus, based on the above similarity measure, we define the redundancy of a partial rule $ w=(w_{1} \oplus w_{2})$ as:

$\displaystyle redundancy(w)=\max\{similarity(w,w_1),similarity(w,w_2)\}.$    

Observe that, with $ w=(w_{1} \oplus w_{2})$, we have that $ w \oplus w_{1}=w$ and, since $ w$ can only be used when $ w_{1}$ is also used, $ U(w) \leq U(w_1)$. Therefore

$\displaystyle \frac{U(w \oplus w_1)}{\min\{U(w),U(w_1)\}}=\frac{U(w)}{\min\{U(w),U(w_1)\}}=\frac{U(w)}{U(w)}=1.$    

The same reasoning applies to $ w_2$ and, consequently,

$\displaystyle redundancy(w)=\max\left\{\frac{\Vert I_w \cap I_{w_1}\Vert}{\max\{\Vert I_w\Vert,\Vert I_{w_1}\Vert\}},\frac{\Vert I_w \cap I_{w_2}\Vert}{\max\{\Vert I_w\Vert,\Vert I_{w_2}\Vert\}}\right\}.$    
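Under the same interval representation assumed above, the simplified redundancy can be computed directly (reusing `interval_overlap` from the previous sketch):

```python
def redundancy(w, w1, w2, intervals):
    """Simplified redundancy of w = w1 (+) w2: since the co-usage factor
    equals 1 for a rule and its parents, only the normalized interval
    overlaps remain. `intervals` maps each rule to its reward interval.
    """
    length = lambda i: i[1] - i[0]

    def overlap_ratio(a, b):
        ia, ib = intervals[a], intervals[b]
        return interval_overlap(ia, ib) / max(length(ia), length(ib))

    return max(overlap_ratio(w, w1), overlap_ratio(w, w2))
```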

When new rules need to be created but the maximum number of rules ($ \mu$) has been reached, the partial rules with a redundancy above a given threshold ($ \lambda$) are eliminated. Since the redundancy of a partial rule can only be estimated after observing it a number of times, the redundancy of partial rules with low confidence indexes is set to 0, so that they are not removed immediately after creation.

Observe that, to compute the redundancy of a rule $ w$, we use the partial rules from which $ w$ was derived. For this reason, a rule $ w'$ cannot be removed from a controller $ C$ if there exists any rule $ w \in C$ such that $ w=w' \oplus w''$. Additionally, in this way we eliminate the higher-order useless rules first.
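Putting the pieces together, a hedged sketch of the elimination step might look as follows. The `parents` map, the `confidence` index, and the threshold `c_min` are assumed bookkeeping structures; the `protected` set implements the constraint just described, keeping any rule from which another rule was derived:

```python
def prune_rules(controller, parents, intervals, confidence, mu, lam, c_min):
    """Sketch of the elimination step, run when a new rule is needed but
    the controller has already reached mu rules. `parents` maps each
    derived rule w = w1 (+) w2 to the pair (w1, w2).
    """
    if len(controller) < mu:
        return
    # A rule that some other rule was derived from cannot be removed.
    protected = {p for w in controller if w in parents for p in parents[w]}
    for w in list(controller):
        if w not in parents or w in protected:
            continue  # keep elementary rules and parents of existing rules
        if confidence[w] < c_min:
            continue  # redundancy treated as 0: no removal right after creation
        w1, w2 = parents[w]
        if redundancy(w, w1, w2, intervals) > lam:
            controller.remove(w)
```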
