

Experiments

We show the results of applying our learning algorithm to two robotics-like simulated problems: robot landmark-based navigation and legged-robot walking. The first problem is simpler (although it includes more delayed reward), and we use it to clearly describe the workings of the algorithm. The second problem approaches a realistic robotic application, which is our long-term objective. We use the two examples to compare the performance of our learning system with that of generalizing and non-generalizing reinforcement-learning algorithms. The two problems are different enough to show the generality of the proposed learning system.


Simulated Landmark-Based Navigation

Figure 1: Landscape for the simple landmark-based navigation task. The landscape is divided into areas (the dashed ovals) in which the same subsets of landmarks are visible.
\includegraphics[width=0.75\linewidth]{images/figure_jair_1}

We address a simple simulated landmark-based navigation task in the forest-like environment shown in Figure 1. The objective for the learner is to go from the start position (marked with a cross at the bottom of the figure) to the goal position, where the food is located (marked with a cross at the top right corner of the environment). The agent can neither walk into the lake nor escape from the depicted terrain.

The agent can make use of some binary landmark (i.e., feature) detectors to identify its position in the environment and to decide which action to execute next. In the example, the landmark detectors of the agent are:

  1. Rock detector: Active when the rock is seen.
  2. Boat detector: Active when the boat is seen.
  3. Flower detector: Active when the bunch of flowers is seen.
  4. Tree detector: Active when the tree is seen.
  5. Bush detector: Active whenever a bush is seen.
  6. Water detector: Active when there is water nearby.
  7. Bird detector: Active when there is a bird flying over the agent.
  8. Cow detector: Active when there is a cow nearby.
  9. Sun detector: Active when the sun is shining.
  10. Cloud detector: Active when it is cloudy.

Of these detectors, only the first 5 are relevant for the task. The water detector is always active, and the rest of the landmark detectors become active at random. These 10 landmark detectors can differentiate between $ 2^{10}=1024$ situations.

We simplify the problem by clustering the possible positions of the learner in the environment into 7 areas (shown in Figure 1): each area includes the positions from which the same set of relevant landmarks can be seen.

As far as actions are concerned, we use three actions for the West-East movement of the robot: move to the West (denoted as $ W$), stay in the same place ($ -$), and move to the East ($ E$). The other three control movement along the North-South dimension: move to the North ($ N$), stay at the same latitude ($ -$), and move to the South ($ S$). These two independent groups of three actions can be combined, giving rise to 9 different actions (move North-West, North, North-East, etc.). We assume that when the agent executes one of these actions, it does not stop until it reaches the nearest area of the terrain in the direction of the movement. When the agent tries to move into the lake or out of the terrain, it remains in the same position. Figure 1 shows all the possible transitions between contiguous areas in the environment.
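As an illustration, the combined action set is simply the Cartesian product of the two motor groups; the following sketch (the names are ours, introduced only for illustration) enumerates the nine composite actions:

\begin{verbatim}
from itertools import product

# Two independent virtual motors: West-East and North-South.
WEST_EAST = ("W", "-", "E")
NORTH_SOUTH = ("N", "-", "S")

# The 9 composite actions are the Cartesian product of the two groups.
ACTIONS = list(product(WEST_EAST, NORTH_SOUTH))

print(len(ACTIONS))   # 9
print(ACTIONS[0])     # ('W', 'N') -> move North-West
\end{verbatim}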

With the landmark detectors and elementary actions just described, the maximum possible order of a rule is 12, and we can define up to $ 944784$ ( $ 3^{10}4^{2}$) syntactically different partial rules. Taking into account only the rules with one feature detector and one elementary action (which are the ones initially included in the controller), we have $ 90$ different partial rules.
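The total count can be checked by noting that, in a rule, each landmark detector can be required active, required inactive, or left out, while each virtual motor can contribute one of its three commands or be left out (this reading of the count is ours):

\begin{verbatim}
# Our reading of the rule count: 3 options per landmark detector
# (active / negated / absent) and 4 options per virtual motor
# (one of its 3 commands / absent).
num_detectors = 10
num_motors = 2

total_partial_rules = 3 ** num_detectors * 4 ** num_motors
print(total_partial_rules)   # 944784
\end{verbatim}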

The agent only receives reward (with value $ 100$) when it reaches the goal. Consequently, this is a problem with delayed reward, since the agent must propagate the information provided by the reward signal to those actions and situations not directly associated with the observation of reward.

The parameters of the partial-rule learning algorithm used for this task were $ \gamma=0.9$, $ \beta=0.99$, $ \eta=5$, $ \alpha=0.1$, $ \tau=5$, $ \mu=200$, and $ \lambda=0.95$ (see Appendix A for a detailed description of the parameters). Observe that, with a maximum number of partial rules $ \mu=200$ and an initial controller containing $ 90$ rules, little room is left for the generation of rules with order higher than 2.

Figure 2: Performance on the landmark-based navigation task. Results shown are the average over 10 runs.
\includegraphics[width=0.9\linewidth]{images/originals/figure_jair_2}
\includegraphics[width=0.4\linewidth]{images/figure_jair_2_legend}

The learning is organized as a sequence of trials. Each trial consists of placing the learner at the starting position and letting it move until the goal is reached, allowing the execution of at most 150 actions per trial. When performing optimally, only three actions are required to reach the objective from the starting position.
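The trial structure can be summarized as follows (the \texttt{env} and \texttt{agent} interfaces are hypothetical placeholders, not the actual implementation):

\begin{verbatim}
MAX_ACTIONS_PER_TRIAL = 150

def run_trial(env, agent):
    """One trial: place the learner at the start and act until the goal
    is reached or the 150-action budget is exhausted.  `env` and `agent`
    are hypothetical interfaces used only for illustration."""
    obs = env.reset_to_start()
    for step in range(1, MAX_ACTIONS_PER_TRIAL + 1):
        action = agent.select_action(obs)
        obs, reward, at_goal = env.step(action)
        agent.update(reward, obs)
        if at_goal:
            return step           # number of actions needed in this trial
    return MAX_ACTIONS_PER_TRIAL  # goal not reached within the budget
\end{verbatim}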

Figure 2 shows that, after 40 learning trials, the agent approaches the optimal behavior (represented by the flat dashed line at $ y=3$).

The dashed line in Figure 2 is the performance of an XCS on this problem. To perform this test, we used the implementation of Wilson's XCS developed by [Butz, 1999]. To make XCS work in the same search space as the partial-rule algorithm, we modified the XCS implementation so that it could deal with non-binary actions. No modifications other than parameter adjustments were introduced in the original code. The results presented here correspond to the average of 10 runs using the set of parameters that gave the best results. Namely, these parameters were: learning rate $ \alpha=0.1$, decay rate $ \gamma=0.9$, maximum number of classifiers $ \mu=200$ (the initial set, however, is empty), the genetic algorithm is applied on average every 5 time steps, the deletion experience is 5, the subsume experience is 15, the fall-off rate is 0.1, the minimum error is 0.01, the prediction threshold is 0.5, the crossover probability is 0.8, the mutation probability is 0.04, and the initial don't-care probability is $ 1/3$. The prediction and the fitness of new classifiers are initialized to 10 and the error to 0. A detailed explanation of the meaning of these parameters is provided by [Wilson, 1995] and also by the comments in the code of [Butz, 1999].

We can see that XCS reaches the same performance as the partial-rule approach, but requires about four times more trials. This difference in performance is partially explained by XCS's lack of generalization in the action space. However, this factor is not very relevant in this case, since the action space has only two dimensions. The main factor that explains the better performance of the partial-rule approach is the bias introduced by the categorizability assumption, which is not present in XCS and which, in this case, allows a more efficient learning process. XCS is more powerful than our partial-rule approach in the sense that it makes no assumption about the categorizability of the environment, whereas we assume it to be high. The result of the XCS learning process includes the identification of the degree of categorizability of the environment, whereas in our case this is, in some sense, pre-defined. The generality of XCS, however, produces a slower learning process.

If we initialize the classifiers in an XCS with a high don't-care probability and we initialize the rules in the partial-rule algorithm so that no generalization is used in the action space (i.e., so that all rules include a command for each motor), then the two systems become closer. In this case, the main (but not the only) difference between the two approaches is the assumption about the relation between the inputs and the value: while XCS assumes a linear relation, we assume the environment to be categorizable or, equivalently, we assume the value to depend on only a few of the inputs. Due to this difference, when confronted with the same problem, the two systems would learn the same policy and the same values for each action, but the values would be computed using different rules with different associated values, and this is so independently of the parameter/rule initialization used in each case. The system with the shorter learning time would be the one whose assumption is closer to reality. The results obtained in the particular example presented above show that the categorizability assumption is more valid, and our hypothesis is that this would be the case in most robotics-like applications.


Table 1: Partial execution trace for the landmark-based navigation task. Elementary action $ -$ means no movement along the corresponding dimension. At each time step $ t$, the action with the highest guess is executed. After time step 3, the goal is reached.
\begin{tabular}{cccclccc}
$ t$ & Position & $ V$ & Action & Winner Rule & $ q_w$ & $ e_w$ & Guess \\
\hline
1 & A2 & 81  & $ (W,N)$ & $ w_1=(v(Rock, \neg Boat),c(W,N))$ & $ 80.63$ & $ 1.16$ & $ 79.87$ \\
  &    &     & $ (W,-)$ & $ w_2=(v(Rock, Water),c(W))$       & $ 79.73$ & $ 2.19$ & $ 77.65$ \\
2 & A4 & 90  & $ (-,N)$ & $ w_3=(v(Boat, Tree),c(N))$        & $ 89.61$ & $ 2.04$ & $ 88.88$ \\
  &    &     & $ (E,N)$ & $ w_4=(v(Tree),c(E,N))$            & $ 90.0$  & $ 0.0$  & $ 89.86$ \\
  &    &     & $ (E,-)$ & $ w_5=(v(\neg Rock, Boat),c(E))$   & $ 86.71$ & $ 4.58$ & $ 79.56$ \\
3 & A6 & 100 & $ (E,-)$ & $ w_6=(v(\neg Bush),c(E,-))$       & $ 100.0$ & $ 0.0$  & $ 99.87$ \\
\end{tabular}


Table 1 shows the evaluation of some actions in the different situations the agent encounters on its path from the start to the goal after 50 learning trials. Analyzing this trace, we can gain some insight into how the partial-rule learning algorithm works.

For instance, at time step 1, we see that rule $ w_2=(v(Rock, Water),c(W))$ is used to determine the value of action $ (W,-)$. Since the landmark detector Water is always active, this rule is equivalent to $ w=(v(Rock),c(W))$, which is one of the rules used to generate $ w_2$. If we examine the statistics for $ w$, we find that $ q_{w}=74.70$ and $ e_{w}=15.02$. Obviously, the value distributions of $ q_w$ and $ q_{w_2}$ look different (74.70 vs. 79.73 and 15.02 vs. 2.19). This is because $ w_2$ was generated at a later stage of the learning and, thus, its statistics have been updated using only a subsample of the values used to adjust the statistics of $ w$. In this particular case, $ q_w$ has been updated 250 times, while $ q_{w_2}$ has been updated only 27 times. As learning continues, both distributions will become more similar and rule $ w_2$ will eventually be eliminated.
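The following sketch illustrates the kind of incremental update involved, assuming a constant-step update of the value and error statistics (the exact formulas are given in Appendix A; the code is only our illustration of why rules created late have seen fewer samples):

\begin{verbatim}
ALPHA = 0.1  # learning rate used in this task

class RuleStats:
    """Hypothetical container for a partial rule's statistics."""
    def __init__(self):
        self.q = 0.0       # estimated value q_w
        self.e = 0.0       # estimated error e_w
        self.updates = 0   # number of times the statistics were refreshed

def update_statistics(rule, observed_value):
    # Sketch of a constant-step incremental update (not the paper's exact
    # formulas): the error tracks the average deviation of the observed
    # values from the current value estimate.
    rule.e += ALPHA * (abs(observed_value - rule.q) - rule.e)
    rule.q += ALPHA * (observed_value - rule.q)
    rule.updates += 1
\end{verbatim}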

In Table 1, we can see that some non-optimal actions sometimes get an evaluation close to that of the optimal ones. For this reason, the agent sometimes executes non-optimal actions, and this increases the number of steps needed to reach the goal. In general, the adjustment of the statistics of the rules can solve this problem but, in this particular case, new rules need to be created to fix the situation. For instance, at time step 2, when the value of rule $ w_4$ is increased towards 90, the values of rules active at this time step and proposing actions in accordance with the action of rule $ w_4$ also converge toward 90. So, in the long term, a rule proposing just the action $ (N)$ can get a value close to 90. In the absence of more specific rules, this rule can be used to estimate the value of an action such as $ (-,N)$ and, due to the probabilistic nature of the action-selection procedure, this action can eventually be executed, delaying the agent from reaching the goal by one time step. However, the execution of $ (-,N)$ results in an error in the value prediction and, thus, in the creation of new rules to better characterize the situation. As soon as a specific rule for action $ (-,N)$ is generated, the error is no longer repeated.

At time step 3, we see that rule $ w_6=(v(\neg Bush),c(E,-))$ has a value of 100 with error 0, but the guess for this rule is $ 99.87$. This is because the maximum confidence ($ \beta $) is lower than $ 1.0$ ($ 0.99$ in this case), which makes the agent always keep a certain degree of exploration.
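The following sketch only illustrates this effect; it assumes that the guess is obtained by perturbing the rule value with noise whose magnitude grows with the rule error and with the residual uncertainty $ 1-\beta$ (the exact computation is defined in Appendix A):

\begin{verbatim}
import random

BETA = 0.99  # maximum confidence used in this task

def guess(q_w, e_w, value_span=100.0):
    # Illustrative sketch only (not the paper's exact formula): perturb the
    # rule value q_w with noise whose magnitude depends on the rule error
    # e_w and on the uncertainty (1 - BETA) that is never removed, so even
    # a rule with e_w = 0 keeps some exploration alive.
    spread = e_w + (1.0 - BETA) * value_span
    return q_w - abs(random.gauss(0.0, spread))

print(guess(100.0, 0.0))   # slightly below 100, as for rule w_6 in Table 1
\end{verbatim}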

If the agent only receives reward when the task is completely achieved, the value of each situation can be computed as $ V(s)=\gamma^{n-1}r$, with $ n$ the distance (in actions) from situation $ s$ to the target one and $ r$ the reward finally obtained. In Table 1, we can see that the situations get the correct evaluation: $ 80.63$ ($ \sim 81 = 100 \cdot 0.9^2$) for $ A2$, $ 90$ ($ = 100 \cdot 0.9$) for $ A4$, and $ 100$ for $ A6$.

Observe that this problem can be solved using only 200 partial rules out of the $ 9216$ possible situation-action combinations in this domain, so we can say that the problem is certainly categorizable. The main conclusion we can extract from this toy example is that, in a particular case in which the problem at hand was categorizable, the presented algorithm has been able to determine the relevant rules and to adjust their values (including the effect of the delayed reward) so that the optimal action can be determined for each situation.

Gait Generation for a Six-Legged Robot

We also applied our algorithm to the task of learning to generate an appropriate gait (i.e., the sequence of steps) for a six-legged robot (Figure 3). Applying the learning algorithm directly to the real robot would be possible, but dangerous: in the initial phases of the learning, the robot would fall down many times, damaging the motors. For this reason, we used a simulator during the learning and, afterwards, we applied the learned policy to the real robot.

The problem of learning to walk with a six-legged robot has been chosen by many authors as a paradigmatic robot-learning problem. For instance, [Maes and Brooks, 1990] implemented a specific method based on immediate reward to derive the preconditions for each leg to perform the step. [Pendrith and Ryan, 1996] used a simplified version of the six-legged walking problem to test an algorithm able to deal with non-Markovian state spaces, and [Kirchner, 1998] presented a hierarchical version of Q-learning to learn the low-level movements of each leg, as well as a coordination scheme between the low-level learned behaviors. [Ilg et al., 1997] introduced a learning architecture based on self-organizing neural networks, and [Kodjabachia and Meyer, 1998] proposed an evolutionary strategy to develop a neural network to control the gait of the robot. [Vallejo and Ramos, 2000] used a parallel genetic-algorithm architecture, and [Parker, 2000] described an evolutionary-computation approach in which the robot executes the best controller found up to a given moment while a new optimal controller is computed in an off-line simulation. All these algorithms are usually tested on flat terrain with the aim of generating periodic gaits (i.e., gaits in which the sequence of steps is repeated cyclically). However, for general locomotion (turns, irregular terrain, etc.), the problem of free-gait generation needs to be considered.

Figure 3: The Genghis II walking robot and its 2D simulation environment.
\includegraphics[width=0.9\linewidth]{images/figure_jair_3}

Our simulator (see Figure 3) allows the controller to command each leg of the robot in two independent degrees of freedom (horizontal and vertical), and it is able to detect when the robot is in an unstable position (in our robot, this happens when two neighboring legs are in the air simultaneously). Using this simulator, we implemented the behaviors described by [Celaya and Porta, 1996], except those in charge of the gait generation. Therefore, the task to be learned consists of deciding at every moment which legs must step (that is, leave the ground and move to an advanced position) and which must descend or stay on the ground to support and propel the body.

We defined a set of 12 feature detectors that, based on our experience with legged robots, we knew could be useful in different situations for the gait-generation task:

Considering the possible activation patterns of these 12 feature detectors, we can differentiate between $ 2^{12}=4096$ different situations.

On the action side, we work with two different elementary actions per leg: one that issues a step of the leg and another that lowers the leg until it touches the ground. Thus, the cardinality of the set of elementary actions is 12 and, at each time step, the robot issues an action containing 6 elementary actions (one per leg). We can therefore think of each leg as a virtual motor that accepts two possible values: 0 to remain in contact with the ground and 1 to perform a step.
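Under this encoding, a combined action is just a vector of six binary motor commands, as the following sketch shows (the leg numbering in the tripod example is hypothetical):

\begin{verbatim}
from itertools import product

STAY = 0   # keep the leg on the ground (support / propel the body)
STEP = 1   # lift the leg and move it to an advanced position

NUM_LEGS = 6
# A combined action assigns one command to each of the six virtual motors.
ALL_ACTIONS = list(product((STAY, STEP), repeat=NUM_LEGS))
print(len(ALL_ACTIONS))   # 64 combined actions

# One phase of a tripod gait (hypothetical leg numbering, illustration only):
tripod_phase = (STEP, STAY, STAY, STEP, STEP, STAY)
complementary_phase = tuple(1 - c for c in tripod_phase)
\end{verbatim}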

The reward signal includes two aspects: the stability of the robot, penalizing actions that place it in an unstable posture, and the advance of the robot, rewarding the forward motion of the body at each time step.

The most efficient stable gait is the tripod gait, in which two sets of three non-adjacent legs step alternately. Using this gait, the robot would obtain a reward of 0 (when one group of three legs is lifted and advanced) followed by a reward of 50 (when the legs in contact with the ground move backward as a reaction to the advance of the legs moved in the previous time step). Thus, the optimal average reward is 25.

In the experiments, the robot is set in an initial posture with all the legs in contact with the ground but in a random advance position.

Figure 4 shows the results of applying the partial-rule algorithm compared with those obtained using standard Q-learning with 4096 distinct states and 64 different actions.

For the partial-rule algorithm, we used the following set of parameters: $ \gamma=0.2$, $ \beta=0.99$, $ \eta=22$, $ \alpha=0.1$, $ \tau=150$, $ \mu=10000$, and $ \lambda=0.95$ (see Appendix A for a description of these parameters). For Q-learning, the learning rate is set to $ \alpha=0.5$ and we use an action-selection rule that performs exploratory actions with probability $ 0.1$.
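For reference, the non-generalizing baseline is standard tabular Q-learning over the full state-action table; a minimal sketch with the reported settings follows (the discount factor is assumed to be the same as in the partial-rule runs):

\begin{verbatim}
import random

NUM_STATES = 4096    # 2^12 feature-activation patterns
NUM_ACTIONS = 64     # 2^6 combined leg commands
ALPHA = 0.5          # learning rate reported for the baseline
GAMMA = 0.2          # assumed: same discount as the partial-rule runs
EPSILON = 0.1        # probability of an exploratory action

Q = [[0.0] * NUM_ACTIONS for _ in range(NUM_STATES)]

def select_action(state):
    # Epsilon-greedy action selection over the tabular estimates.
    if random.random() < EPSILON:
        return random.randrange(NUM_ACTIONS)
    row = Q[state]
    return max(range(NUM_ACTIONS), key=row.__getitem__)

def q_update(state, action, reward, next_state):
    # Standard one-step Q-learning backup.
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])
\end{verbatim}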

Figure 4: Performance of the partial-rule approach compared with standard Q-learning. Results are the smoothed average of 10 experiments.
\includegraphics[width=0.8\linewidth]{images/originals/figure_jair_4}
\includegraphics[width=0.5\linewidth]{images/figure_jair_4_legend}

Figure 5: Performance of the Q-learning algorithm with different exploration rates. The reference values 19 and 24 are the upper bounds of the performance attainable when using exploration rates 0.1 and 0.01, respectively.
\includegraphics[width=0.8\linewidth]{images/originals/figure_jair_5}
\includegraphics[width=0.75\linewidth]{images/figure_jair_5_legend}

Figure 6: Performance of our algorithm compared with Q-learning when there are irrelevant features.
\includegraphics[width=0.9\linewidth]{images/originals/figure_jair_6}
\includegraphics[width=0.5\linewidth]{images/figure_jair_6_legend}

In Figure 4, we can see that the stability subproblem (i.e., not falling down, which corresponds to getting a reward greater than zero) is learned very quickly. This is because, in the stability subproblem, we can take advantage of the generalization provided by the use of separate elementary actions: with a single rule, we can avoid executing several dangerous actions. However, the advance subproblem (i.e., getting a reward close to 25) is learned more slowly. This is because little generalization is possible and the learning system must generate very specific rules. In other words, this subproblem is less categorizable than the stability one.

As in the landmark-based navigation example discussed in the previous section, we observe that the controller contains some (slightly) over-general rules that are responsible for the non-optimal performance of the robot. However, we do not regard this as a problem, since we are more interested in efficiently learning a correct enough policy for the most frequent situations than in finding optimal behaviors for all particular cases.

Figure 7: Performance with and without the partial-rule generation procedure.
\includegraphics[width=0.9\linewidth]{images/originals/figure_jair_7}
\includegraphics[width=0.6\linewidth]{images/figure_jair_7_legend}

Figure 5 shows the performance of Q-learning over a longer run using different exploration rates. This shows that Q-learning can eventually converge to an optimal policy, but with many more iterations than our approach (about a factor of 10). Observe that a lower exploration rate allows the algorithm to achieve higher performance (around 19 with an exploration rate of 0.1 and around 24 with an exploration rate of 0.01), but over a longer period. With a careful adjustment of the exploration rate, we can combine faster initial learning with better convergence in the long term. Experiments with Q-learning using learning rates other than 0.5 showed insignificant differences with respect to the results shown here.

The advantage of our algorithm over non-generalizing ones increases in problems in which some of the sensors provide information unrelated to the task. To test this point, we set up an experiment in which 6 feature detectors that become active at random were added to the 12 initial ones. With these new features, the number of possible combinations of feature activations increases, and so does the number of states considered by Q-learning. Figure 6 compares our algorithm with Q-learning on this problem. Q-learning is not able to learn a reasonable gait strategy in the 5000 time steps shown in the figure, while the performance of the partial-rule algorithm is almost the same as before. This means that the partial-rule algorithm is able to detect which sets of features are relevant and use them effectively to determine the robot's behavior. It is remarkable that, in this case, the ratio of memory used by our algorithm with respect to that used by non-generalizing algorithms is below 0.2%. This exemplifies how the performance of non-generalizing algorithms degrades as the number of features increases, while this is not necessarily the case with the partial-rule approach.
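The memory figure can be checked with a back-of-the-envelope count, assuming the non-generalizing learner stores one value per state-action pair:

\begin{verbatim}
num_features = 12 + 6            # original detectors plus the 6 random ones
num_states = 2 ** num_features   # 262144 feature-activation patterns
num_actions = 2 ** 6             # 64 combined leg commands

table_entries = num_states * num_actions   # 16777216 state-action values
max_partial_rules = 10000                  # the mu used in these experiments

print(100.0 * max_partial_rules / table_entries)   # ~0.06 %, below 0.2 %
\end{verbatim}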

The importance of the generation of partial rules in the improvement of the categorization can be seen by comparing the results obtained for the same problem with and without this mechanism (Figure 7). The results show that the task cannot be learned using only partial rules of order 2. The only aspect of the gait-generation problem that can be learned with rules of order 2 is to avoid lifting a leg when one of its neighboring legs is already in the air. For instance, a rule such as

\[ v(In(1)) \rightarrow c(Step(2)), \]

forecasts a highly relevant negative reward and this prevents leg 2 from being raised when leg 1 is in the air.

Rules with order higher than 2 (i.e., not provided to the robot in the initial controller) are necessary, for instance, to avoid raising two neighboring legs simultaneously. A rule like

\[ v(\neg In(1)) \rightarrow c(Step(1), Step(2)) \]

becomes active when the robot evaluates any action that implies raising legs 1 and 2 at the same time. Since the value prediction of this rule is very negative and its relevance is high, the action under evaluation is discarded, preventing the robot from falling down. Similar rules have to be generated for each pair of neighboring legs. To make the robot advance, rules with even higher order need to be generated.
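The following sketch illustrates how such a rule vetoes candidate actions during action evaluation; the data structures and the numeric value are ours, introduced only for illustration:

\begin{verbatim}
def rule_fires(rule, perception, action):
    """A partial rule fires when all its feature conditions hold in the
    current perception and all the elementary actions it names are part
    of the candidate combined action (hypothetical data structures)."""
    features_ok = all(perception[f] == v for f, v in rule["v"].items())
    commands_ok = all(action[leg] == cmd for leg, cmd in rule["c"].items())
    return features_ok and commands_ok

# v(not In(1)) -> c(Step(1), Step(2)): stepping with legs 1 and 2 at the
# same time while leg 1 is still on the ground is predicted to be very bad.
rule = {"v": {"In(1)": False}, "c": {1: "Step", 2: "Step"}, "value": -50.0}

perception = {"In(1)": False}                      # leg 1 is on the ground
candidate = {1: "Step", 2: "Step", 3: "Ground",
             4: "Ground", 5: "Step", 6: "Ground"}

if rule_fires(rule, perception, candidate) and rule["value"] < 0:
    print("candidate discarded: it would raise two neighboring legs")
\end{verbatim}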

Figure 8: Performance of the partial-rule approach when learning is started from a correct rule set compared with the standard approach where rules are also learned.
\includegraphics[width=0.8\linewidth]{images/originals/figure_jair_B}
\includegraphics[width=0.7\linewidth]{images/figure_jair_B_legend}

In Figure 8, we can see the performance of the algorithm when the learning process is started from a correct rule set (i.e., a rule set learned in a previous experiment), but with all the statistics initialized to 0. This experiment lets us compare the complexity of learning only the values of the rules with that of learning the rules and their values at the same time. We can see that, when only the values of the rules need to be learned, the learning process is about twice as fast as in the normal application of the algorithm.

Figure 9: Performance of the partial-rule approach when learning a free gait.
\includegraphics[width=0.9\linewidth]{images/originals/figure_jair_8}
\includegraphics[width=0.5\linewidth]{images/figure_jair_8_legend}

In a final experiment, we issue frequent changes in the heading direction of the robot (generated randomly every 10 time steps). In this way, periodic gaits become suboptimal and the controller should produce a free gait, i.e., a gait that includes a sequence of steps without any periodic repetition.

In this case, we focus on the advance subproblem and, thus, we added some hand-crafted rules to the initial controller to prevent the robot from falling down. These rules are of the form:

if leg $ i$ is lifted then execution of action $ a$ results in value $ -50$ with confidence 1,
where $ a$ is any of the actions that lift one of the two legs that are contiguous to $ i$.
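These safety rules can be enumerated programmatically; the sketch below assumes a particular contiguity map (consistent with the leg references in the text, but the actual map follows the morphology of Genghis II) and a hypothetical rule encoding:

\begin{verbatim}
# Contiguity map used for illustration: each leg has exactly two contiguous
# legs (e.g., the legs contiguous to leg 2 are taken to be legs 1 and 4);
# the actual map follows the morphology of Genghis II.
CONTIGUOUS = {1: (2, 3), 2: (1, 4), 3: (1, 5),
              4: (2, 6), 5: (3, 6), 6: (4, 5)}

def hand_crafted_safety_rules():
    """Build the rules described above: if leg i is lifted, any action
    that also lifts a contiguous leg is predicted to yield value -50 with
    confidence 1 (hypothetical rule encoding)."""
    rules = []
    for leg, contiguous in CONTIGUOUS.items():
        for other in contiguous:
            rules.append({"v": {f"In({leg})": True},
                          "c": {other: "Step"},
                          "value": -50.0,
                          "confidence": 1.0})
    return rules

print(len(hand_crafted_safety_rules()))   # 12 rules with this contiguity map
\end{verbatim}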

The set of parameters we used in this case was: $ \gamma=0.2$, $ \beta=0.99$, $ \eta=5$, $ \alpha=0.1$, $ \tau=150$, $ \mu=10000$, and $ \lambda=0.95$.

Figure 10: The hand-programmed gait strategy (top sequence) vs. a learned one (bottom sequence). The advance position of the robot at each snapshot is indicated below each picture.
\includegraphics[width=0.8\linewidth]{images/figure_jair_9}

Figure 9 shows the average results obtained using the partial-rule learning algorithm compared with those obtained with our best hand-coded gait-generation strategy. In the figure, the horizontal dashed line shows the average performance using the best gait-generation strategy we have implemented [Celaya and Porta, 1998]. It can be seen that the learned gait-generation strategy (the increasing continuous line) produces a performance similar to that of our best hand-coded strategy and, in some cases, even outperforms it. Figure 10 shows a situation where a learned controller produces better behavior than our hand-coded one. Using the hand-coded strategy, the robot starts to walk by raising two legs (3 and 6) and, in a few time steps, it reaches a state from which the tripod gait is generated. Initially, leg 2 is more advanced than legs 1 and 4 and, in general, it is suboptimal to execute a step with a leg whose neighboring legs are less advanced than itself. In this particular case, however, this general rule does not hold. The learned strategy detects this exception and generates the tripod gait from the very beginning, resulting in a larger advance of the robot in the initial stages of the movement.
