In reinforcement learning, an agent tries to learn the optimal policy from its history of interaction with the environment. A history of an agent is a sequence of state–action–rewards:

$$\langle s_0, a_0, r_1, s_1, a_1, r_2, s_2, a_2, r_3, s_3, \ldots \rangle$$

which means that the agent was in state $s_0$ and did action $a_0$, which resulted in it receiving reward $r_1$ and being in state $s_1$; then it did action $a_1$, received reward $r_2$, and ended up in state $s_2$; then it did action $a_2$, received reward $r_3$, and ended up in state $s_3$; and so on.
We treat this history of interaction as a sequence of experiences, where an experience is a tuple

$$\langle s, a, r, s' \rangle$$

which means that the agent was in state $s$, it did action $a$, it received reward $r$, and it went into state $s'$. These experiences will be the data from which the agent can learn what to do. As in decision-theoretic planning, the aim is for the agent to maximize its value, which is usually the discounted reward.
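As a concrete illustration (not part of the book's code), experiences can be held as plain Python tuples, and the discounted reward of a history can be accumulated directly. The state names, action names, reward values, and discount below are made up for the sketch.

```python
def discounted_reward(history, gamma):
    """Return r_1 + gamma*r_2 + gamma^2*r_3 + ... for a history of
    (s, a, r, s') experiences."""
    return sum(gamma**i * r for i, (s, a, r, s1) in enumerate(history))

# Hypothetical history; the values are placeholders for illustration only.
history = [("s0", "a0", 1, "s1"), ("s1", "a1", 2, "s2"), ("s2", "a2", 3, "s3")]
print(discounted_reward(history, gamma=0.5))  # 1 + 0.5*2 + 0.25*3 = 2.75
```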
Recall that $Q^*(s, a)$, where $a$ is an action and $s$ is a state, is the expected value (cumulative discounted reward) of doing $a$ in state $s$ and then following the optimal policy.
Q-learning uses temporal differences to estimate the value of $Q^*(s, a)$. In Q-learning, the agent maintains a table of $Q[S, A]$, where $S$ is the set of states and $A$ is the set of actions. $Q[s, a]$ represents its current estimate of $Q^*(s, a)$.
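One simple way to hold such a table in Python is a dictionary indexed by (state, action) pairs; this is only an illustrative sketch, and the state and action names are placeholders.

```python
# Sketch of a tabular Q function as a dictionary indexed by (state, action).
# The state and action names are hypothetical, not from the book.
states = ["s0", "s1"]
actions = ["a0", "a1"]

# Q[(s, a)] is the current estimate of Q*(s, a); initialized to 0 here.
Q = {(s, a): 0.0 for s in states for a in actions}
```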
An experience $\langle s, a, r, s' \rangle$ provides one data point for the value of $Q[s, a]$. The data point is that the agent received the future value of $r + \gamma V(s')$, where $V(s') = \max_{a'} Q[s', a']$; this is the actual current reward plus the discounted estimated future value. This new data point is called a return. The agent can use the temporal difference equation (13.1) to update its estimate for $Q[s, a]$:

$$Q[s, a] := Q[s, a] + \alpha \left( r + \gamma \max_{a'} Q[s', a'] - Q[s, a] \right)$$
or, equivalently:

$$Q[s, a] := (1 - \alpha)\, Q[s, a] + \alpha \left( r + \gamma \max_{a'} Q[s', a'] \right).$$
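The update for a single experience can be written as one small function; the following is a minimal sketch, assuming the dictionary representation of the table sketched above and that `actions` lists the actions available in $s'$.

```python
# Minimal sketch of the temporal-difference update for one experience
# (s, a, r, s1), assuming Q is a dict indexed by (state, action) pairs.
def q_update(Q, s, a, r, s1, actions, alpha, gamma):
    future = max(Q[(s1, a1)] for a1 in actions)            # max_a' Q[s', a']
    Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])  # TD update
```

Writing the update in the $(1-\alpha)\cdot\text{old} + \alpha\cdot\text{new}$ form gives exactly the same result, since the two expressions are algebraically equal.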
Figure 13.3 shows a Q-learning controller, where the agent is acting and learning at the same time. The do command, $do(a)$, on line 17 specifies that the action $a$ is the command the controller sends to the body. The reward $r$ and the resulting state $s'$ are the percepts the controller receives from the body.
The algorithm of Figure 13.3 also maintains an array $\mathit{visits}[S, A]$, which counts the number of times action $a$ was performed in state $s$. The function $\mathit{alpha\_fun}$ computes $\alpha$ from this count; see Exercise 13.6 for a choice that often works well. When $\alpha$ is fixed, the $\mathit{visits}$ array does not need to be maintained (but it is also used for some exploration strategies; see below).
The Q-learner learns (an approximation of) the optimal $Q$-function as long as the agent explores enough, and there is no bound on the number of times it tries an action in any state (i.e., it does not always do the same subset of actions in a state).
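To make the control loop concrete, the following is a minimal Python sketch in the spirit of such a controller; it is not the code of Figure 13.3 itself. The environment object `env`, its `do` method and `current_state` attribute, the exploration function `select_action`, and the step-size function `alpha_fun` are all assumptions made for the sketch.

```python
from collections import defaultdict

def q_learning_controller(env, actions, gamma, alpha_fun, select_action, steps):
    """Sketch of a Q-learning controller that acts and learns at the same time.
    `env.do(a)` is assumed to return (reward, next_state) and `env.current_state`
    to give the starting state; `select_action(Q, s, actions)` is assumed to
    implement an exploration strategy such as epsilon-greedy."""
    Q = defaultdict(float)       # Q[(s, a)], initially 0
    visits = defaultdict(int)    # number of times action a was done in state s
    s = env.current_state
    for _ in range(steps):
        a = select_action(Q, s, actions)
        r, s1 = env.do(a)                   # command to the body; reward and state percepts
        visits[(s, a)] += 1
        alpha = alpha_fun(visits[(s, a)])   # step size computed from the count
        future = max(Q[(s1, a1)] for a1 in actions)
        Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])
        s = s1
    return Q
```

With a fixed step size, `alpha_fun` can simply ignore the count and return a constant, in which case the visit counts are only needed if the exploration strategy uses them.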
Consider the two-state MDP of Example 12.29. The agent knows there are two states and two actions. It does not know the model and it learns from the $\langle s, a, r, s' \rangle$ experiences. With a discount $\gamma$, a fixed step size $\alpha$, and $Q[S, A]$ initially 0, the following is a possible trace (to a few significant digits and with the states and actions abbreviated):
[Trace of $\langle s, a, r, s' \rangle$ experiences and the resulting updates to $Q[s, a]$.]
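Each entry of such a trace is one application of the update above. For example, with $Q$ initially 0, the very first experience $\langle s, a, r, s' \rangle$ gives

$$Q[s, a] := (1-\alpha)\cdot 0 + \alpha\,(r + \gamma \cdot 0) = \alpha\, r,$$

and each subsequent entry blends the new return $r + \gamma \max_{a'} Q[s', a']$ into the running estimate in the same way.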
With $\alpha$ fixed, the Q-values will approximate, but not converge to, the values obtained with value iteration in Example 12.33. The smaller $\alpha$ is, the closer the Q-values get to the actual Q-values, but the slower they converge.