Reinforcement Learning in Continuous State and Action Spaces: A Brief Note

Thanks to Hado van Hasselt for the great work.

Introduction

In sequential decision-making problems in continuous domains with delayed reward signals, the goal of the algorithms is to learn how to choose actions from an infinitely large action space so as to optimize a noisy, delayed cumulative reward signal in an infinitely large state space, where even the outcome of a single action can be stochastic.

Here we assume that a model of the environment is not known. Since analytically computing a good policy from a continuous model can be infeasible, we focus on methods that explicitly update a representation of a value function, a policy, or both. In addition, we focus mainly on the problem of control, which means we want to find action-selection policies that yield high returns, as opposed to the problem of prediction, which aims to estimate the value of a given policy.

Some useful books are listed below (we assume that you are familiar with Sutton and Barto):

  1. Szepesvari, C.: Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning 4(1), 1-103 (2010).
  2. Busoniu, L., Babuska, R., De Schutter, B., Ernst, D.: Reinforcement learning and dynamic programming using function approximators. CRC Press, Boca Raton (2010).

Function Approximation

In this section, we mostly limit ourselves to the general functional form of the approximators and to general methods for updating the parameters. Non-parametric approaches, such as kernel-based methods, are not included. See the following reference for more details if you are interested in kernel-based methods:

  1. Ormoneit, D., Sen, S.: Kernel-based reinforcement learning. Machine Learning 49(2), 161-178 (2002).

Linear Function Approximation

A linear function approximator is a simple parametric function that is linear in its parameters and depends on the state only through a feature vector. For instance, consider a value-approximation algorithm where the value function is approximated by:

V_t(s) = \theta_t^\top \phi(s)

A common method to find features for a linear function approximator divides the continuous state space into separate segments and attaches one feature to each segment, e.g. tile coding. However, one potential problem with discretizing methods such as tile coding is that the resulting function that maps states into features is not injective, i.e. φ(s) = φ(s′) does not imply that s = s′. Another issue is related to the step-size parameter that many algorithms use: in general, it is often a good idea to make sure that ‖φ(s)‖ = \sum_{k=1}^{D_\phi} \phi_k(s) \le 1 for all s. A final issue is that discretization introduces discontinuities in the approximated function.
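
As a rough illustration, the sketch below (assuming a one-dimensional state space on [0, 1) and a hypothetical number of bins) builds the kind of one-hot feature vector such a discretization produces; note that two distinct states falling into the same segment receive identical features, which is exactly the injectivity problem mentioned above.

```python
import numpy as np

def grid_features(s, n_bins=10, low=0.0, high=1.0):
    """One-hot feature vector from a uniform discretization of a 1-D state space."""
    phi = np.zeros(n_bins)
    # Clip to the valid range, then find which segment the state falls into.
    idx = int(np.clip((s - low) / (high - low) * n_bins, 0, n_bins - 1))
    phi[idx] = 1.0          # exactly one active feature, so sum_k phi_k(s) = 1
    return phi

# Two different states in the same segment map to the same feature vector.
print(grid_features(0.31), grid_features(0.33))
```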

Some of the issues mentioned above can be tackled by using so-called fuzzy sets. A fuzzy set is a generalization of an ordinary set to fuzzy membership: a state may belong partially to the set defined by feature φ_i and partially to the set defined by φ_j. An advantage of this view is that it is natural to require \sum_k \phi_k(s) \le 1, since each part of an element can belong to only one set. It is possible to define the sets such that each combination of feature activations corresponds to precisely one state, thereby avoiding the partial-observability problem sketched earlier. A common choice is to use triangular membership functions that are equal to one at the center of the corresponding feature and decay linearly to zero for states further from the center. A drawback of fuzzy sets is that they still need to be defined beforehand, which may be difficult.
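
A minimal sketch of such triangular membership features for a one-dimensional state, assuming evenly spaced centers chosen purely for illustration: each feature equals one at its center, decays linearly to zero, and the overlapping memberships of interior states sum to one, so the resulting approximation varies continuously with the state.

```python
import numpy as np

def triangular_features(s, centers):
    """Triangular fuzzy membership features over evenly spaced centers."""
    width = centers[1] - centers[0]                      # spacing between neighboring centers
    phi = np.maximum(0.0, 1.0 - np.abs(s - centers) / width)
    return phi                                           # memberships of interior states sum to 1

centers = np.linspace(0.0, 1.0, 6)                       # 6 evenly spaced fuzzy sets on [0, 1]
print(triangular_features(0.37, centers))                # partial membership in two neighboring sets
```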

Non-linear Function Approximation

In a parametric non-linear function approximator, the function that should be optimized is represented by some predetermined parameterized function. For instance, for value-based algorithms we may have:

V_t(s) = V(\phi(s), \theta_t)

In general, a non-linear function approximator may approximate an unknown function more accurately than a linear function approximator that uses the same input features. A drawback of non-linear function approximation is that fewer convergence guarantees can be given.
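
For instance, a small neural network can play the role of V(φ(s), θ_t). The sketch below is only illustrative (the layer sizes, initialization, and single tanh hidden layer are arbitrary choices), but it shows how a non-linear value estimate can be computed from the same feature vector a linear approximator would use.

```python
import numpy as np

def init_params(n_features, n_hidden=16, seed=0):
    """Random parameters for a one-hidden-layer value network."""
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(scale=0.1, size=(n_hidden, n_features)),
        "b1": np.zeros(n_hidden),
        "w2": rng.normal(scale=0.1, size=n_hidden),
    }

def value(phi, theta):
    """Non-linear value estimate V(phi(s), theta) with one tanh hidden layer."""
    h = np.tanh(theta["W1"] @ phi + theta["b1"])
    return theta["w2"] @ h

theta = init_params(n_features=10)
print(value(np.ones(10) / 10.0, theta))
```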

Updating Parameters

Gradient Descent

A gradient-descent update follows the direction of the negative gradient of some parameterized function that we want to minimize. The gradient of a parameterized function is a vector in parameter space that points in the direction in which the function increases fastest, so the negative gradient gives the direction of steepest local decrease. Because the gradient only describes the local shape of the function, such an algorithm can end up in a local minimum. For instance, we can use an error measure such as a temporal-difference error or a prediction error, e.g.

E(s_t, a_t, \theta_t) = \left( R(s_t, a_t, \theta_t) - r_{t+1} \right)^2

The parameters are then updated as:

\theta_{t+1} = \theta_t - \alpha_t \nabla_\theta E(x, \theta_t)

If the parameter space is a curved space, it is more appropriate to use the weighted distance d\theta^\top G\, d\theta, where G is a P × P positive semi-definite matrix. With this weighted distance metric, the direction of steepest descent becomes

\tilde{\nabla}_\theta E(x, \theta) = G^{-1} \nabla_\theta E(x, \theta)

which is known as the natural gradient. In general, the best choice for matrix G depends on the functional form of E.
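
A minimal sketch of both update rules, assuming a linear reward model R(s, a, θ) = θᵀφ(s, a) for the prediction error above and assuming the caller supplies a suitable positive-definite matrix G for the natural-gradient variant:

```python
import numpy as np

def sgd_step(theta, phi, r, alpha=0.1):
    """One gradient-descent step on E = (theta^T phi - r)^2."""
    error = theta @ phi - r
    grad = 2.0 * error * phi                              # gradient of the squared prediction error
    return theta - alpha * grad

def natural_gradient_step(theta, phi, r, G, alpha=0.1):
    """Same update, but along the natural gradient G^{-1} grad."""
    error = theta @ phi - r
    grad = 2.0 * error * phi
    return theta - alpha * np.linalg.solve(G, grad)       # avoids forming G^{-1} explicitly

theta = np.zeros(3)
phi = np.array([1.0, 0.5, -0.2])
theta = sgd_step(theta, phi, r=1.0)
theta = natural_gradient_step(theta, phi, r=1.0, G=np.eye(3))
print(theta)
```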

Gradient-Free Optimization

Gradient-free methods are useful when the function to be optimized is not differentiable or when many local optima are expected. Many general global optimization methods exist, including evolutionary algorithms. Details about evolutionary algorithms are beyond the scope of this note.

Approximate Reinforcement Learning

Value Approximation

In value-approximation algorithms, experience samples are used to update a value function that approximates the value of the current or the optimal policy. Many reinforcement learning algorithms fall into this category. Important differences between algorithms within this category are whether they are on-policy or off-policy and whether they update online or offline. Online algorithms are sometimes more sample-efficient in control problems.

In order to update a value function with gradient descent, we must choose some measure of error to minimize. This measure is often referred to as the objective function. We generalize standard temporal-difference learning to a gradient update on the parameters of a function approximator. The tabular TD-learning update is

V_{t+1}(s_t) = V_t(s_t) + \alpha_t(s_t)\, \delta_t

with

\delta_t = r_{t+1} + \gamma V_t(s_{t+1}) - V_t(s_t)
and α_t(s) ∈ [0, 1] a step-size parameter. When the state values are stored in a table, TD-learning can be interpreted as a stochastic gradient-descent update on the one-step temporal-difference error:

E(s_t) = \tfrac{1}{2}\left( r_{t+1} + \gamma V_t(s_{t+1}) - V_t(s_t) \right)^2 = \tfrac{1}{2}\delta_t^2

However, if V_t is a parameterized function such that V_t(s) = V(s, θ_t), the negative gradient of this error with respect to the parameters is given by

-\nabla_\theta E(s_t, \theta) = -\left( r_{t+1} + \gamma V_t(s_{t+1}) - V_t(s_t) \right) \nabla_\theta \left( r_{t+1} + \gamma V_t(s_{t+1}) - V_t(s_t) \right)

Instead, we can interpret r_{t+1} + γV_t(s_{t+1}) as a stochastic approximation of V^π(s_t) that does not depend on θ. Then

-\nabla_\theta E(s_t, \theta) = \left( r_{t+1} + \gamma V_t(s_{t+1}) - V_t(s_t) \right) \nabla_\theta V_t(s_t) = \delta_t \nabla_\theta V_t(s_t)

This implies the parameters can be updated as

\theta_{t+1} = \theta_t + \alpha_t \delta_t \nabla_\theta V_t(s_t)

Similarly, updates for action-value algorithms are

\delta_t = r_{t+1} + \gamma Q_t(s_{t+1}, a_{t+1}) - Q_t(s_t, a_t)
or

\delta_t = r_{t+1} + \gamma \max_a Q_t(s_{t+1}, a) - Q_t(s_t, a_t)
for SARSA and Q-learning, respectively. We can also incorporate accumulating eligibility traces with trace parameter λ using the following two equations:

e_{t+1} = \lambda \gamma e_t + \nabla_\theta V_t(s_t)

\theta_{t+1} = \theta_t + \alpha_t \delta_t e_{t+1}
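
Putting these pieces together, here is a minimal sketch of linear TD(λ) with accumulating traces for prediction; the features and step sizes are illustrative, and a SARSA or Q-learning variant would differ only in how δ_t is computed.

```python
import numpy as np

def td_lambda_update(theta, e, phi, phi_next, reward,
                     alpha=0.1, gamma=0.99, lam=0.8, terminal=False):
    """One linear TD(lambda) step with V(s) = theta^T phi(s) and accumulating traces."""
    v = theta @ phi
    v_next = 0.0 if terminal else theta @ phi_next
    delta = reward + gamma * v_next - v          # one-step TD error
    e = lam * gamma * e + phi                    # e_{t+1} = lambda*gamma*e_t + grad_theta V_t(s_t)
    theta = theta + alpha * delta * e            # theta_{t+1} = theta_t + alpha*delta*e_{t+1}
    return theta, e

n_features = 4
theta, e = np.zeros(n_features), np.zeros(n_features)
phi, phi_next = np.eye(n_features)[0], np.eye(n_features)[1]
theta, e = td_lambda_update(theta, e, phi, phi_next, reward=1.0)
print(theta)
```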

Parameters updated with the equations above may diverge when off-policy updates are used. This holds for any temporal-difference method with λ < 1. In other words, if we sample transitions from a distribution that does not match the state-visitation probabilities that would occur under the estimation policy, the parameters of the function may diverge. This is unfortunate, because in the control setting we ultimately want to learn about the as-yet-unknown optimal policy. Recently, a class of algorithms has been proposed to deal with this issue; a rough sketch of their flavor follows the references below:

  1. Maei, H.R., Sutton, R.S.: GQ(λ): A general gradient algorithm for temporal-difference prediction learning with eligibility traces. In: Proceedings of the Third Conference on Artificial General Intelligence (AGI 2010), pp. 91–96. Atlantis Press, Lugano (2010).
  2. Sutton, R.S., Maei, H.R., Precup, D., Bhatnagar, S., Silver, D., Szepesvári, C., Wiewiora, E.: Fast gradient-descent methods for temporal-difference learning with linear function approximation. In: Proceedings of the 26th Annual International Conference on Machine Learning (ICML 2009), pp. 993–1000. ACM (2009).
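
As a very rough illustration of the flavor of these gradient-TD methods, the sketch below implements a TDC-style update in the linear setting with an auxiliary weight vector w; the exact algorithms, conditions, and step-size requirements should be taken from the papers above rather than from this sketch.

```python
import numpy as np

def tdc_step(theta, w, phi, phi_next, reward, alpha=0.05, beta=0.1, gamma=0.99):
    """One TDC-style update: main weights theta plus auxiliary weights w."""
    delta = reward + gamma * (theta @ phi_next) - theta @ phi
    theta = theta + alpha * (delta * phi - gamma * phi_next * (phi @ w))  # corrected TD step
    w = w + beta * (delta - phi @ w) * phi                                # track expected TD error per feature
    return theta, w

theta, w = np.zeros(3), np.zeros(3)
theta, w = tdc_step(theta, w, np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), reward=0.5)
print(theta, w)
```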

It is nontrivial to extend the standard online temporal difference algorithms to continuous action spaces. The algorithms in the next section are usually much better suited for use in problems with continuous actions.

Policy Approximation

If the action space is continuous, finding the greedy action in each state can be nontrivial and time-consuming.

Policy Gradient Algorithms

The idea of policy-gradient algorithms is to update the policy with gradient ascent on the expected cumulative reward V^π. If the gradient is known, we can update the policy parameters with

\psi_{k+1} = \psi_k + \beta_k \nabla_\psi E\{ V^\pi(s_t) \} = \psi_k + \beta_k \nabla_\psi \int_{s \in S} P(s_t = s)\, V^\pi(s)\, ds

As a practical alternative, we can use stochastic gradient ascent on per-state samples of this gradient:

\psi_{t+1} = \psi_t + \beta_t(s_t) \nabla_\psi V^\pi(s_t)

Such procedures can at best hope to find a local optimum, because they follow the gradient of a value function that generally has multiple local optima with respect to the policy parameters. Define a trajectory T as a sequence of states and actions. Then

\nabla_\psi V^\pi(s) = \int_T \nabla_\psi P(T \mid s, \psi)\, E\Big\{ \sum_{t=0}^{\infty} \gamma^t r_{t+1} \mid T \Big\}\, dT
= \int_T P(T \mid s, \psi)\, \nabla_\psi \log P(T \mid s, \psi)\, E\Big\{ \sum_{t=0}^{\infty} \gamma^t r_{t+1} \mid T \Big\}\, dT
= E\Big\{ \nabla_\psi \log P(T \mid s, \psi)\, E\Big\{ \sum_{t=0}^{\infty} \gamma^t r_{t+1} \mid T \Big\} \mid s, \psi \Big\}

Moreover, since only the policy terms depend on ψ, we have

\nabla_\psi \log P(T \mid s, \psi) = \sum_{t=0}^{\infty} \nabla_\psi \log \pi(s_t, a_t, \psi)

This only holds if the policy is stochastic. In most cases this is not a big problem, since stochastic policies are needed anyway to ensure sufficient exploration.

Two common examples of stochastic policies for policy-gradient algorithms are listed below (a sketch of the Gaussian case follows the list):

  1. Boltzmann Exploration
  2. Gaussian Exploration
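
For the Gaussian case, a minimal sketch assuming a linear mean μ(s) = ψᵀφ(s) and a fixed standard deviation σ (both illustrative choices): it samples an action and computes ∇_ψ log π(s, a, ψ), which is the quantity needed in the gradient expressions above.

```python
import numpy as np

def gaussian_policy_sample(phi, psi, sigma=0.5, rng=None):
    """Sample an action from N(mu(s), sigma^2) with linear mean mu(s) = psi^T phi(s)."""
    rng = np.random.default_rng() if rng is None else rng
    mu = psi @ phi
    return rng.normal(mu, sigma)

def gaussian_log_policy_grad(phi, psi, a, sigma=0.5):
    """grad_psi log pi(s, a, psi) = (a - mu(s)) / sigma^2 * phi(s) for this Gaussian policy."""
    mu = psi @ phi
    return (a - mu) / sigma ** 2 * phi

psi = np.zeros(3)
phi = np.array([1.0, 0.5, -0.2])
a = gaussian_policy_sample(phi, psi)
print(a, gaussian_log_policy_grad(phi, psi, a))
```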

We need to sample the expected cumulative discounted reward to estimate this gradient. For instance, if the task is episodic we can take a Monte Carlo sample that gives the cumulative (possibly discounted) reward for each episode:

\nabla_\psi V^\pi(s) = E\Big\{ R_k(s_t) \Big( \sum_{j=t}^{T_k - 1} \nabla_\psi \log \pi(s_j, a_j, \psi) \Big) \Big\}
where R_k(s_t) = \sum_{j=t}^{T_k - 1} \gamma^{j-t} r_{j+1} is the total discounted return obtained after reaching state s_t in episode k.
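
A minimal sketch of this Monte Carlo estimate of ∇_ψ V^π(s_0) from a single episode, reusing the hypothetical Gaussian policy with a linear mean from above; the episode data here is made up purely for illustration.

```python
import numpy as np

def episode_policy_gradient(features, actions, rewards, psi, sigma=0.5, gamma=0.99):
    """One-episode Monte Carlo estimate of grad_psi V^pi(s_0), following the formula above."""
    T = len(rewards)
    # Discounted return obtained from the start state: R_k(s_0) = sum_j gamma^j r_{j+1}
    R = sum(gamma ** j * rewards[j] for j in range(T))
    # Sum of log-policy gradients along the trajectory (Gaussian policy, linear mean).
    grad_log = np.zeros_like(psi)
    for t in range(T):
        mu = psi @ features[t]
        grad_log += (actions[t] - mu) / sigma ** 2 * features[t]
    return R * grad_log

# Tiny made-up episode with 1-D features, for illustration only.
features = [np.array([1.0]), np.array([0.5])]
actions = [0.2, -0.1]
rewards = [1.0, 0.0]
print(episode_policy_gradient(features, actions, rewards, psi=np.zeros(1)))
```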

Actor Critic Algorithms

The variance of the estimate of ∇_ψ V^π(s_t) can be very high if Monte Carlo roll-outs are used, which can severely slow convergence. A potential solution is to use an explicit approximation of V^π. Such an approximate value function is called a critic, and the combined algorithm is called an actor-critic algorithm.

Actor-critic algorithms typically use a temporal-difference algorithm to update V_t, an estimate of V^π. Using b(s_t) = V^π(s_t) as a baseline, δ_t ∇_ψ log π(s_t, a_t, ψ_t) is an unbiased estimate of the gradient of the policy. A typical actor-critic algorithm would update the policy parameters with

\psi_{t+1} = \psi_t + \beta_t(s_t)\, \delta_t \nabla_\psi \log \pi(s_t, a_t, \psi_t)

where δ_t = r_{t+1} + γV_t(s_{t+1}) − V_t(s_t) is an unbiased estimate of Q^π(s_t, a_t) − V^π(s_t).
Cacla (continuous actor-critic learning automaton) is a special actor-critic algorithm that uses an error in action space rather than in parameter or policy space, and it uses the sign of the temporal-difference error rather than its size. During learning, it is assumed that there is exploration. As in many other actor-critic algorithms, if the temporal-difference error δ_t is positive, we judge a_t to be a good choice and we reinforce it. In Cacla, this is done by updating the output of the actor towards a_t. This is why exploration is necessary: without exploration the actor output would already be equal to the chosen action, and the parameters could not be updated.

An update to the actor only occurs when the temporal-difference error is positive. As an extreme case, consider an actor that already outputs the optimal action in each state of some deterministic MDP. For most exploring actions, the temporal-difference error is then negative. If the actor were updated away from such an action, its output would almost certainly no longer be optimal.

This is an important difference between Cacla and policy-gradient methods: Cacla only updates its actor when actual improvements have been observed. This avoids slow learning when there are plateaus in the value space and the temporal-difference errors are small. Intuitively, it makes sense that the distance to a promising action a_t is more important than the size of the improvement in value.

Here is a simple Cacla algorithm:
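
Below is a rough sketch of one Cacla update step, assuming a linear critic V(s) = θᵀφ(s) and a linear actor Ac(s) = ψᵀφ(s) with Gaussian exploration; these functional forms and the step sizes are illustrative choices rather than part of Cacla itself.

```python
import numpy as np

def cacla_step(theta, psi, phi, phi_next, action, reward,
               alpha=0.1, beta=0.1, gamma=0.99, terminal=False):
    """One Cacla-style update: linear critic V(s) = theta^T phi(s), linear actor Ac(s) = psi^T phi(s)."""
    v_next = 0.0 if terminal else theta @ phi_next
    delta = reward + gamma * v_next - theta @ phi        # temporal-difference error
    theta = theta + alpha * delta * phi                  # critic: ordinary TD update
    if delta > 0:                                        # actor: only when the explored action improved on the estimate
        actor_out = psi @ phi
        psi = psi + beta * (action - actor_out) * phi    # move actor output towards the action actually taken
    return theta, psi

theta, psi = np.zeros(3), np.zeros(3)
phi, phi_next = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
explored_action = 0.3                                    # actor output plus Gaussian exploration noise
theta, psi = cacla_step(theta, psi, phi, phi_next, explored_action, reward=1.0)
print(theta, psi)
```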

For more details about actor-critic algorithms, refer to:

  1. van Hasselt, H.P., Wiering, M.A.: Using continuous action spaces to solve discrete problems. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN 2009), pp. 1149–1156 (2009).
  2. http://blog.csdn.net/philthinker/article/details/71104095