NAS

  • [x] (ICLR 2017, Google Brain) Neural Architecture Search with Reinforcement Learning
  • [ ] (2019.11) Meta-Learning of Neural Architectures for Few-Shot Learning, combining meta-learning with NAS: https://arxiv.org/abs/1911.11090v1
  • [x] (2019.01) Designing Neural Networks through Neuroevolution, a survey of neuroevolution (NE) methods

  • (2017.09) Evolution Strategies as a Scalable Alternative to Reinforcement Learning, https://openai.com/blog/evolution-strategies/ and https://arxiv.org/abs/1703.03864: the NES-style ES is competitive with DQN and A3C (though it does not fully dispense with gradients); a minimal ES-update sketch follows this list.

  • (2018.04) Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning, https://arxiv.org/abs/1712.06567: a gradient-free NE method (a genetic algorithm) is competitive with DQN and A3C.

  • (2018.04) Simple random search provides a competitive approach to reinforcement learning, https://arxiv.org/abs/1803.07055: a simplified NE-style method (basic random search) is competitive with TRPO, PPO, and DDPG.
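
The three items above share the same black-box loop: perturb the policy parameters, evaluate episodic returns, and aggregate the results into an update. Below is a minimal sketch of the ES update from the first item, assuming a toy fitness function in place of an RL return; the hyperparameter values and the centered-rank shaping follow the spirit of the ES paper, but the names and numbers are illustrative, not taken from it.

```python
import numpy as np

def es_update(theta, fitness, npop=50, sigma=0.1, alpha=0.02, rng=None):
    """One NES-style evolution-strategies step: sample Gaussian perturbations
    of theta, evaluate them, and move theta along the estimated search gradient."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((npop, theta.size))      # perturbation directions
    returns = np.array([fitness(theta + sigma * e) for e in eps])
    ranks = returns.argsort().argsort()                # centered-rank shaping
    weights = ranks / (npop - 1) - 0.5
    grad_est = weights @ eps / (npop * sigma)          # search-gradient estimate
    return theta + alpha * grad_est

# Toy stand-in for an episodic return: maximize -(theta - 3)^2.
fitness = lambda w: -np.sum((w - 3.0) ** 2)
theta = np.zeros(5)
rng = np.random.default_rng(0)
for _ in range(500):
    theta = es_update(theta, fitness, rng=rng)
print(theta)  # moves toward [3, 3, 3, 3, 3]
```

Because each perturbation is evaluated independently, the loop over the population parallelizes trivially, which is the scalability argument of the blog post above.
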

Combining gradient-based methods with neuroevolution

A new generation of evolutionary algorithms

  • The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities, https://arxiv.org/abs/1803.03453: a survey.

  • (NIPS 2018) Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents, https://arxiv.org/abs/1712.06560; a minimal novelty-score sketch follows this list.

  • (NIPS workshop 2018) Deep Curiosity Search: Intra-Life Exploration Can Improve Performance on Challenging Deep Reinforcement Learning Problems https://arxiv.org/abs/1806.00553
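
The novelty-seeking ES above scores each agent not only by return but by how novel its behavior characterization is relative to an archive of past behaviors. Below is a minimal sketch of that novelty score, assuming 2-D final positions as the behavior characterization; the function name, the value of k, and the example data are illustrative.

```python
import numpy as np

def novelty(behavior, archive, k=10):
    """Novelty of a behavior characterization: mean Euclidean distance
    to its k nearest neighbors in the behavior archive."""
    if len(archive) == 0:
        return float("inf")                          # everything is novel at the start
    dists = np.linalg.norm(np.asarray(archive) - behavior, axis=1)
    k = min(k, len(dists))
    return float(np.mean(np.sort(dists)[:k]))

# Example: behaviors are 2-D final positions of an agent.
archive = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(novelty(np.array([5.0, 5.0]), archive))        # far from archive -> high novelty
print(novelty(np.array([0.1, 0.1]), archive))        # close to archive -> low novelty
```

In the novelty-seeking variants, this score stands in for (or is mixed with) the episodic return when weighting perturbations in the ES update.
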

Architecture evolution