PPO

PPO implementation for OpenAI Gym environments, based on Unity ML-Agents: https://github.com/Unity-Technologies/ml-agents

Notable changes include:

  • Ability to continuously display progress with a deterministic (non-stochastic) policy during training
  • Works with OpenAI Gym environments
  • Option to record episodes
  • State normalization over a given number of frames
  • Frame skip (see the sketch after this list)
  • Faster reward discounting (see the sketch below)
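
The repository's own wrappers are not reproduced here; as a rough illustration of the frame-skip idea, the following is a minimal sketch using the classic Gym `Wrapper` API (4-tuple `step` return). The class name and `skip` parameter are placeholders, not taken from this repository.

```python
import gym

class FrameSkip(gym.Wrapper):
    """Repeat each chosen action for `skip` environment steps,
    accumulating the reward along the way (common frame-skip pattern)."""

    def __init__(self, env, skip=4):
        super().__init__(env)
        self.skip = skip

    def step(self, action):
        total_reward = 0.0
        done = False
        for _ in range(self.skip):
            # Classic Gym API: (observation, reward, done, info)
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info
```

Acting only every few frames reduces the number of policy evaluations per episode and often speeds up training on high-frame-rate environments.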

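For the "faster reward discounting" point, one common way to avoid a slow Python loop is to compute discounted returns with a linear filter. The sketch below is an assumption about the kind of optimization meant, not the repository's actual code.

```python
import numpy as np
from scipy.signal import lfilter

def discount(rewards, gamma):
    """Compute discounted returns G_t = sum_k gamma**k * r_{t+k}.

    lfilter with b=[1], a=[1, -gamma] applies y[n] = x[n] + gamma * y[n-1];
    running it over the reversed reward sequence and reversing the result
    yields the discounted returns, typically much faster than a Python loop.
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    return lfilter([1.0], [1.0, -gamma], rewards[::-1])[::-1]
```

For example, `discount([1.0, 0.0, 1.0], 0.99)` returns approximately `[1.9801, 0.99, 1.0]`.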