gym

Project Status: Active - The project has reached a stable, usable state and is being actively developed.

OpenAI Gym is an open-source Python toolkit for developing and comparing reinforcement learning algorithms. This R package is a wrapper for the OpenAI Gym API and enables access to an ever-growing variety of environments.

Installation

You can install the released version from CRAN, or the latest development version from GitHub, as sketched below.
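
A minimal sketch, assuming the package is published on CRAN under the name gym and that the development sources live in a GitHub repository (the repository path below is an assumption, not confirmed by this page):

# Released version from CRAN
install.packages("gym")

# Development version from GitHub (repository path assumed)
# install.packages("devtools")
devtools::install_github("paulhendricks/gym-R", subdir = "R")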

If you encounter a clear bug, please file a minimal reproducible example on GitHub.

API

library(gym)

remote_base <- "http://127.0.0.1:5000"
client <- create_GymClient(remote_base)
print(client)
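
# NOTE: create_GymClient only builds a client object; it assumes a Gym HTTP
# API server is already listening at remote_base. With the reference
# gym-http-api project (an assumption about your setup, not part of this
# package), such a server is typically started with:
#   python gym_http_server.py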

# Create environment
env_id <- "CartPole-v0"
instance_id <- env_create(client, env_id)
print(instance_id)

# List all environments
all_envs <- env_list_all(client)
print(all_envs)
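
# In the Gym HTTP API, this lists the environment instances currently
# running on the server, keyed by instance id (behavior assumed from the
# reference gym-http-api server).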

# Set up agent
action_space_info <- env_action_space_info(client, instance_id)
print(action_space_info)
agent <- random_discrete_agent(action_space_info[["n"]])
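
# random_discrete_agent(n) is assumed to return an agent that picks one of
# the n discrete actions uniformly at random. The loop below instead samples
# actions server-side via env_action_space_sample(); for a random policy the
# two approaches are equivalent.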

# Run experiment, with monitor
outdir <- "/tmp/random-agent-results"
env_monitor_start(client, instance_id, outdir, force = TRUE, resume = FALSE)
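
# The monitor records per-episode results under outdir; force = TRUE is
# assumed to overwrite any existing monitor data there (rather than erroring),
# and resume = FALSE starts a fresh recording.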

episode_count <- 100
max_steps <- 200
reward <- 0
done <- FALSE

for (i in 1:episode_count) {
  ob <- env_reset(client, instance_id)
  for (j in 1:max_steps) {
    action <- env_action_space_sample(client, instance_id)
    results <- env_step(client, instance_id, action, render = TRUE)
    if (results[["done"]]) break
  }
}

# Dump result info to disk
env_monitor_close(client, instance_id)
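
# Optionally tear down the environment instance once results are written.
# env_close mirrors the close endpoint of the Gym HTTP API; its presence in
# this package is assumed.
env_close(client, instance_id)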

People

License
