ML APPLICATIONS

RESEARCH

TorchCraft 2: A fast, flexible, easy-to-use Python RL environment for StarCraft

2/7/2020

What it is:

TorchCraft 2 is a StarCraft reinforcement learning (RL) environment written in Python and exposed through the Gym API. Compared with previously available environments, TorchCraft 2 offers a broader feature set, better performance, and powerful tools for experimentation, making cutting-edge RL research with StarCraft easier for both individuals and larger teams.

Compared with existing StarCraft environments, TorchCraft 2 is:

  • Flexible: For the first time, you can generate StarCraft environments programmatically in Python code.

  • Fast: Attains 2.8x the frame rate of PySC2 on a comparable scenario. Because TorchCraft 2 uses the StarCraft 1 engine, it runs faster and requires fewer resources than StarCraft 2-based environments.

  • Easy: Just run pip install to get started using StarCraft through the popular Gym API.
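
As a rough sketch of that workflow (the package name and environment ID below are illustrative assumptions, not taken from TorchCraft 2's documentation), a first script might look like this:

    # pip install torchcraft2   (hypothetical package name)
    import gym
    import torchcraft2  # hypothetical import; assumed to register envs with Gym

    # The environment ID is an assumption for illustration.
    env = gym.make("TorchCraft2-MicroBattle-v0")
    obs = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()  # random actions as a placeholder policy
        obs, reward, done, info = env.step(action)
    env.close()

Because the environment follows the standard Gym API, any agent code written against Gym should plug in without modification.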

Use it for:

TorchCraft 2 can be used to:

  • Train and benchmark agents on a variety of StarCraft environments.

  • Programmatically generate game states. TorchCraft 2 and our C++ environment TorchCraftAI are the only StarCraft environments that let you generate game states without StarCraft map editing software. This makes it possible to build automated curricula that keep training scenarios at an appropriate level of difficulty for the agent, improving the sample efficiency of training (see the sketch after this list).

  • Measure an agent’s ability to generalize by testing performance on similar environments, or perturbing environments programmatically.
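
The post does not show the scenario-construction API itself, so the sketch below only illustrates the automated-curriculum idea; make_scenario and its parameters are hypothetical stand-ins for TorchCraft 2's programmatic generation:

    def make_scenario(num_enemies, map_size=64):
        """Hypothetical stand-in for TorchCraft 2's programmatic scenario
        construction; replace with the real API."""
        raise NotImplementedError

    def train_with_curriculum(train_fn, eval_fn, max_enemies=12, threshold=0.8):
        """Ramp up difficulty only once the agent masters the current level,
        keeping training scenarios appropriately challenging."""
        num_enemies = 1
        while num_enemies <= max_enemies:
            env = make_scenario(num_enemies=num_enemies)
            train_fn(env)                  # run RL training on this scenario
            if eval_fn(env) >= threshold:  # e.g., win rate over held-out episodes
                num_enemies += 1           # graduate to a harder scenario

The same pattern, varying scenario parameters rather than ramping them, could serve the generalization tests mentioned above.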

This table compares TorchCraft 2’s features with those of other RL environments.

Why it matters:

Real-time strategy games like StarCraft and Dota have stood as enduring challenges for measuring progress in AI research. With combinatorial action spaces, high-dimensional and virtually continuous state spaces, partial observability, complex dynamics, and long-term planning considerations, these games have resisted approaches that have achieved superhuman performance in games like Go and poker.

Recent RL approaches have achieved strong, human-level performance in complex games such as StarCraft, but they require enormous amounts of computational resources, which limits their applicability and can put them out of reach for researchers in academic or hobbyist settings. Some successful approaches have also relied on human expertise rather than learning entirely independently. TorchCraft 2 will enable the research community to explore and benchmark new RL approaches that are more efficient and free of hand-crafted heuristics.

Facebook AI has long focused on advancing RL research with StarCraft, through projects such as the original TorchCraft, the largest data set of StarCraft 1 replays, and research on multi-agent cooperation in StarCraft. With TorchCraft 2, we hope to accelerate the pace of this research. Eventually, training agents that thrive in environments as complex as real-time strategy games may help AI systems handle real-world forms of complexity.

Get it on GitHub:

https://github.com/torchcraft/TorchCraft


(StarCraft is a trademark or registered trademark of Blizzard Entertainment, Inc., in the U.S. and/or other countries. Nothing in this document should be construed as approval, endorsement, or sponsorship by Blizzard Entertainment, Inc.)

Written By

Dan Gant

Research Engineer

Daniel Haziza

Research Engineer

Zeming Lin

Research Engineer

Jonas Gehring

Software Engineer

Gabriel Synnaeve

Research Scientist