We build the most advanced environments for training and evaluating AI agents on long-horizon, multi-tool tasks in any domain. Each environment combines five components (sketched in code after the list below):
- Applications & tools: the applications agents interact with (e.g., Slack, email, web, Excel, GitHub, Linear)
- Data: information seeded into the environment that represents its initial state
- Tasks: descriptions of what the agent should accomplish
- Verifiers: rubrics that evaluate how well agents perform on tasks in the environment
- Agent(s): AI actor(s) that navigate the environment and complete tasks using the available tools
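As a rough sketch of how these pieces fit together (the names, fields, and example values below are illustrative assumptions, not our actual schema), an environment specification could be expressed like this; the agent is left out of the spec itself, since it is the actor that operates on the environment:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Task:
    # Natural-language description of what the agent should accomplish.
    description: str

@dataclass
class Verifier:
    # Rubric plus a scoring function applied to the environment's final state.
    rubric: str
    score: Callable[[dict[str, Any]], float]

@dataclass
class Environment:
    # Applications & tools the agent can call (names are illustrative).
    tools: list[str]
    # Data seeded into the environment to form the initial state.
    seed_data: dict[str, Any]
    # Tasks the agent is asked to complete.
    tasks: list[Task]
    # Verifiers that grade the agent's performance on each task.
    verifiers: list[Verifier]

# Hypothetical example: one task, one trivial verifier.
env = Environment(
    tools=["slack", "email", "github", "linear"],
    seed_data={"inbox": ["Welcome aboard!"]},
    tasks=[Task(description="Triage the inbox and file an issue in Linear.")],
    verifiers=[
        Verifier(
            rubric="Issue filed with the correct title",
            score=lambda state: 1.0 if state.get("issue_filed") else 0.0,
        )
    ],
)
```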
Introducing Systems-Bench
A long-horizon, multi-tool SWE agent benchmark.