OpenAI Releases Symphony, an Open-Source Coding Agent Orchestrator

OpenAI has open-sourced Symphony, an agent orchestrator that hooks into issue trackers like Linear so coding agents pull and complete tasks autonomously. It’s the company’s answer to a bottleneck it hit internally: human engineers running out of attention, not agents running out of capability.

On April 27, OpenAI published Symphony, a lightweight orchestration system that wires coding agents directly into a project-management board so every open ticket automatically gets an agent working on it. The specification is free and licensed under Apache 2.0; a reference implementation written in Elixir is also available. The announcement was authored by OpenAI engineers Alex Kotliarskyi, Victor Zhu and Zach Brock.

The system grew out of a frustration the team documented in an earlier post on what they call harness engineering. Over the past several months, OpenAI engineers built and shipped a roughly one-million-line beta product without any manually written code — every line was generated by Codex. That worked, but it created a new ceiling: each engineer could comfortably supervise only three to five simultaneous Codex sessions before context-switching became the real drag on productivity. The agents were fast; human attention was the bottleneck.

“We had effectively built a team of extremely capable junior engineers, then assigned our human engineers to micromanaging them. That wasn’t going to scale.” — OpenAI

Symphony addresses this by moving supervision up a level. Instead of managing Codex sessions in browser tabs or terminals, engineers manage work in their issue tracker. Each open Linear issue maps to a dedicated agent workspace. The orchestrator watches the board continuously, restarts any agent that stalls or crashes, and picks up new tasks as they appear. Work executes as a directed acyclic graph — agents only start on tasks whose dependencies are already resolved, so parallel work unfolds in the most efficient order without manual scheduling.
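The dependency-aware scheduling described above can be illustrated with a minimal sketch. This is not Symphony's code (the reference implementation is in Elixir); the ticket names are hypothetical, and Python's standard-library `graphlib` stands in for the orchestrator's own DAG logic:

```python
from graphlib import TopologicalSorter

# Hypothetical ticket graph: each issue maps to the issues it depends on.
tickets = {
    "add-auth-api": set(),
    "add-login-ui": {"add-auth-api"},
    "add-signup-ui": {"add-auth-api"},
    "e2e-tests": {"add-login-ui", "add-signup-ui"},
}

ts = TopologicalSorter(tickets)
ts.prepare()

# Each batch of ready tickets could be dispatched to agents in parallel;
# a ticket becomes ready only once all of its dependencies are resolved.
waves = []
while ts.is_active():
    ready = sorted(ts.get_ready())
    waves.append(ready)
    for t in ready:
        ts.done(t)  # in Symphony's terms: the agent's work on this ticket landed

print(waves)
# → [['add-auth-api'], ['add-login-ui', 'add-signup-ui'], ['e2e-tests']]
```

The key property is the middle wave: once the auth API ticket resolves, both UI tickets unblock simultaneously and can run in parallel, without anyone manually sequencing the board.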

The internal results were significant. On some teams at OpenAI, the number of landed pull requests rose 500 percent in the first three weeks after adopting Symphony. The announcement also notes a spillover effect for non-engineers: the team’s product manager and designer began filing feature requests directly into the board and receiving back review packets that included video walkthroughs of the working feature inside the live product — no repository access or Codex session required.

A Crowded Space With a Different Philosophy

Symphony arrives in a market that has moved fast. GitHub Copilot now has a coding agent for Jira. Anthropic offers Claude Code. Google Cloud has extended its Agent Development Kit to Linear, Jira, Asana, and Notion. And Cognition’s Devin, widely seen as the pioneer of ticket-to-PR autonomous agents, charges $500 a month per seat plus usage fees. In February 2026, Grok Build, Windsurf, Claude Code Agent Teams, Codex CLI, and Devin all shipped parallel multi-agent features within the same two-week window, signaling that running agents in parallel is no longer a differentiator — it’s expected.

What OpenAI is claiming as Symphony’s edge is a higher level of abstraction. Competitors largely manage sessions; Symphony manages work. It runs as a standalone service rather than a locked-in IDE feature, and its Elixir foundation gives it process isolation and fault tolerance suited to long-running, parallel workflows. The team notes the language choice itself is a signal:

“[W]hen code is effectively free, you can finally pick languages for their strengths, like Elixir’s concurrency.” — OpenAI

There is a meaningful asterisk, though. The specification is open and technically model-agnostic — OpenAI explicitly encourages developers to point any coding agent at the SPEC.md file and generate a custom implementation in any language. But the out-of-the-box reference implementation invokes Codex through OpenAI’s APIs and App Server. Teams that want to swap in a different agent will need to do that integration work themselves. OpenAI has also been explicit that it does not plan to maintain Symphony as a product; it is a reference implementation designed to be studied, forked, or rebuilt, not a supported tool with a roadmap. Community reaction has been mixed, with some developers on Hacker News calling it a natural evolution of Codex usage and others describing the specification documents as difficult to parse in practice.

What This Means If You’re Early in Your Career

Symphony is not a weekend shortcut. It works best in codebases that have already invested in automated tests, documentation, and the harness engineering practices OpenAI described in its earlier post. That upfront cost is real, and students dropping it into a fresh side project should expect to do foundational infrastructure work first.

That said, the mental model Symphony embodies is worth internalizing now. The shift the announcement describes — from managing coding agents to managing the work that needs to get done — is the direction engineering roles are moving at companies already running agents at scale. The engineer who defines tasks clearly, structures dependencies thoughtfully, and reviews outputs critically is increasingly where the leverage lives. Symphony gives students a concrete, industry-validated pattern to study and experiment with before that expectation becomes universal.

The specification is free, the license is permissive, and OpenAI’s own documentation invites developers to use any agent they prefer. For students building capstone projects or open-source work, that is a low-cost way to get hands-on experience with the workflow patterns that are rapidly becoming professional table stakes.

Source: OpenAI
