OpenAI and Amazon Web Services announced on April 28, 2026, that they are deepening their strategic partnership with three simultaneous launches on Amazon Bedrock: OpenAI’s frontier models, headlined by GPT-5.5; its Codex coding agent, now able to run through Bedrock infrastructure; and a new Managed Agents product built for deploying multi-step AI agents inside enterprise AWS environments. All three launched in limited preview the same day.
The practical pitch is straightforward: companies already running workloads on AWS can now access OpenAI’s most capable models without setting up a separate vendor relationship, negotiating a new contract, or routing data through a different compliance framework. All customer data is processed through Amazon Bedrock, and eligible customers can count Codex usage toward their existing AWS cloud spending commitments.
What’s Actually Being Launched
The first offering puts GPT-5.5 — OpenAI’s most recently released model, which the company describes as its best frontier model — alongside other OpenAI models inside Bedrock’s unified API. Developers who already use Bedrock to call models from Anthropic, Meta, or Mistral can now add OpenAI models to the same workflow using the same AWS credentials and billing setup.
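In practice, "the same workflow" means the Bedrock Runtime Converse API, which presents one request shape regardless of model vendor. A minimal sketch of what that looks like, assuming a hypothetical model identifier (`openai.gpt-5.5`) since the real Bedrock catalog ID was not published at launch:

```python
def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build the keyword arguments for bedrock-runtime's converse() call.

    The same request shape works for Anthropic, Meta, Mistral, or OpenAI
    models -- only the model_id changes.
    """
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

def ask(model_id: str, prompt: str) -> str:
    # Authenticates with whatever AWS credentials the environment already
    # provides (IAM role, named profile, or env vars) -- no OpenAI API key.
    import boto3  # imported here so the request builder has no dependencies

    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(model_id, prompt))
    return response["output"]["message"]["content"][0]["text"]
```

Swapping, say, a Claude model ID into the same `ask()` call is the whole point of the unified API: one credential set, one billing relationship, one request format.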
The second offering brings Codex, OpenAI’s coding agent, to Bedrock. More than 4 million people use Codex weekly, according to OpenAI, deploying it across tasks ranging from writing and refactoring code to generating tests, modernizing legacy systems, and producing documents and slide decks. On Bedrock, customers configure Codex to route its inference through Bedrock infrastructure rather than OpenAI’s own servers. Access starts with the Codex CLI, the Codex desktop app, and the Visual Studio Code extension.
The third offering is Amazon Bedrock Managed Agents, powered by OpenAI. This is a new product designed to help enterprises move multi-step AI agents from prototype to production. The service handles orchestration, tool use and governance natively, so engineering teams can focus on making agents useful for specific business processes rather than assembling the supporting infrastructure themselves.
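The Managed Agents API itself was not published at preview launch, but the orchestration loop such a service handles for you follows a well-known pattern: the model repeatedly decides between calling a tool and returning a final answer. A generic sketch of that loop (illustrative only; the tool and the `model_step` callable are hypothetical stand-ins, not the Bedrock product's interface):

```python
import json

def lookup_order(order_id: str) -> str:
    # Hypothetical business tool an agent might be granted access to.
    return json.dumps({"order_id": order_id, "status": "shipped"})

TOOLS = {"lookup_order": lookup_order}

def run_agent(model_step, task: str, max_steps: int = 5) -> str:
    """Drive a model in a plan/act loop until it produces a final answer.

    `model_step` stands in for a Bedrock inference call: it receives the
    transcript so far and returns either {"tool": name, "args": {...}}
    or {"final": text}. The step cap is a simple governance control.
    """
    transcript = [{"role": "user", "text": task}]
    for _ in range(max_steps):
        decision = model_step(transcript)
        if "final" in decision:
            return decision["final"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[decision["tool"]](**decision["args"])
        transcript.append({"role": "tool", "text": result})
    return "stopped: step limit reached"
```

Everything in this loop — transcript management, tool dispatch, step limits, auditing what the agent did — is the "supporting infrastructure" the Managed Agents pitch says engineering teams shouldn't have to build themselves.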
Competitive Context: A Direct Challenge to Anthropic and GitHub Copilot
Amazon Bedrock already functions as a model marketplace hosting offerings from Anthropic, Meta, Mistral AI and Amazon’s own models. Adding OpenAI’s GPT-5.5 puts it in direct competition with Anthropic’s Claude, which has effectively served as Bedrock’s flagship premium option. One significant unknown at launch: OpenAI has not announced what Bedrock customers will pay to use its models through the platform. Bedrock pricing typically carries a 30–60% premium over direct API access for most model families — the notable exception being Claude, which AWS prices at parity with Anthropic’s own API. Whether OpenAI follows a similar parity approach or accepts a markup remains an open question.
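The markup math is easy to project once a direct-API rate is known. A quick sketch using a hypothetical $10 per million tokens as the direct price (GPT-5.5's actual rates are assumptions here, not announced figures):

```python
def bedrock_price_band(direct_price: float, low: float = 0.30, high: float = 0.60):
    """Project the per-million-token price band implied by a 30-60% markup
    over a given direct-API price."""
    return (round(direct_price * (1 + low), 2), round(direct_price * (1 + high), 2))
```

At a hypothetical $10/M-token direct rate, a typical Bedrock markup would land between $13 and $16 per million tokens, while Claude-style parity pricing would stay at $10 — a material difference at enterprise volumes.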
On the coding-agent side, the more immediate competitive tension is with GitHub Copilot, which Microsoft owns. The two tools are built differently: Copilot is tightly integrated into GitHub’s repository and pull-request ecosystem, while Codex takes a more autonomous, agentic approach — it can plan, execute, test and iterate across multiple files in a sandboxed environment and verify its own output. That architecture gives Codex an edge on complex, multi-step engineering tasks. Copilot, however, holds a structural advantage for students specifically: verified students can access a free GitHub Copilot Student plan. Codex on Bedrock has no comparable free tier.
What This Means If You’re a Student or Early-Career Engineer
This announcement matters most to people who are already operating inside AWS environments — through a university cloud program, a summer internship, or a full-time role at a company with an AWS commitment. For those developers, Codex on Bedrock removes real friction: instead of managing a separate OpenAI subscription with its own billing and security review, they can access the same coding agent under the IAM credentials and compliance umbrella their employer already maintains.
Students working independently, however, should understand the cost reality clearly. Amazon Bedrock has no dedicated free tier; every token generated costs money. This is not a tool designed for personal projects or classroom experimentation in the way that, say, a free Copilot student license is. The practical implication is that Codex on Bedrock is a workplace tool first.
That said, familiarity with how it works is increasingly worth building. Knowing how to authenticate via AWS credentials, call models through the Bedrock API, and configure the Codex VS Code extension are skills that will show up in job descriptions for enterprise software engineering roles. Students targeting those positions — particularly at large companies already standardized on AWS — will benefit from understanding this stack before they’re asked to use it on the job.
Source: OpenAI
Additional research sources
- https://aws.amazon.com/blogs/aws/top-announcements-of-the-whats-next-with-aws-2026/
- https://openai.com/index/openai-on-aws/
- https://openai.com/index/introducing-gpt-5-5/
- https://www.neowin.net/news/openais-frontier-ai-models-and-codex-now-available-on-amazon-bedrock/
- https://www.truefoundry.com/blog/aws-bedrock-pricing-explained-everything-you-need-to-know
- https://www.databricks.com/blog/openai-gpt-55-now-available-databricks-fully-governed-through-unity-ai-gateway
