About Anthropic
- Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
- We're building the infrastructure that enables Claude to act in the world—booking travel, writing code, calling APIs, managing files, and completing multi-step tasks autonomously. This is foundational work for the next generation of AI capabilities.
- The Agent Infrastructure team designs and operates the execution environments, state management systems, and security boundaries that make autonomous AI agents safe and reliable. You'll work at the intersection of distributed systems, security engineering, and product—building systems that don't exist anywhere else in industry.
- This is a high-priority initiative. The problems are hard, the scope is greenfield, and the impact is immediate.
What you'll do
- Design and build sandboxed compute environments where Claude can safely execute code, access tools, and interact with external services
- Build state management systems for long-running agent tasks—handling checkpoints, recovery, and resumption across failures
- Develop authentication and authorization frameworks for delegated access—enabling Claude to act on behalf of users securely
- Create observability and debugging tools for agent execution—understanding what Claude did, why, and how to make it better
- Partner closely with product and research teams to define what's possible and ship it
You may be a good fit if you
- Have 6+ years of experience building distributed systems, infrastructure, or platform services at scale
- Are comfortable building cloud-native infrastructure on GCP, AWS, or Azure
- Care deeply about security, isolation, and building systems that fail safely
- Have experience with containers, sandboxing, or secure execution environments (e.g., gVisor, Firecracker, V8 isolates)
- Are comfortable with ambiguity—this is greenfield work, and you'll help define the architecture
- Write clean, maintainable code in Python, Go, Rust, or similar
- Want to work on problems that don't have existing playbooks
Strong candidates may have
- Experience building multi-tenant execution platforms or serverless infrastructure
- Background in security engineering, sandboxing, or isolation technologies
- Familiarity with workflow orchestration systems (Temporal, Airflow, Step Functions)
- Experience with state machines, checkpointing, or durable execution patterns
- Low-level systems experience (Linux internals, eBPF, container runtimes)
Compensation
- The annual compensation range for this role is listed below.
- For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
How we're different
- We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
- The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
