Staff / Senior Software Engineer, Cloud Inference

San Francisco, CA | New York City, NY | Seattle, WA · Full-Time · Staff · Software Engineering


About Anthropic

  • Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Role

  • The Cloud Inference team scales and optimizes Claude to serve the massive audiences of developers and enterprise companies across AWS, GCP, Azure, and future cloud service providers (CSPs). We own the end-to-end product of Claude on each cloud platform—from API integration and intelligent request routing to inference execution, capacity management, and day-to-day operations.
  • Our engineers are extremely high-leverage: we simultaneously drive multiple major revenue streams while optimizing one of Anthropic's most precious resources—compute. As we expand to more cloud platforms, the complexity of managing inference efficiently across providers with different hardware, networking stacks, and operational models grows significantly. We need engineers who can navigate these platform differences, build robust abstractions that work across providers, and make smart infrastructure decisions that keep us cost-effective at massive scale.
  • Your work will increase the scale at which our services operate, accelerate our ability to reliably launch new frontier models and innovative features to customers across all platforms, and ensure our LLMs meet rigorous safety, performance, and security standards.

What You'll Do

  • Design and build infrastructure that serves Claude across multiple CSPs, accounting for differences in compute hardware, networking, APIs, and operational models
  • Collaborate with CSP partner engineering teams to resolve operational issues, influence provider roadmaps, and stand up end-to-end serving on new cloud platforms
  • Design and evolve CI/CD automation systems, including validation and deployment pipelines, that reliably ship new model versions to millions of users across cloud platforms without regressions
  • Design interfaces and tooling abstractions across CSPs that enable cost-effective inference management, scale across providers, and reduce per-platform complexity
  • Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across CSP validation and production workloads
  • Optimize inference cost and performance across providers—designing workload placement and routing systems that direct requests to the most cost-effective accelerator and region
  • Contribute to inference features that must work consistently across all platforms
  • Analyze observability data across providers to identify performance bottlenecks, cost anomalies, and regressions, and drive remediation based on real-world production workloads

You May Be a Good Fit If You

  • Have significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users
  • Have experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code, or container orchestration
  • Have a strong interest in LLM inference
  • Thrive in cross-functional collaboration with both internal teams and external partners
  • Are a fast learner who can quickly ramp up on new technologies, hardware platforms, and provider ecosystems
  • Are highly autonomous and self-driven, taking ownership of problems end-to-end with a bias toward flexibility and high-impact work
  • Pick up slack, even when it goes outside your job description

Strong Candidates May Also Have Experience With

  • Direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms, navigating differences in networking, security, privacy, billing, and managed service offerings
  • A background in building platform-agnostic tooling or abstraction layers that work across cloud providers
  • Hands-on experience with capacity management, cost optimization, or resource planning at scale across heterogeneous environments
  • Strong familiarity with LLM inference optimization, batching, caching, and serving strategies
  • Experience with machine learning infrastructure, including GPUs, TPUs, Trainium, or other AI accelerators
  • Background designing and building CI/CD systems that automate deployment and validation across cloud environments
  • Solid understanding of multi-region deployments, geographic routing, and global traffic management
  • Proficiency in Python or Rust

How we're different

  • We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
  • The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Job Summary

Company: Anthropic
Location: San Francisco, CA | New York City, NY | Seattle, WA
Type: Full-Time
Level: Staff
Domain: Software Engineering