Research Engineer, Interpretability
San Francisco, CA | Full-Time | Mid-level | Software Engineering
About Anthropic
- Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
- When you see what modern language models are capable of, do you wonder, "How do these things work? How can we trust them?"
- The Interpretability team at Anthropic is working to reverse-engineer how trained models work because we believe that a mechanistic understanding is the most robust way to make advanced systems safe.
- Think of us as doing "neuroscience" of neural networks using "microscopes" we build - or reverse-engineering neural networks like binary programs.
- More resources to learn about our work:
- Our research blog - covering advances including Monosemantic Features and Circuits
- An Introduction to Interpretability from our research lead, Chris Olah
- The Urgency of Interpretability from CEO Dario Amodei
- Engineering Challenges Scaling Interpretability - directly relevant to this role
- 60 Minutes segment - Around 8:07, see a demo of tooling our team built
- New Yorker article - what it's like to work on one of AI's hardest open problems
- Even if you haven’t worked on interpretability before, the infrastructure expertise we're looking for is similar to what's needed across the lifecycle of a production language model:
- Pretraining: Training dictionary learning models looks a lot like model pretraining - creating stable, performant training jobs for massively parameterized models across thousands of chips (see the sketch after this list)
- Inference: Interpretability runs a customized inference stack. Day-to-day analysis requires services that can edit a model's internal activations mid-forward-pass - for example, adding a "steering vector"
- Performance: Like all LLM work, we push up against the limits of hardware and software. Rather than squeezing out the last 0.1%, we focus on finding bottlenecks, fixing them, and moving on - a pace set by rapidly evolving research and our safety mission
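To make the pretraining analogy concrete, here is a minimal sketch of dictionary learning over cached model activations, in the spirit of the sparse-autoencoder setup from our monosemantic-features work. The dimensions, loss weights, optimizer settings, and random stand-in data are illustrative assumptions rather than our production recipe; the real jobs consume real activations and run across thousands of chips, which is where the pretraining-style engineering (sharded data loading, parallelism, checkpointing, numerical stability) comes in.

```python
import torch
import torch.nn as nn

d_model, d_dict = 512, 4096   # activation width and dictionary size (toy values)
l1_coeff = 1e-3               # sparsity penalty weight (illustrative)

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x):
        features = torch.relu(self.encoder(x))  # sparse feature activations
        recon = self.decoder(features)          # reconstruction of the input activation
        return recon, features

sae = SparseAutoencoder(d_model, d_dict)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

# Stand-in for a stream of activations captured from an instrumented forward pass.
activation_batches = (torch.randn(1024, d_model) for _ in range(100))

for acts in activation_batches:
    recon, features = sae(acts)
    # Reconstruction error plus an L1 penalty that pushes features toward sparsity.
    loss = ((recon - acts) ** 2).mean() + l1_coeff * features.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```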
- The science keeps scaling - and it's now applied directly in safety audits on frontier models, with real deadlines. As our research has matured, engineering and infrastructure have become a bottleneck. Your work will have a direct impact on one of the most important open problems in AI.
Responsibilities
- Build and maintain the specialized inference and training infrastructure that powers interpretability research - including instrumented forward/backward passes, activation extraction, and steering vector application (see the sketch after this list)
- Resolve scaling and efficiency bottlenecks through profiling, optimization, and close collaboration with peer infrastructure teams
- Design tools, abstractions, and platforms that enable researchers to rapidly experiment without hitting engineering barriers
- Help bring interpretability research into production safety audits - with real deadlines and high reliability expectations
- Work across the stack - from model internals and accelerator-level optimization to user-facing research tooling
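As a concrete illustration of activation editing and steering vector application, here is a minimal sketch using a PyTorch forward hook on an open-source GPT-2 model. The layer choice and the random steering direction are placeholders, and our actual inference stack exposes different interfaces; in real research the steering direction comes from analysis of the model's internals, not from torch.randn.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2").eval()    # small stand-in model
tokenizer = AutoTokenizer.from_pretrained("gpt2")

layer_idx = 6                                                  # residual-stream layer to steer
steering_vector = 0.1 * torch.randn(model.config.hidden_size)  # placeholder direction

def add_steering_vector(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states
    # (batch, seq, hidden); adding the vector broadcasts over batch and sequence.
    hidden = output[0] + steering_vector.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering_vector)
try:
    inputs = tokenizer("The weather today is", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()   # detach the hook so later forward passes are unmodified
```

Doing this reliably at scale - batched, served as a service, and applied to frontier models rather than GPT-2 - is the kind of infrastructure this role owns.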
You may be a good fit if you
- Have 5-10+ years of experience building software
- Are highly proficient in at least one programming language (e.g., Python, Rust, Go, Java) and productive with Python
- Are extremely curious about unfamiliar domains and can quickly learn and put that knowledge to work - for example, diving into new layers of the stack to find bottlenecks
- Have a strong ability to prioritize the most impactful work and are comfortable operating with ambiguity and questioning assumptions
- Prefer fast-moving collaborative projects to extensive solo efforts
- Are curious about interpretability research and its role in AI safety (though no research experience is required!)
- Care about the societal impacts and ethics of your work
- Are comfortable working closely with researchers, translating research needs into engineering solutions
Strong candidates may also have experience with
- Optimizing the performance of large-scale distributed systems
- Language modeling fundamentals with transformers
- High-performance LLM optimization: memory management, compute efficiency, parallelism strategies, and inference throughput optimization
- Working hands-on in a mainstream ML stack - PyTorch/CUDA on GPUs or JAX/XLA on TPUs
- Collaborating closely with researchers and building tooling to support research teams, or directly performing research with complex engineering challenges
Representative Projects
- Building Garcon, a tool that allows researchers to easily instrument LLMs to extract internal activations
- Designing and optimizing a pipeline to efficiently collect petabytes of transformer activations and shuffle them (see the sketch after this list)
- Profiling and optimizing ML training jobs, including multi-GPU parallelism and memory optimization
- Building a steered inference system that applies targeted interventions to model internals at scale (conceptually similar to Golden Gate Claude but for safety research)
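For the activation-shuffling project above, one common pattern when the data is far too large to shuffle in memory is a two-pass shard-and-shuffle: scatter rows across many shard files at random, then shuffle each shard independently. The sketch below is an illustrative assumption (local files, toy sizes, NumPy records), not our actual pipeline; at petabyte scale the same idea runs against distributed storage with many concurrent writers and readers.

```python
import os
import random
import numpy as np

NUM_SHARDS = 64
SHARD_DIR = "activation_shards"   # hypothetical local output directory
D_MODEL = 512                     # activation width (toy value)
os.makedirs(SHARD_DIR, exist_ok=True)

def scatter_pass(activation_batches):
    """Pass 1: append each activation row to a uniformly random shard file."""
    writers = [open(os.path.join(SHARD_DIR, f"shard_{i:03d}.bin"), "ab")
               for i in range(NUM_SHARDS)]
    try:
        for batch in activation_batches:               # batch: (rows, D_MODEL) float32
            for row in batch:
                writers[random.randrange(NUM_SHARDS)].write(row.tobytes())
    finally:
        for w in writers:
            w.close()

def shuffle_pass():
    """Pass 2: each shard now fits in memory, so shuffle its rows in place."""
    for name in sorted(os.listdir(SHARD_DIR)):
        path = os.path.join(SHARD_DIR, name)
        data = np.fromfile(path, dtype=np.float32).reshape(-1, D_MODEL)
        np.random.shuffle(data)                        # in-place row permutation
        data.tofile(path)

# Stand-in for activations streamed off an instrumented forward pass.
fake_batches = (np.random.randn(256, D_MODEL).astype(np.float32) for _ in range(8))
scatter_pass(fake_batches)
shuffle_pass()
```

The two passes together approximate a global shuffle while only ever holding one shard in memory, which is what keeps the approach tractable as the data grows.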
Role-Specific Location Policy
- This role is based in the San Francisco office; however, we are open to considering exceptional candidates for remote work on a case-by-case basis.
- The annual compensation range for this role is listed below.
How we're different
- We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
- The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
