Privacy Research Engineer, Safeguards


About Anthropic

  • Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Role

  • We are looking for researchers to help mitigate the risks that come with building AI systems. One of these risks is the potential for models to interact with private user data. In this role, you'll design and implement privacy-preserving techniques, audit the approaches we already use, and set the direction for how Anthropic handles privacy more broadly.

Responsibilities:

  • Lead our privacy analysis of frontier models, carefully auditing the use of data and ensuring safety throughout the process
  • Develop privacy-first training algorithms and techniques
  • Develop evaluation and auditing techniques to measure the privacy of training algorithms (a toy example of one such audit follows this list)
  • Work with a small, senior team of engineers and researchers to enact a forward-looking privacy policy
  • Advocate on behalf of our users to ensure responsible handling of all data
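
One common way to audit a trained model's privacy is a membership-inference test: if per-example losses on training data are systematically lower than on held-out data, the model is leaking who was in its training set. The loss-threshold sketch below is purely illustrative (a hypothetical model and random data), not a description of Anthropic's tooling:

```python
# Toy loss-threshold membership-inference audit. The model, the random
# data, and the threshold rule are all hypothetical illustrations.
import torch
from torch import nn

def per_example_losses(model, criterion, inputs, labels):
    """Per-example losses; a model that memorized its training data
    tends to assign lower loss to members than to non-members."""
    model.eval()
    with torch.no_grad():
        return criterion(model(inputs), labels)

model = nn.Linear(10, 2)  # stand-in for the model under audit
criterion = nn.CrossEntropyLoss(reduction="none")
train_x, train_y = torch.randn(64, 10), torch.randint(0, 2, (64,))
held_x, held_y = torch.randn(64, 10), torch.randint(0, 2, (64,))

member_loss = per_example_losses(model, criterion, train_x, train_y)
nonmember_loss = per_example_losses(model, criterion, held_x, held_y)

# Classify "member" when loss falls below the non-member median.
# For this untrained toy model the rate should hover near 0.5 (chance);
# a substantially higher rate on a real model indicates leakage.
threshold = nonmember_loss.median()
tpr = (member_loss < threshold).float().mean().item()
print(f"true-positive rate at median threshold: {tpr:.2f}")
```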

You may be a good fit if you have:

  • Experience working on privacy-preserving machine learning
  • A track record of shipping products and features inside a fast-moving environment
  • Strong coding skills in Python and familiarity with ML frameworks like PyTorch or JAX
  • Deep familiarity with large language models, how they work, and how they are trained
  • Familiarity with formal privacy frameworks (e.g., differential privacy, and how it differs from k-anonymity, l-diversity, and t-closeness; see the sketch after this list)
  • Experience supporting fast-paced startup engineering teams
  • Demonstrated success in bringing clarity and ownership to ambiguous technical problems
  • Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics
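
To make the privacy-frameworks bullet concrete: k-anonymity, l-diversity, and t-closeness are properties of a released dataset, while differential privacy is a guarantee about the algorithm that touches the data. Here is a minimal sketch of the Laplace mechanism for a counting query; the dataset, predicate, and epsilon are illustrative assumptions:

```python
# Minimal Laplace-mechanism sketch for an epsilon-DP counting query.
# The dataset, predicate, and epsilon below are illustrative assumptions.
import numpy as np

def dp_count(records, predicate, epsilon, rng=None):
    """Differentially private count of records matching `predicate`.
    A counting query has L1 sensitivity 1 (one person joining or
    leaving changes the count by at most 1), so Laplace noise with
    scale 1/epsilon yields an epsilon-DP release."""
    rng = rng or np.random.default_rng()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 45, 31, 67, 52, 29, 41]
# The released count protects any individual's presence in `ages`
# regardless of the rest of the dataset, a guarantee that
# k-anonymity-style dataset transformations cannot make.
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```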

Strong candidates may also:

  • Have published papers on privacy-preserving ML at top academic venues
  • Have prior experience training large language models (e.g., collecting training datasets, pre-training, post-training via fine-tuning and RL, and running evaluations on trained models)
  • Have prior experience developing tooling to support privacy-preserving ML (e.g., differential privacy in TF-Privacy or Opacus; a minimal sketch follows this list)
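
For reference, here is a minimal DP-SGD sketch using Opacus, one of the libraries named above. The model, data, and hyperparameters are toy assumptions, not Anthropic's actual training setup:

```python
# Minimal DP-SGD sketch with Opacus. Toy model, random data, and
# illustrative hyperparameters only.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
data = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32)

# make_private wraps all three so every optimizer step clips each
# per-sample gradient to max_grad_norm and adds Gaussian noise scaled
# by noise_multiplier (the DP-SGD algorithm).
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,
    max_grad_norm=1.0,
)

for x, y in loader:
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()

# The accountant tracks cumulative privacy loss across steps.
print("epsilon spent:", privacy_engine.get_epsilon(delta=1e-5))
```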

How we're different

  • We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
  • The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Job Summary

Company: Anthropic
Location: San Francisco, CA
Type: Full-Time
Level: Mid-level
Domain: Software Engineering