Platform Hardware Security
New York City, NY | San Francisco, CA | Seattle, WA | Washington, DC | Full-Time | Mid-level
About Anthropic
- Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the Role
- We're seeking a Platform Hardware Security Engineer to design and implement security architectures for bare-metal infrastructure. You'll work with teams across Anthropic to build firmware, bootloaders, operating systems, and attestation systems to ensure the integrity of our infrastructure from the ground up.
- This role requires expertise in low-level systems security and the ability to architect solutions that balance security requirements with the performance demands of training AI models across our massive fleet.
What you'll do
- Design and implement secure boot chains from firmware through OS initialization for diverse hardware platforms (CPUs, BMCs, switches, peripherals, and embedded microcontrollers)
- Architect attestation systems that provide cryptographic proof of system state from hardware root of trust through application layer
- Develop measured boot implementations and runtime integrity monitoring
- Create reference architectures and security requirements for bare-metal deployments
- Partner with infrastructure teams to integrate security controls without impacting training performance
- Prototype and validate security mechanisms before production deployment
- Conduct firmware vulnerability assessments and penetration testing
- Build firmware analysis pipelines for continuous security monitoring
- Document security architectures and maintain threat models
- Collaborate with software and hardware vendors to ensure security capabilities meet our requirements
Who you are
- 8+ years of experience in systems security, with at least 5 years focused on firmware, bootloader, and OS-level security
- Hands-on experience with secure boot, measured boot, and attestation technologies (TPM, Intel TXT, AMD SEV, ARM TrustZone)
- Strong understanding of cryptographic protocols and hardware security modules
- Experience with UEFI/BIOS or embedded firmware security, bootloader hardening, and chain of trust implementation
- Proficiency in low-level programming (C, Rust, Assembly) and systems programming
- Knowledge of firmware vulnerability assessment and threat modeling
- Track record of designing security architectures for complex, distributed systems
- Experience with supply chain security
- Ability to work effectively across hardware and software boundaries
- Knowledge of NIST firmware security guidelines and hardware security frameworks
Strong candidates may also have
- Experience with confidential computing technologies and hardware-based TEEs
- Knowledge of SLSA framework and software supply chain security standards
- Experience securing large-scale HPC or cloud infrastructure
- Contributions to open-source security projects (coreboot, CHIPSEC, etc.)
- Background in formal verification or security proof techniques
- Experience with silicon root of trust implementations
- Experience building foundational technical designs, providing operational leadership, and collaborating with vendors
- Previous work with AI/ML infrastructure security
Deadline to apply: None. Applications will be reviewed on a rolling basis.
- The annual compensation range for this role is listed below.
How we're different
- We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
- The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
