Data Engineer II

Boston, MA · Full-Time · Mid-level · Software Engineering


RESPONSIBILITIES:

  • Design, build, and operate scalable ELT pipelines using Python and PySpark, with a focus on reliability, performance, and maintainability
  • Own and improve batch and streaming data systems using Spark and Kafka, including monitoring and resolving production data issues
  • Develop and optimize Snowflake data models and DBT transformations to support analytics, experimentation, and trusted metrics
  • Partner with data scientists, analysts, and product teams to translate business requirements into well-designed data solutions
  • Contribute to the evolution of the data platform by improving observability, data quality, and engineering best practices
  • Leverage AI tools to accelerate development, improve code quality, and automate repetitive data engineering workflows

QUALIFICATIONS:

  • Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience
  • 3–5 years of professional experience building and operating ETL/ELT pipelines in production environments
  • Strong proficiency in SQL and hands-on experience with modern data warehousing concepts and dimensional modeling
  • Professional experience using Python for data engineering, including writing clean, testable, and reusable code
  • Experience with DBT for data modeling, testing, and documentation is preferred
  • Experience with Spark and Kafka for batch or streaming data processing is preferred
  • Strong problem-solving skills, clear communication, and the ability to work independently while collaborating in an agile environment
  • Comfort using AI tools such as Copilot or ChatGPT to improve efficiency throughout the software development lifecycle

Job Summary

Company: Whoop
Location: Boston, MA
Type: Full-Time
Level: Mid-level
Domain: Software Engineering