What You'll Do
- Work on Veeva Link’s next-gen Data Platform
- Improve our current environment with features, refactoring, and innovation
- Work with JVM-based languages or Python on Spark-based data pipelines
- Operate ML models in close cooperation with our data science team
- Experiment within your domain to improve precision, improve recall, or reduce costs
Requirements
- Expert skills in Java or Python
- Experience with Apache Spark or PySpark
- Experience writing software for the cloud (AWS or GCP)
- Fluent spoken and written English, enough to take part in day-to-day team conversations and contribute to deep technical discussions
Nice to Have
- Experience operating machine learning models (e.g., MLflow)
- Experience with data lakes, lakehouses, and warehouses (e.g., Delta Lake, Redshift)
- DevOps skills, including Terraform and general CI/CD experience
- Experience working in agile environments
- Experience with expert systems
Perks & Benefits
- Comprehensive benefits package
- Fitness reimbursement
- Veeva Work-Anywhere
