Key Responsibilities
- Build and optimise scalable, maintainable, and high-performance data pipelines and workflows.
- Ingest, transform, and deliver high-volume data from a range of structured and unstructured sources, including MySQL databases and real-time Kafka streams.
- Design and maintain performant data models in Redshift, and optimise SQL queries to support analytics and reporting workloads.
- Contribute to our ongoing cloud migration and help evolve the data architecture with new ideas and improvements.
- Collaborate with cross-functional, globally distributed teams to translate business requirements into robust technical solutions.
- Embed data quality, reliability, observability, and security as first-class principles across the data platform.
- Support and enhance data solutions operating in a mission-critical payments environment.
- Continuously learn, share knowledge, and help raise standards across data governance, quality, and security practices.
Requirements
- 4–6 years of professional experience in Data Engineering or a related role.
- Strong SQL expertise, including query optimisation, complex transformations, and data modelling.
- Solid Python skills for data engineering and pipeline development.
- Hands-on experience with AWS services such as Redshift, Lambda, Glue, and S3, as well as Airflow for orchestration.
- Familiarity with modern transformation and modelling practices using tools such as dbt.
- A collaborative mindset, strong problem-solving skills, critical thinking, and a willingness to take ownership and initiative.
- Experience contributing to large-scale cloud migration initiatives.
- Knowledge of real-time data streaming using Kafka.
- Exposure to CI/CD pipelines and infrastructure-as-code for data platforms.
- Interest in exploring AI/ML use cases within modern data ecosystems.
- Understanding of the payments domain or fintech data challenges.
