4 December 2025

Responsibilities
  • Contribute to data engineering initiatives within an experienced data organization
  • Explore and analyze connected datasets to identify patterns and develop high-quality data models
  • Partner with the data engineering group to refine and optimize transformation workflows
  • Design, build, and operationalize large-scale data solutions using AWS services and third-party technologies (Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue, Snowflake)
  • Build production data pipelines covering ingestion through consumption using SQL and Python
  • Implement data engineering, ingestion, and curation functions on AWS using native or custom tooling
  • Lead proofs of concept and guide the transition of validated solutions into scalable production environments across engineering, deployment, and commercialization
  • Collaborate with analytics teams that use Looker, QuickSight, and Q to deliver clean, reliable datasets

Requirements
  • 2+ years of data engineering experience
  • Experience with Python for data engineering work involving ETL/ELT pipelines and related components
  • Proficiency with SQL, Python, and other data-focused languages
  • Ability to design scalable solutions, evaluate emerging data technologies and anticipate new trends to address complex challenges
  • Strong communication skills in both spoken and written English

Nice to Have
  • Startup experience
  • Familiarity with Snowflake
  • Familiarity with AWS
  • Experience with dbt, Dagster, Apache Iceberg, or Infrastructure as Code
  • Knowledge of scalable data lake and streaming patterns
  • Bachelor’s Degree in Computer Engineering, Computer Science, or equivalent

Employment Type
Remote
