Remote
Full time
Engineering
OXIO is the world’s first telecom-as-a-service (TaaS) platform. We are democratizing telecom, making it easy for brands and enterprises to fully own and operate proprietary mobile networks designed to support their own customers’ needs. Our TaaS solution combines multiple existing networks into a single platform that can be seamlessly managed in the cloud as a modern SaaS offering. And it gets better – with full network access comes unparalleled business intelligence and insights that help enterprises better understand customer and machine (M2M) behavior. With a continuous focus on innovation, OXIO lets any company build a powerful telecom presence while gleaning unique customer insights like never before.
Job Description:
We’re seeking a seasoned Staff Data Engineer to lead the design, development, and scaling of our modern data platform. This role is ideal for someone who thrives on designing and building robust data systems. You’ll be instrumental in shaping our data infrastructure, driving governance, and building scalable APIs that power real-time and batch analytics.
You will also play a pivotal role in evaluating, designing, and migrating our organization toward a North Star data architecture—a future-proof foundation that supports scalable, secure, and intelligent data operations across the enterprise.
This role supports a wide range of analytics use cases across telecom networking, product intelligence, financial reporting, and internal/external data insights. You’ll help build a comprehensive Customer 360 platform powered by ML models and behavioral data, enabling advanced use cases such as fraud detection, brand intelligence, and personalized customer engagement.
Key Responsibilities
Help build, maintain, and scale the data pipelines that consolidate data from various internal and external systems into our data warehouse.
Partner with internal stakeholders to understand analysis needs and consumption patterns.
Partner with upstream engineering teams to enhance data logging patterns and best practices.
Participate in architectural decisions and help us plan for the company’s data needs as we scale.
Adopt and evangelize data engineering best practices for data processing, modeling, and lake/warehouse development.
Advise engineers and other cross-functional partners on how to most efficiently use our data tools.
Develop data solutions through hands-on coding.
Required Qualifications:
15+ years of experience as a data engineer and/or analytics engineer building large-scale data platforms and scalable data warehouses.
10+ years of hands-on experience coding in Python, Spark, and SQL to build and maintain data pipelines.
5+ years of experience working with AWS, dbt, and data warehouses such as Snowflake, Databricks, BigQuery, or Redshift.
Proficient with Dimensional Modeling (Star Schema, Kimball, Inmon) and Data Architecture concepts.
Advanced SQL skills (comfortable with window functions and defining UDFs).
Experience in implementing real-time and batch data pipelines with strict SLOs, and optimizing data storage and access patterns.
Proven track record of enhancing data reliability, discoverability, and observability.
Hands-on experience with data visualization and BI tools (Tableau, Power BI, Cube.dev).
Strong grasp of data governance, compliance frameworks, and secure data sharing.
Good understanding of open table/storage formats such as Apache Hudi, Delta Lake, or Apache Iceberg.
Demonstrated success in leading large-scale projects across teams and mentoring others in Data Engineering best practices.
Experience building operational tools and defining best practices to improve operational efficiency and developer experience—including evaluating and adopting AI-based tools for engineering productivity.
Nice To Haves:
Experience building streaming applications or pipelines using async messaging services or distributed streaming platforms like Apache Kafka
Knowledge of Airflow or another orchestration tool
Hands-on experience with event-driven architecture and streaming data processing frameworks such as Kafka, Spark, or Flink
Experience with time-series databases such as ClickHouse or InfluxDB
Familiarity with infrastructure tooling such as Terraform or Pulumi, and experience working with Kubernetes
What We Offer:
Competitive salary and stock option incentive program
Company paid healthcare
Flexible work arrangements
Company-sponsored team lunches and company retreats
International organization that enables you to work across boundaries, travel to different locations, and enjoy the dynamics of a rapidly growing startup
A diverse and inclusive team.
We welcome applicants from all backgrounds to apply regardless of race, ethnicity, age, disability status or other defining characteristics.
OXIO is committed to fostering a diverse, equitable, and inclusive workplace where all employees feel valued and empowered. We believe that diversity in all its forms drives innovation and creativity, and we strive to create an environment that reflects and celebrates the unique backgrounds, perspectives, and talents of our employees. We do not discriminate against any employee or applicant for employment based on race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), national origin, age, disability, genetic information, veteran status, or any other characteristic protected by applicable federal, state, or local law. We are dedicated to providing a work environment free from discrimination and harassment, and we are committed to ensuring that all qualified applicants have an equal opportunity to apply for job openings. If you need assistance or an accommodation due to a disability, please contact us at hr@oxio.io for more information.