27 May 2025

About Squads

Since 2021, we’ve built products and APIs that unlock the benefits of modern money.

We began with Squads Protocol, the autonomous finance layer built on Solana. Secured by its permissionless ledger, the protocol enables programmable payments, 24/7 USD liquidity, competitive yields, and security enforced by deterministic code, not corporate promises.

The new financial stack is being built on blockchains, powered by tokenized assets, and designed for a world where value moves as freely as information. We build on Squads Protocol to bring this stack to every developer, business, and individual.

Stablecoins were once a niche experiment. Today, they’re one of the fastest-growing technologies in history. The question isn’t whether stablecoins will consume every aspect of finance, but when. We’re here to compress that timeline. Join us.

About the Role

Squads is seeking a highly driven and technically exceptional Data Engineer to own and evolve our data infrastructure as we scale. This is a foundational role where your work will directly impact how we prioritize product development, optimize security and compliance, and understand our customers’ behavior.

You’ll be responsible for designing and maintaining the data architecture, building robust pipelines, and collaborating across teams to uncover meaningful insights. If you thrive in a fast-paced environment, enjoy making sense of complex data, and are passionate about architecting solutions that drive decisions, this is your opportunity to have real impact at the core of a protocol-first company.

What you will be doing

  • Design, implement, and maintain scalable and secure data infrastructure
  • Select appropriate data storage technologies and implement custom solutions
  • Monitor and optimize data flow and processing performance across systems
  • Aggregate and store structured and unstructured data from internal systems, external APIs, and blockchain networks
  • Ensure data integrity and accuracy through cleaning, validation, and normalization
  • Manage connections to services such as Amazon S3, external databases, and data lakes
  • Build models to detect behavior patterns, usage trends, and market dynamics
  • Implement predictive analytics for risk forecasting and customer engagement
  • Automate data processing workflows using tools like Apache Airflow and Python-based ETL pipelines (a sketch of such a pipeline follows this list)
  • Create reusable scripts and processes to streamline data ingestion and transformation
  • Support real-time data processing and alerting via systems like Kafka or Pub/Sub (see the streaming sketch below)
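
For illustration, here is a minimal sketch of the kind of Airflow-based ETL automation described above, assuming Airflow 2.x’s TaskFlow API. The endpoint, field names, and target table are hypothetical placeholders, not Squads systems:

    from datetime import datetime

    import requests
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
    def usage_events_etl():
        @task
        def extract() -> list[dict]:
            # Hypothetical source endpoint; swap in the real system of record.
            resp = requests.get("https://api.example.com/v1/usage-events", timeout=30)
            resp.raise_for_status()
            return resp.json()

        @task
        def transform(events: list[dict]) -> list[dict]:
            # Drop malformed rows and normalize field names before loading.
            return [
                {"account": e["account_id"], "kind": e.get("event_type"), "ts": e["timestamp"]}
                for e in events
                if e.get("account_id") and e.get("timestamp")
            ]

        @task
        def load(rows: list[dict]) -> None:
            # Placeholder sink: in practice this would write to a warehouse
            # such as ClickHouse or BigQuery via its client library.
            print(f"loading {len(rows)} rows into analytics.usage_events")

        load(transform(extract()))

    usage_events_etl()

Each @task becomes a separate, retryable step in the DAG, which is what makes pipelines like this observable and easy to re-run.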
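
On the streaming side, here is a minimal consumer sketch assuming the kafka-python client; the topic name, broker address, and payload fields ("signature", "amount") are likewise hypothetical:

    import json

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "transactions",                      # hypothetical topic name
        bootstrap_servers="localhost:9092",  # placeholder broker address
        # Deserialize each message's raw bytes into a Python dict.
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        event = message.value
        # Light validation before forwarding downstream, e.g., into a
        # warehouse table or an alerting rule.
        if event.get("signature") and event.get("amount") is not None:
            print(event["signature"], float(event["amount"]))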

Qualifications

  • 5+ years of experience in data engineering, backend engineering, or a related technical role
  • Deep understanding of data structures, algorithms, and large-scale data processing
  • Experience with serialized data formats (e.g., Protobuf, Avro, JSON, MessagePack)
  • Proficient in building pipelines using tools like Kafka, RabbitMQ, or Pub/Sub
  • Strong knowledge of cloud platforms (AWS, GCP, Azure) and distributed systems
  • Hands-on experience with SQL and modern data warehouses (e.g., ClickHouse, Snowflake, BigQuery)
  • Fluency in Python, with working knowledge of Scala, Rust (bonus), or JavaScript
  • Familiar with ETL tools like Talend, Hevo, or custom script automation
  • Comfortable with basic data visualization using Excel, Looker, Tableau, or Google Analytics
  • Knowledge of machine learning workflows and data science collaboration practices
  • Experience working with blockchain data, especially on Solana (a brief sketch follows this list)
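
As a sketch of what pulling Solana data can look like, the query below calls the public JSON-RPC method getSignaturesForAddress with plain requests; the address is a placeholder to be replaced with a real account or program ID:

    import requests

    # Fetch the ten most recent transaction signatures for an address from
    # Solana's public mainnet RPC endpoint (rate-limited; fine for a sketch).
    resp = requests.post(
        "https://api.mainnet-beta.solana.com",
        json={
            "jsonrpc": "2.0",
            "id": 1,
            "method": "getSignaturesForAddress",
            "params": ["<account-or-program-address>", {"limit": 10}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    for entry in resp.json()["result"]:
        print(entry["signature"], entry["slot"], entry.get("blockTime"))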

Nice to have

  • Familiarity with smart contracts, consensus mechanisms, and cryptographic methods
  • Contributions to open source projects
  • Proficiency in Rust

Employment Type

On-site
