United States
Full time
Remote
Engineering · AI & Machine Learning
This is the target annual salary range for this role. This range is not inclusive of additional compensation elements, such as our Bonus program, Equity program, Wellness allowance, and other benefits [US Only] (including medical, dental, vision and 401(k)).
The compensation range provided is influenced by various factors and represents the initial target range. Our salary offerings are dynamic, and we strive to ensure that our base salary and total compensation package align with and recognize the top talent we aim to attract and retain. The compensation package of the successful candidate is based on various factors such as their skill set, experience, and job scope.
Our Krakenites are a world-class team with crypto conviction, united by our desire to discover and unlock the potential of crypto and blockchain technology.
What makes us different?
Kraken is a mission-focused company rooted in crypto values. As a Krakenite, you’ll join us on our mission to accelerate the global adoption of crypto, so that everyone can achieve financial freedom and inclusion. For over a decade, Kraken’s focus on our mission and crypto ethos has attracted many of the most talented crypto experts in the world.
Before you apply, please read the Kraken Culture page to learn more about our internal culture, values, and mission. We also expect candidates to familiarize themselves with the Kraken app. Learn how to create a Kraken account here.
As a fully remote company, we have Krakenites in 70+ countries who speak over 50 languages. Krakenites are industry pioneers who develop premium crypto products for experienced traders, institutions, and newcomers to the space. Kraken is committed to industry-leading security, crypto education, and world-class client support through our products like Kraken Pro, Desktop, Wallet, and Kraken Futures.
Become a Krakenite and build the future of crypto!
Kraken is building a dedicated AI Compute and Infrastructure team to power the next generation of model training, inference, evaluation, and experimentation across the exchange. This team sits within engineering leadership and owns the infrastructure layer that lets Kraken run AI workloads with control, speed, reliability, and cost discipline.
The team is responsible for GPU and accelerator infrastructure, cluster operations, scheduling, model serving, observability, capacity planning, and cost-efficient compute at scale. This is the backbone that allows Kraken to train, serve, evaluate, and iterate on AI systems in-house where it matters for privacy, latency, reliability, cost, or product differentiation.
You will join a small, senior, high-impact team working directly with AI/ML researchers, platform engineers, security teams, and product teams. The mandate is simple: make Kraken’s AI ambitions real by building compute infrastructure that is fast, dependable, efficient, and production-grade.
Own and operate GPU and accelerator clusters used for training, inference, evaluation, and experimentation, including drivers, runtimes, kernels, device plugins, node configuration, scheduling primitives, and workload isolation.
Design infrastructure that enables Kraken teams to run models locally on GPUs where it is strategically and economically preferable, reducing unnecessary dependency on external providers and containing compute costs.
Build and improve scheduling, orchestration, placement, quota management, and utilization systems across heterogeneous accelerator environments.
Optimize inference pipelines for latency, throughput, reliability, memory efficiency, and cost using frameworks such as vLLM, Triton Inference Server, TensorRT, or equivalent serving stacks.
Partner with ML engineers and researchers to remove bottlenecks in training, evaluation, batch inference, online inference, deployment, and production debugging workflows.
Build observability for GPU utilization, memory pressure, queue depth, saturation, token throughput, request latency, failed workloads, capacity pressure, and spend.
Drive reliability, incident response, alerting, runbooks, and post-incident improvements for always-on AI compute infrastructure.
Evaluate and integrate new hardware, cloud instance families, specialized accelerators, runtimes, schedulers, and serving frameworks as the AI infrastructure landscape evolves.
Build tooling that makes GPU usage visible, accountable, and easier for internal teams to consume without needing to become infrastructure experts.
Contribute to long-term architecture decisions that balance performance, cost efficiency, scalability, operational simplicity, and production safety.
5+ years of infrastructure engineering experience, with significant time spent on GPU compute, ML infrastructure, distributed systems, high-performance computing, or large-scale production platforms.
Hands-on experience operating GPU clusters or accelerator-backed infrastructure in production or production-like environments, including scheduling, orchestration, utilization monitoring, and cost optimization.
Strong systems engineering fundamentals across Linux, networking, storage, containers, Kubernetes, distributed runtimes, and production debugging.
Experience with ML serving frameworks such as vLLM, Triton Inference Server, TensorRT, TorchServe, KServe, Ray Serve, or equivalent systems.
Proficiency in Python for infrastructure automation, tooling, debugging, integration, and operational workflows.
Practical understanding of performance tradeoffs across batching, concurrency, memory usage, GPU utilization, model size, latency, throughput, availability, and cost.
Track record of optimizing compute costs while maintaining clear performance, reliability, and availability expectations.
Experience building observable systems with useful metrics, logs, traces, dashboards, alerts, and incident workflows.
Comfortable working in high-stakes, always-on environments where uptime, throughput, correctness, and operational discipline are critical.
Clear communicator who can translate infrastructure tradeoffs for researchers, product teams, platform engineers, security stakeholders, and engineering leadership.
Experience at a frontier AI lab, hyperscaler, high-frequency trading firm, research platform, or high-scale ML organization.
Familiarity with custom silicon or specialized accelerators such as TPUs, AWS Trainium, Gaudi, or similar platforms.
Background in capacity planning, procurement input, reserved capacity strategy, cloud accelerator economics, or GPU fleet cost management.
Experience with distributed training frameworks such as DeepSpeed, Megatron-LM, FSDP, Ray, or equivalent systems.
Experience debugging CUDA, NCCL, kernel, driver, runtime, memory, networking, or low-level performance issues.
Experience with Rust, C++, Go, CUDA, or other systems languages used for performance-critical infrastructure.
Crypto, financial services, trading infrastructure, or security-sensitive production infrastructure experience.
Unless a specific application deadline is stated in the job posting, applications are accepted on an ongoing basis.
Please note, applicants are permitted to redact or remove information on their resume that identifies age, date of birth, or dates of attendance at or graduation from an educational institution.
We consider qualified applicants with criminal histories for employment on our team, assessing candidates in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance.
Kraken is powered by people from around the world and we celebrate all Krakenites for their diverse talents, backgrounds, contributions and unique perspectives. We hire strictly based on merit, meaning we seek out the candidates with the abilities, knowledge, and skills most suitable for the job. We encourage you to apply for roles where you don’t fully meet the listed requirements, especially if you’re passionate or knowledgeable about crypto!
We may ask candidates to complete job-related skills or work-style assessments as part of our hiring process. These assessments are designed to evaluate competencies relevant to the role and are applied consistently across candidates for similar positions. Assessment results are considered alongside other relevant information, such as experience and interviews, and are not the sole basis for any employment decision.
As an equal opportunity employer, we don’t tolerate discrimination or harassment of any kind, whether based on race, ethnicity, age, gender identity, citizenship, religion, sexual orientation, disability, pregnancy, veteran status or any other protected characteristic as outlined by federal, state or local laws.
Compensation Range: $127.2K – $254.4K