TRM Labs provides blockchain analytics and AI solutions to help law enforcement and national security agencies, financial institutions, and cryptocurrency businesses detect, investigate, and disrupt crypto-related fraud and financial crime. TRM’s blockchain intelligence and AI platforms include solutions to trace the source and destination of funds, identify illicit activity, build cases, and construct an operating picture of threats. Leading agencies and businesses worldwide rely on TRM to enable a safer, more secure world for all.
At TRM, we’re on a mission to build a safer financial system for billions of people around the world. Our next-generation platform, which combines threat intelligence with machine learning, enables financial institutions and governments to detect cryptocurrency fraud and financial crime at an unprecedented scale.
As a Senior Software Engineer, ML Infrastructure at TRM Labs, you will collaborate with data scientists, engineers, and product managers to design and operate scalable GPU-backed infrastructure that powers TRM’s AI systems. You will work at the intersection of distributed systems, cloud infrastructure, GPU performance engineering, and applied machine learning — building the foundation that enables high-throughput, production-grade ML workloads.
Design and operate GPU cluster infrastructure.
Build and manage GPU-backed environments in cloud settings, including orchestration, autoscaling, resource isolation, and workload management across multiple concurrent models and users.
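For a concrete flavor of the orchestration layer, here is a minimal sketch using the official Kubernetes Python client to stand up a GPU-backed model server, assuming a Kubernetes-based stack with the NVIDIA device plugin; the image, namespace, and resource sizes are placeholders:

```python
# Minimal sketch: deploy a GPU-backed model server with resource isolation.
# Assumes a Kubernetes cluster with the NVIDIA device plugin installed.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

container = client.V1Container(
    name="model-server",
    image="ghcr.io/example/model-server:latest",  # placeholder image
    resources=client.V1ResourceRequirements(
        # Requesting whole GPUs gives each replica an isolated device.
        requests={"nvidia.com/gpu": "1", "cpu": "4", "memory": "32Gi"},
        limits={"nvidia.com/gpu": "1", "memory": "32Gi"},
    ),
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="llm-server"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # an autoscaler would adjust this from load or queue depth
        selector=client.V1LabelSelector(match_labels={"app": "llm-server"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "llm-server"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="ml", body=deployment)
```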
Optimize high-throughput inference.
Implement and tune serving systems that maximize token throughput, batching efficiency, GPU occupancy, and cost effectiveness across interactive and batch workloads.
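As an illustration of the batching side of this work, a minimal dynamic-batching loop with a stand-in model; the batch cap and flush timeout are placeholder values that would be tuned against real latency and throughput targets:

```python
import queue
import threading
import time

MAX_BATCH = 32      # cap batch size to bound latency and fit GPU memory
MAX_WAIT_S = 0.010  # flush partial batches after 10 ms so interactive
                    # requests are not starved while waiting for a full batch

requests: "queue.Queue[tuple[str, queue.Queue]]" = queue.Queue()

def fake_model(prompts):
    # Stand-in for a real batched forward pass on the GPU.
    return [p[::-1] for p in prompts]

def batching_loop():
    while True:
        batch, replies = [], []
        prompt, reply_q = requests.get()  # block until the first request
        batch.append(prompt); replies.append(reply_q)
        deadline = time.monotonic() + MAX_WAIT_S
        while len(batch) < MAX_BATCH:
            timeout = deadline - time.monotonic()
            if timeout <= 0:
                break
            try:
                prompt, reply_q = requests.get(timeout=timeout)
            except queue.Empty:
                break
            batch.append(prompt); replies.append(reply_q)
        for out, reply_q in zip(fake_model(batch), replies):
            reply_q.put(out)

threading.Thread(target=batching_loop, daemon=True).start()

def infer(prompt: str) -> str:
    reply_q: queue.Queue = queue.Queue(maxsize=1)
    requests.put((prompt, reply_q))
    return reply_q.get()

print(infer("hello"))
```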
Enable distributed inference strategies.
Support and operationalize model parallelism, tensor parallelism, and other distributed serving patterns for large-scale models.
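One common serving pattern, sketched here with vLLM (named in the tooling below), shards a model across GPUs via tensor parallelism; the model name and GPU count are placeholders:

```python
# Minimal sketch: tensor-parallel serving with vLLM. tensor_parallel_size
# shards each layer's weights across the GPUs, whereas pipeline parallelism
# would instead split the model into sequential groups of layers.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-70B-Instruct", tensor_parallel_size=4)
params = SamplingParams(temperature=0.0, max_tokens=256)
outputs = llm.generate(["Summarize this transaction graph:"], params)
print(outputs[0].outputs[0].text)
```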
Implement model optimization and compilation workflows.
Integrate and optimize acceleration stacks such as TensorRT, ONNX Runtime, vLLM, FlashAttention, and related tooling to improve performance and reduce inference cost.
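For example, a minimal ONNX Runtime session that prefers TensorRT acceleration and falls back to CUDA, then CPU; the model path, input shape, and single-output assumption are placeholders:

```python
import numpy as np
import onnxruntime as ort

# Providers are tried in order: TensorRT if available, then CUDA, then CPU.
sess = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
input_name = sess.get_inputs()[0].name
batch = np.random.rand(8, 3, 224, 224).astype(np.float32)  # assumed shape
(logits,) = sess.run(None, {input_name: batch})  # assumes a single output
print(logits.shape)
```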
Schedule heterogeneous workloads.
Design systems that manage multiple models, multiple users, and mixed workload types across heterogeneous accelerators (e.g., NVIDIA GPUs, Inferentia), ensuring predictable performance under varying demand.
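A toy sketch of the placement problem: a greedy priority scheduler that best-fits jobs onto a heterogeneous accelerator pool. Everything here (job names, memory figures, accelerator kinds) is hypothetical:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                            # lower value = more urgent
    name: str = field(compare=False)
    mem_gb: int = field(compare=False)
    kinds: frozenset = field(compare=False)  # accelerators this job supports

@dataclass
class Accelerator:
    name: str
    kind: str        # e.g. "nvidia-h100" or "inferentia2"
    free_gb: int

def schedule(jobs, pool):
    """Greedy priority scheduler: urgent jobs first, tightest-fit placement."""
    heap = list(jobs)
    heapq.heapify(heap)
    placements = []
    while heap:
        job = heapq.heappop(heap)
        candidates = [a for a in pool
                      if a.kind in job.kinds and a.free_gb >= job.mem_gb]
        if not candidates:
            placements.append((job.name, None))  # queue for a later retry
            continue
        best = min(candidates, key=lambda a: a.free_gb)  # tightest fit
        best.free_gb -= job.mem_gb
        placements.append((job.name, best.name))
    return placements

pool = [Accelerator("gpu-0", "nvidia-h100", 80),
        Accelerator("inf-0", "inferentia2", 32)]
jobs = [Job(0, "chat-7b", 16, frozenset({"nvidia-h100", "inferentia2"})),
        Job(1, "batch-embed", 24, frozenset({"inferentia2"})),
        Job(0, "rerank-70b", 70, frozenset({"nvidia-h100"}))]
print(schedule(jobs, pool))
```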
Build observability into ML infrastructure.
Instrument systems to measure GPU load, memory utilization, batching efficiency, queue depth, and token throughput, and use data to continuously improve performance and reliability.
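A minimal instrumentation sketch, assuming NVIDIA GPUs, the nvidia-ml-py (pynvml) bindings, and a Prometheus scrape endpoint; metric names, port, and poll interval are placeholders:

```python
import time

import pynvml
from prometheus_client import Gauge, start_http_server

GPU_UTIL = Gauge("gpu_utilization_percent", "SM utilization", ["gpu"])
GPU_MEM = Gauge("gpu_memory_used_bytes", "Device memory in use", ["gpu"])

pynvml.nvmlInit()
start_http_server(9400)  # expose /metrics for Prometheus to scrape

while True:  # long-running exporter loop
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        GPU_UTIL.labels(gpu=str(i)).set(util.gpu)
        GPU_MEM.labels(gpu=str(i)).set(mem.used)
    time.sleep(5)
```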
Partner across engineering teams.
Work closely with infrastructure, ML, and product teams to ensure models transition smoothly from experimentation to production-grade, highly available services.
We are building a safer world. That promise shows up in how we work every day.
TRM runs fast. Really fast. We’re a high-velocity, high-ownership team that expects clarity, follow-through, and impact. People who thrive here are energized by hard problems, experimentation, and direct feedback. If something takes months elsewhere, it often ships here in days.
That pace isn’t for everyone. If you are optimizing primarily for consistent work-life balance, use the interview process to pressure-test fit. We want teammates who thrive here, not just survive here.
AI fluency is a baseline expectation at TRM.
We believe AI meaningfully changes how top performers operate. We expect every team member to use AI to accelerate and reimagine their craft, not just automate surface tasks.
At TRM, AI fluency means you are among the top 10 percent of operators in your function in how you apply AI to your work.
You will be evaluated on applied AI fluency during the interview process.
We hire and grow against three leadership principles. They’re the standards for how we operate, treat each other, and make decisions.
Learn more: Interviewing at TRM: How We Hire and What Success Looks Like
This work has real stakes, and what your week looks like will depend on your role at TRM.
At TRM we care deeply about our craft. We are looking for individuals who want their work to matter, who experiment with speed and rigor, and who take pride in building a safer world for billions of people. If you’re excited by TRM’s mission but don’t check every box, we encourage you to apply — we hire for slope, judgment, and the will to learn fast.
TRM is a Series C company with $220M in total funding, backed by Blockchain Capital, Goldman Sachs, Bessemer, Y Combinator, Thoma Bravo, and others. Headquartered in San Francisco, TRM operates as a distributed-first company with hubs in Los Angeles, San Francisco, New York, Washington D.C., London, and Singapore.
TRM Labs does not accept unsolicited agency resumes. Please do not forward resumes to TRM employees. TRM Labs is not responsible for any fees related to unsolicited resumes and will not pay fees to any third-party agency or company without a signed agreement.
By submitting your application, you are agreeing to allow TRM to process your personal information in accordance with the TRM Privacy Policy.
Learn More: Company Values | Interviewing | FAQs