26 March 2026

Location

Remote

Employment Type

Full time

Department

Research

Building Open Superintelligence Infrastructure

Prime Intellect is building the open superintelligence stack — from frontier agentic models to the infrastructure that enables anyone to create, train, and deploy them. We aggregate and orchestrate global compute into a single control plane and pair it with the full RL post-training stack: environments, secure sandboxes, verifiable evals, and our async RL trainer. We enable researchers, startups, and enterprises to run end-to-end reinforcement learning at frontier scale, adapting models to real tools, workflows, and deployment contexts.

As a Research Engineer — RL Infrastructure, you’ll shape the core systems that power large-scale reinforcement learning: distributed training, environment orchestration, and the end-to-end pipeline from reward signal to deployed model. If you love building reliable, high-throughput systems at the frontier of RL, this role is for you.

Responsibilities

  • Design and build scalable RL training infrastructure — async trainers, environment orchestration, reward pipelines — across large GPU clusters.

  • Optimize performance, cost, and resource utilization of RL workloads using state-of-the-art compute and memory optimization techniques.

  • Contribute to our open-source libraries and frameworks for distributed RL training.

  • Publish research at top-tier venues (ICML, NeurIPS).

  • Write clear, approachable technical content distilling complex systems work for customers and the broader community.

  • Stay current with advances in RL systems, distributed training, and ML infrastructure, and proactively identify opportunities to enhance our platform.

Requirements

  • Strong background in ML engineering, with hands-on experience building and scaling RL or large model training pipelines end-to-end.

  • Deep expertise in distributed training techniques and frameworks (e.g., PyTorch Distributed, DeepSpeed, vLLM, Ray) including data, tensor, and pipeline parallelism.

  • Experience with RL-specific infrastructure: environment management, rollout workers, reward model serving, or online/async training loops.

  • Solid understanding of MLOps best practices — experiment tracking, model versioning, CI/CD.

  • Passion for advancing open, scalable RL infrastructure and democratizing access to frontier AI capabilities.

  • Not familiar with everything above? If you feel you can contribute to our mission and bring high energy, get familiar with these resources (here, here, and here) and please reach out!

Benefits & Perks

  • Competitive compensation including equity, aligning your success with Prime Intellect’s growth and impact.

  • Flexible work arrangements — remote or in-person at our San Francisco office.

  • Visa sponsorship and relocation assistance for international candidates.

  • Quarterly team offsites, hackathons, conferences, and learning opportunities.

  • A talented, hard-working, mission-driven team united by a shared passion for accelerating AI research.

We recently raised $15M led by Founders Fund (total $20M+), with participation from Menlo Ventures and prominent angels including Andrej Karpathy, Tri Dao, Dylan Patel, Clem Delangue, Emad Mostaque, and others.

If you’re excited about building the infrastructure layer for the future of reinforcement learning at scale, we’d love to hear from you.

