Location: San Francisco
Employment Type: Full-time
Department: Operations
Prime Intellect is building the open superintelligence stack - from frontier agentic models to the infra that enables anyone to create, train, and deploy them. We aggregate and orchestrate global compute into a single control plane and pair it with the full RL post-training stack: environments, secure sandboxes, verifiable evals, and our async RL trainer. We enable researchers, startups and enterprises to run end-to-end reinforcement learning at frontier scale, adapting models to real tools, workflows, and deployment contexts.
We recently raised $15M ($20M raised in total) led by Founders Fund, with participation from Menlo Ventures and prominent angels including Andrej Karpathy (Eureka Labs, Tesla, OpenAI), Tri Dao (Together AI), Dylan Patel (SemiAnalysis), Clem Delangue (Hugging Face), Emad Mostaque (Stability AI) and many others.
You will own revenue operations at Prime Intellect — the systems, processes, and customer coordination that turn complex, usage-based AI deployments into accurate billing, predictable forecasting, and durable customer relationships.
This is not a traditional RevOps role. Our product is GPU compute and RL training infrastructure. You need to be as comfortable reading Grafana dashboards and interpreting cluster utilization metrics as you are reconciling invoices and building pipeline reports. You will work directly with customers on billing questions, usage transparency, and operational issues — and you'll build the internal systems that make all of this scale.
You'll be the connective tissue between Sales, Engineering, Finance, and customers. If a customer's training run hits a capacity issue at 2am and it affects their bill, you're the person who understands both the technical context and the commercial implications.
Responsibilities:
Own the revenue data model: pipeline, contracts, compute usage, credit balances, and revenue recognition
Build and maintain billing infrastructure for usage-based and hybrid pricing (committed capacity, credit rollover, burst pricing)
Ensure invoicing accuracy by reconciling platform telemetry with contractual terms
Evaluate and evolve the GTM tech stack (CRM, billing, data warehouse) as we scale
Monitor compute telemetry, Grafana dashboards, and observability platforms to understand customer workload patterns and flag anomalies
Translate infrastructure events (downtime, capacity changes, migrations) into commercial impact and customer communications
Work with Engineering to connect platform signals directly to billing and expansion workflows
Serve as a direct point of contact for customers on billing, usage questions, SLA tracking, and operational issues
Manage contract amendments, renewal workflows, and expansion motions
Build the handoff protocols between Sales, Solutions Engineering, and Finance so nothing falls through the cracks
Build forecasting models that incorporate usage patterns, committed capacity, and expansion signals
Standardize reporting: pipeline health, compute GMV, margin by product line
Partner with Finance on revenue recognition, board reporting, and planning
Requirements:
4–7 years in Revenue Operations, Sales Operations, or Customer Operations at B2B infrastructure, cloud, or AI companies
Comfort with technical systems: you can navigate Grafana, read logs, understand compute utilization metrics, and talk to engineers without a translator
Direct experience with usage-based or consumption-based billing and pricing
High ownership — you see gaps and build the fix before anyone asks
Comfortable working directly with customers, not just internal stakeholders
AI-native in how you work: you use LLMs, automation, and programmatic tools to move faster and build systems that don't require manual upkeep
Bonus:
Experience at an AI infrastructure company, cloud provider, or compute marketplace
Familiarity with GPU economics, training job costing, or inference pricing
You've been the first RevOps hire and built the function from scratch
Benefits:
Competitive compensation + meaningful equity
Flexible work (remote or San Francisco)
Visa sponsorship and relocation support
Professional development budget
Team off-sites and conferences
A front-row seat to building the infrastructure layer for open AI
You’ll join a mission-driven team working at the frontier of open superintelligence infrastructure. In this role, you’ll have the opportunity to:
Shape the evolution of agent-driven solutions—from research breakthroughs to production systems used by real customers.
Collaborate with leading researchers, engineers, and partners pushing the boundaries of RL and post-training.
Grow with a fast-moving organization where your contributions directly influence both the technical direction and the broader AI ecosystem.