We’re hiring a Founding ML Research Engineer to build the pre-training and post-training infrastructure behind some of the largest speech models in the world. You’ll own the training stack end-to-end with a small team, tons of compute, high autonomy, and a mandate to push toward 100B+ parameters as we scale generalist speech models.
What you’ll do
- Design and implement a production-grade training stack for large-scale speech model pre-training and post-training (SFT/RLHF-style, distillation, preference optimization, etc.).
- Build scalable data + compute pipelines: dataset curation, filtering, mixing, tokenization/feature pipelines, evaluation harnesses, and experiment tracking.
- Own distributed training: performance profiling, stability, fault tolerance, checkpointing, resumption, and high-throughput I/O.
What we’re looking for
- Strong ML systems and engineering depth (distributed training, performance, reliability).
- Practical experience training large models (speech/audio is a plus but not required; language/vision experience is also relevant).
- Comfort operating in ambiguity: you can spec, build, debug, and ship.
To apply
Reply with your best paper / blog post / arXiv link and a short note on what you’ve built and what you want to build next.