Software Engineer (ML infrastructure)
$60k - $85k • 0.10% - 0.50%
We’re building a strong software engineering team and are at an important inflection point. As early members of our engineering team, you’ll develop the early versions of our application, design and implement robust and scalable development practices, and help set the direction of our product, culture and company.
Skills: Torch/PyTorch, ML, Python, TensorFlow
About the role
At Neuro, we’ve built the API for serverless ML compute. We abstract away the entire infrastructure required to efficiently train and deploy your models so that ML engineers can focus on ML, not servers.
We’re building a strong engineering team and are at an important inflection point. Engineers are responsible for developing our application in accordance with our roadmap and customer needs, and for designing and implementing robust and scalable development practices. Engineers will set the direction of our product, culture and company.
What you’ll be working on
You’ll be involved in all aspects of integrating our API into the wider MLOps landscape. You might specialise in some areas more than others, but your work will likely include:
- Developing features to enhance and support our product and our customers (mostly using Python);
- Guiding the direction of Neuro’s API to meet our customers’ needs;
- Evangelising Neuro in the ML community; and
- Building tools to optimise internal performance.
Essential
- BS/MS/PhD in Computer Science or a related field;
- 4+ years of professional software engineering experience;
- Experience programming in Python;
- Experience with PyTorch, TensorFlow, and other deep learning and classical ML frameworks;
- Experience with products in the cloud infrastructure and MLOps landscape; and
- The existing right to work in the UK.
Not essential but nice to have
- Experience working with cloud technologies (Docker, Kubernetes, AWS, GCP, Azure, etc.);
- Experience with building large scale data pipelines (Kafka, Spark, Hadoop, Airflow, etc.);
- Experience contributing to open source projects and/or related communities; and
- Experience with software security and data sensitive applications.
The job is onsite at our lovely offices in Bath; however, some flexible working is allowed.
We originally created a chip architecture to optimise ML operations. We built it on top of FPGAs, with our own compiler and programming language to communicate with our custom hardware. We then moved on to building our current product: an API that enables ML engineers to train and deploy their models in a serverless manner. We work with Python and built our own infrastructure pipeline to enable serverless GPU compute. We care about using the right programming language for each application.