Aquarium helps deep learning teams improve their model performance by improving their datasets.
A model is only as good as the dataset it’s trained on. We help teams find problems with their datasets and models, then fix them by editing or adding data.
As a Frontend Platform Software Engineer on the Aquarium team, you’ll be responsible for the foundations of our core application: a webapp for machine learning teams to visualize their datasets and collaborate on operational workflows. To support data-intensive workflows in the browser, such as large-scale interactive visualizations and complex query systems, we need tooling and systems that balance performance with internal developer experience. You will also serve as a technical resource and mentor for other frontend developers, guiding them toward best practices and raising the bar for our frontend codebase.
Like all engineers on the team, you will also contribute to product feature development. You’ll talk to users, absorb feedback from the sales process, and move us towards a better product.
What you will do
What you should have
Machine learning is eating the world. But while it’s easier than ever to build a prototype of an ML system, it’s still extremely difficult to build, maintain, and improve ML systems that solve real-world problems in production. Aquarium helps teams ship better ML models faster to enable the next generation of revolutionary AI applications.
Aquarium is backed by top investors including Y Combinator and Sequoia Capital. Our customers span many industries, from robotics to agriculture to construction. We’re looking to grow our team with awesome people who’ll shape the future of Aquarium -- both as a product and as a company.
Aquarium’s technology lets your trained ML model do the work of guiding you to the parts of your dataset that deserve attention.
For example, Aquarium finds the examples where your model has the highest loss, i.e., the strongest disagreement with your labeled dataset. These tend to include many labeling errors, where the model is right and the label is wrong!
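To make that concrete, here is a minimal sketch of high-loss mining, assuming a PyTorch-style classifier; model, dataloader, and find_suspect_labels are illustrative names, not Aquarium’s actual API:

    import heapq

    import torch
    import torch.nn.functional as F

    def find_suspect_labels(model, dataloader, k=100):
        """Return the k examples where the model disagrees most with the labels."""
        model.eval()
        worst = []  # min-heap of (loss, example_id)
        with torch.no_grad():
            for ids, inputs, labels in dataloader:
                logits = model(inputs)
                # reduction="none" keeps one loss value per example
                losses = F.cross_entropy(logits, labels, reduction="none")
                for ex_id, loss in zip(ids, losses.tolist()):
                    heapq.heappush(worst, (loss, ex_id))
                    if len(worst) > k:
                        heapq.heappop(worst)  # drop the lowest-loss entry
        # Highest loss first: the most suspicious labels for human review.
        return sorted(worst, reverse=True)

The heap simply keeps the k highest-loss examples seen so far, so a human can review the most suspicious labels first instead of scanning the whole dataset.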
Users can also provide their model’s embeddings for each entry: an anonymized representation of what their model “thought” about the data. A neural network embedding encodes a datapoint’s input into a relatively short vector of floats, and by analyzing the distances between these vectors we can identify outliers and group together similar examples in a dataset. We also provide a thousand-foot-view visualization of the embedding space that lets users zoom into interesting parts of their dataset (https://youtu.be/DHABgXXe-Fs?t=139). We heavily use React, WebGL, Python, and Apache Beam in our day-to-day work.
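As a rough illustration of that distance-based analysis, here is a minimal sketch using scikit-learn; the analyze_embeddings helper and the threshold choices (99th-percentile cutoff, eps, min_samples) are assumptions for the example, not Aquarium’s actual pipeline:

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.neighbors import NearestNeighbors

    def analyze_embeddings(embeddings, ids, eps=0.5):
        """embeddings: (n_examples, d) float array, one row per datapoint."""
        # Outliers: points whose nearest neighbors are unusually far away.
        dists, _ = NearestNeighbors(n_neighbors=6).fit(embeddings).kneighbors(embeddings)
        mean_dist = dists[:, 1:].mean(axis=1)  # column 0 is the self-distance
        cutoff = np.percentile(mean_dist, 99)  # flag the most isolated 1%
        outliers = [i for i, d in zip(ids, mean_dist) if d > cutoff]

        # Groups: cluster points that sit close together in embedding space.
        labels = DBSCAN(eps=eps, min_samples=5).fit_predict(embeddings)
        clusters = {}
        for i, label in zip(ids, labels):
            if label != -1:  # DBSCAN marks noise points with -1
                clusters.setdefault(label, []).append(i)
        return outliers, clusters

The same distance structure drives both steps: a point far from all its neighbors is an outlier worth inspecting, and points packed tightly together form a group that can be reviewed (or relabeled) as a unit.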
Think of this as a platform for interactive learning. By focusing attention on the most “important” areas of the dataset, the ones the model consistently gets wrong, we give ML teams the leverage to sift through massive datasets and decide on the proper corrective action to improve their model performance.
Our goal is to build tools that reduce or eliminate the need for ML engineers to hand-hold the process of improving model performance through data curation: basically, Andrej Karpathy’s Operation Vacation concept (https://youtu.be/g2R2T631x7k?t=820) as a service.