Aquarium helps deep learning teams improve their model performance by improving their datasets.
A model is only as good as the dataset it’s trained on. We help teams find problems with their datasets and models, then fix them by editing or adding data. In short, Aquarium makes it easier for teams to build and improve their ML models.
As a full-stack software engineer, you will drive development of our core user-facing application. Our stack pairs a modern, rich frontend built on React, TypeScript, and WebGL with a primarily Python backend. Beyond building a well-engineered application, you’ll be heavily involved in product iteration: machine learning projects involve many different people (ML researchers, ML engineers, product managers, operations, etc.), and the right product understands and supports everyone involved.
What you will do
What you should have
Aquarium’s technology lets your trained ML model do the work of guiding you to the parts of your dataset that deserve attention.
For example, Aquarium finds the examples where your model has the highest loss, i.e., the strongest disagreement with your labeled dataset. This tends to surface many labeling errors (cases where the model is right and the label is wrong!).
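As a rough sketch of the idea (not Aquarium’s actual implementation), you can rank labeled examples by their cross-entropy loss under the model’s predicted probabilities and review the top of the list for label errors; the function name and array shapes here are illustrative assumptions.

```python
import numpy as np

def rank_by_loss(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Return example indices sorted by descending cross-entropy loss.

    probs:  (n_examples, n_classes) model class probabilities
    labels: (n_examples,) integer label ids from the dataset
    """
    eps = 1e-12  # avoid log(0) for fully confident wrong predictions
    losses = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return np.argsort(-losses)

# Toy run: the model is confident the second example is class 0,
# but the label says class 1 -- so it ranks first for review.
probs = np.array([[0.9, 0.1], [0.95, 0.05], [0.5, 0.5]])
labels = np.array([0, 1, 0])
print(rank_by_loss(probs, labels))  # [1 2 0]
```

The highest-loss example is exactly the model/label disagreement, which is the kind of datapoint worth showing to a human first.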
Users can also provide their model’s embeddings for each datapoint: an anonymized representation of what the model “thought” about the data. A neural network embedding encodes the input into a relatively short vector of floats. By analyzing the distances between these embeddings, we can identify outliers and group similar examples together. We also provide a nice thousand-foot-view visualization of embeddings that lets users zoom into interesting parts of their dataset (https://youtu.be/DHABgXXe-Fs?t=139). We heavily use React, WebGL, Python, and Apache Beam in our day-to-day work.
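To make the distance-based idea concrete, here is a minimal outlier-scoring sketch (not Aquarium’s production code): score each embedding by its mean distance to its k nearest neighbors, so points far from everything else float to the top.

```python
import numpy as np

def outlier_scores(embeddings: np.ndarray, k: int = 3) -> np.ndarray:
    """Score each datapoint by mean distance to its k nearest neighbors.

    embeddings: (n_examples, dim) float vectors from the model.
    High scores mark embeddings far from the rest of the dataset --
    outlier candidates worth a closer look.
    """
    # Brute-force pairwise Euclidean distances; fine for small n.
    # At real dataset scale you'd use a KD-tree or approximate
    # nearest-neighbor index instead.
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)      # ignore self-distance
    knn = np.sort(dists, axis=1)[:, :k]  # k closest per row
    return knn.mean(axis=1)
```

On a tight cluster plus one distant point, the distant point gets by far the highest score; the same pairwise distances can feed a clustering step to group similar examples.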
Think of this as a platform for interactive learning. By focusing on the most “important” areas of the dataset, the ones the model is consistently getting wrong, we increase the leverage of ML teams to sift through massive datasets and decide on the proper corrective action to improve their model performance.
Our goal is to build tools that reduce or eliminate the need for ML engineers to hand-hold the process of improving model performance through data curation: basically, Andrej Karpathy’s Operation Vacation concept (https://youtu.be/g2R2T631x7k?t=820) as a service.