Founding Engineer at Exa Laboratories (S24)
$100K - $400K  •  0.25% - 2.00%
Energy efficient chips for AI
San Francisco, CA, US
Full-time
Any (new grads ok)
About Exa Laboratories

Building energy-efficient chips ("XPUs") for AI training and inference.

Our XPUs are reconfigurable, capable of optimizing the dataflow of each model, which makes them faster and more energy-efficient than current state-of-the-art GPUs. This saves data centers billions in cooling and energy costs.

About the role
Skills: Prototyping, Tcl, C, Git, Rust, Verilog, VHDL, GPU Programming, Machine Learning, Firmware

We're Building the Next Generation of Chips to Power AI. Join Us.

At Exa, we're building the next NVIDIA. Our novel polymorphic chips are a generation ahead of anything on the market. We're essentially building the substrate that will power all future knowledge and scientific discovery.

Our XPU chips are state-of-the-art AI compute engines that reconfigure themselves to optimize the dataflow of each model, for both training and inference. The aim is compute fast and efficient enough to support AGI, and eventually ASI, without requiring massive power infrastructure. Through our novel polymorphic architecture, we're achieving unprecedented performance gains over current SOTA GPUs while dramatically reducing energy consumption.

The team consists of exceptional engineers obsessed with pushing the boundaries of what's possible in computing and AI, and we're now seeking our next founding engineer.

You Are

  • Ready to go all-in and do the work of your life
  • Willing to be hardcore when pushing technical boundaries
  • A technical powerhouse who loves working across the hardware-software boundary
  • Deeply passionate and obsessed with computing and AI
  • Hungry to build something that actually matters

Ideal Background

  • Recent or upcoming graduate in Electrical Engineering, Computer Engineering, or equivalent field
  • Experience with semiconductor physical design (PD) and chip tapeout through research/projects
  • Strong foundation in digital design, VLSI, and RTL
  • Exposure to (Sci)ML frameworks (e.g., PyTorch/TinyGrad/JAX/Lux.jl)
  • Experience with systems programming:
    • Firmware development
    • Linux kernel modules and device drivers
    • Low-level hardware interfaces

Huge Plus If

  • Autodidactic polymath
  • Strong mathematical background
  • Unfazed by near-impossible technical challenges

The Opportunity

  • Be one of the first employees shaping a revolutionary technology
  • Work directly with the founding team of exceptional engineers
  • Own critical decisions that will influence the future of AI compute
  • Grow into a technical leader as we scale
  • Highly competitive compensation + significant equity

This is THE chance to do the work of your life. The chance to build something that will be remembered. To go hardcore on a technical moonshot that will matter for the next 100 to 1,000 years.

Technology

Our XPUs are capable of optimizing the dataflow of each specific model architecture, making them faster and more energy-efficient than the current SOTA GPUs.

The major bottleneck in computing is memory (often referred to as the "von Neumann bottleneck"). Moving data from point A to point B is very expensive in both time and energy: a good rule of thumb is that moving 1 bit costs about 1 pJ, and modern AI models exceed 100 GB of parameters.
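
To put that rule of thumb in perspective, here is a rough back-of-envelope in Python. The ~1 pJ/bit and 100 GB figures come from above; the traversal rate is purely an illustrative assumption.

  # Back-of-envelope: energy spent just moving model weights, using the
  # ~1 pJ/bit rule of thumb above. The traversals-per-second figure is a
  # hypothetical workload, not a measurement.
  PJ_PER_BIT = 1e-12        # ~1 pJ to move one bit
  MODEL_BYTES = 100e9       # ~100 GB of parameters
  PASSES_PER_SEC = 10       # hypothetical: full weight traversals per second

  bits_moved = MODEL_BYTES * 8
  joules_per_pass = bits_moved * PJ_PER_BIT              # ~0.8 J per traversal
  watts_for_movement = joules_per_pass * PASSES_PER_SEC  # ~8 W, movement only

  print(f"{joules_per_pass:.2f} J per pass, "
        f"{watts_for_movement:.1f} W for data movement alone")
  # -> 0.80 J per pass, 8.0 W -- and that is before any arithmetic,
  #    activations, optimizer state, or doing any of it quickly.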

With our polymorphic architecture, we move far less data: by composing functions and keeping intermediate results local, we can execute larger portions of a model's computation in one go. Since we can also perform non-linear tensor operations in parallel, this greatly improves throughput, speed, and energy efficiency.
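
The hardware details are beyond the scope of a job post, but a rough software analogy for "function composition with localized data" is operator fusion: treat a chain of operations as one function and evaluate it tile by tile, so intermediates never round-trip through main memory. A minimal NumPy sketch of that analogy (illustrative only, not our actual architecture):

  import numpy as np

  # Unfused: every step materializes a full intermediate tensor,
  # so the data makes several round trips through memory.
  def unfused(x, w1, w2):
      a = x @ w1             # intermediate written out
      b = np.maximum(a, 0)   # read back, written out again
      return b @ w2

  # Composed/fused: the chain (matmul -> relu -> matmul) is evaluated
  # tile by tile, so each intermediate stays small and "local".
  def fused(x, w1, w2, tile=128):
      out = np.empty((x.shape[0], w2.shape[1]), dtype=x.dtype)
      for i in range(0, x.shape[0], tile):
          t = x[i:i + tile] @ w1    # tile-sized intermediate
          t = np.maximum(t, 0)      # never leaves "local" scope
          out[i:i + tile] = t @ w2
      return out

  # Both compute the same result; only the data-movement pattern differs.
  x, w1, w2 = (np.random.randn(*s).astype(np.float32)
               for s in [(512, 256), (256, 256), (256, 64)])
  assert np.allclose(unfused(x, w1, w2), fused(x, w1, w2), rtol=1e-3)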

Depending on the model architecture, we can achieve speedups from 2.5x to 10,000x (or more).

