Founding Backend Engineer at LiteLLM (W23)
$160K - $220K  •  0.50% - 3.00%
Call every LLM API like it's OpenAI [100+ LLMs]
San Francisco, CA, US
Full-time
US citizen/visa only
1+ years
About LiteLLM

LiteLLM (https://github.com/BerriAI/litellm) is a Python SDK and Proxy Server (LLM Gateway) for calling 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere) in the OpenAI format. It is used by companies like Rocket Money, Adobe, Twilio, and Siemens.

About the role
Skills: Python, Natural Language Processing, PostgreSQL

TL;DR

LiteLLM is an open-source LLM Gateway with 28K+ stars on GitHub, trusted by companies like NASA, Rocket Money, Samsara, Lemonade, and Adobe. We’re rapidly expanding and seeking a founding backend engineer to help scale the platform. We’re based in San Francisco.

What is LiteLLM

LiteLLM provides an open-source Python SDK and Python FastAPI server for calling 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic) in the OpenAI format.

We just hit $2.5M ARR and have raised a $1.6M seed round from Y Combinator, Gravity Fund, and Pioneer Fund. You can find more information on our website, GitHub, and technical documentation.

Why companies use LiteLLM Enterprise

Companies adopt LiteLLM Enterprise once they put LiteLLM into production and need enterprise features such as Prometheus metrics for production monitoring, or need to give LLM access to a large number of people via SSO (single sign-on) or JWT (JSON Web Token) authentication.

What you will be working on

Skills: Python, LLM APIs, FastAPI, High-throughput/low-latency

As a Founding Backend Engineer, you'll be responsible for ensuring LiteLLM unifies calls to LLM APIs under the OpenAI spec. This involves writing transformations that convert API requests from the OpenAI spec into each LLM provider's format. You'll work directly with the CEO and CTO on critical projects, including:

  • Migrating key systems from httpx to aiohttp for 10x higher throughput
  • Adding support for Anthropic and Bedrock Anthropic 'thinking' parameter
  • Handling provider-specific quirks like OpenAI o1 streaming limitations
  • Scaling aggregate spend computation for 1M+ logs
  • Implementing cost tracking and logging for Anthropic API
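To give a flavor of the transformation work described above, here is a minimal sketch of mapping an OpenAI-style chat request onto an Anthropic-style payload. The function name and field handling are illustrative assumptions, not LiteLLM's actual internals:

```python
# Sketch: translate an OpenAI-style chat request into an Anthropic-style
# payload. Names and shapes are illustrative only, not LiteLLM source code.

def openai_to_anthropic(request: dict) -> dict:
    """Map OpenAI chat-completion fields onto Anthropic's Messages shape."""
    # Anthropic takes the system prompt as a top-level field,
    # not as a message inside the conversation list.
    system_parts = [m["content"] for m in request["messages"] if m["role"] == "system"]
    messages = [m for m in request["messages"] if m["role"] != "system"]

    payload = {
        "model": request["model"],
        "messages": messages,
        # Anthropic requires max_tokens; OpenAI treats it as optional.
        "max_tokens": request.get("max_tokens", 1024),
    }
    if system_parts:
        payload["system"] = "\n".join(system_parts)
    if "temperature" in request:
        payload["temperature"] = request["temperature"]
    return payload


example = {
    "model": "claude-3-sonnet",
    "messages": [
        {"role": "system", "content": "You are terse."},
        {"role": "user", "content": "Hi"},
    ],
}
print(openai_to_anthropic(example))
```

Real transformations also cover streaming chunks, tool calls, and error shapes, which is where most of the provider-specific quirks live.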

What is our tech stack

The tech stack includes Python, FastAPI, JS/TS, Redis, Postgres, S3, Google Cloud Storage (GCS), Datadog, and the Slack API.

Who we are looking for

  • 1-2 years of backend/full-stack experience with production systems
  • Passion for open source and user engagement
  • Experience scaling high-performance infrastructure
  • Strong work ethic and ability to thrive in small teams
  • Eagerness to shape growing infrastructure

Technology

The founding engineer will help migrate key systems to aiohttp, handle LLM provider-specific quirks like Azure role handling, and standardize LLM APIs to the OpenAI spec.

  • Strong Python and async framework experience (e.g., aiohttp, FastAPI).
  • Familiarity with LLM APIs (OpenAI, Azure, Hugging Face)
  • Proven ability to optimize scalable, high-performance systems.
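For context on the async pattern behind the httpx-to-aiohttp migration mentioned above, here is a hedged sketch (placeholder URLs, not code from the LiteLLM repo) of issuing concurrent requests through one shared aiohttp session:

```python
# Sketch: concurrent HTTP calls through a shared aiohttp ClientSession.
# Illustrative only; URLs and names are placeholders.

import asyncio
import aiohttp


async def fetch(session: aiohttp.ClientSession, url: str) -> int:
    # Reuse the pooled connections held by the shared session.
    async with session.get(url) as resp:
        await resp.read()
        return resp.status


async def fetch_all(urls: list[str]) -> list[int]:
    # One ClientSession for all requests: connection pooling is a key
    # reason a shared async client sustains higher throughput than
    # opening a new client per request.
    async with aiohttp.ClientSession() as session:
        # Fan the requests out concurrently instead of awaiting serially.
        return await asyncio.gather(*(fetch(session, u) for u in urls))


# Usage (requires network):
#   statuses = asyncio.run(fetch_all(["https://example.com"] * 3))
```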

Interview Process

Our interview process is:

  • Intro call - 30 min

    Behavioral discussion about your working style, expectations, and the company’s direction.

  • HackerRank - 1 hr

    A HackerRank assessment covering basic Python questions.

  • Virtual Onsite - 3 hrs

    A virtual onsite with the founders, involving solving an issue from LiteLLM’s GitHub together, a presentation of a technical project, and a system design question.

Other jobs at LiteLLM

  • Full-time | San Francisco, CA, US / Remote (US) | Backend | $150K - $200K | 0.50% - 3.00% | Any (new grads ok)

  • Full-time | San Francisco, CA, US | Backend | $160K - $220K | 0.50% - 3.00% | 1+ years

  • Full-time | San Francisco, CA, US | Full stack | $160K - $220K | 0.50% - 1.50% | 1+ years

  • Full-time | San Francisco, CA, US | $100K - $200K | 0.05% - 0.50% | 3+ years

  • Contract | San Francisco, CA, US / Remote | $40K - $60K | 1+ years
