Backend Performance Engineer at LiteLLM (W23)
$150K - $200K  •  0.50% - 3.00%
Call every LLM API like it's OpenAI [100+ LLMs]
San Francisco, CA, US / Remote (US)
Full-time
US citizen/visa only
Any (new grads ok)
About LiteLLM

LiteLLM (https://github.com/BerriAI/litellm) is a Python SDK and Proxy Server (LLM Gateway) for calling 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere) in the OpenAI format. It is used by companies like Rocket Money, Adobe, Twilio, and Siemens.

About the role
Skills: Python, Rust

TLDR

LiteLLM is an open-source LLM Gateway with 28K+ stars on GitHub, trusted by companies like NASA, Rocket Money, Samsara, Lemonade, and Adobe. We’re rapidly expanding and seeking a performance engineer to help scale the platform to 5K RPS (requests per second). We’re based in San Francisco.

What is LiteLLM

LiteLLM provides an open-source Python SDK and a Python FastAPI server that allow calling 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic) in the OpenAI format.
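To make the "OpenAI format" concrete, here is a minimal stdlib-only sketch (not from the posting; the helper names are hypothetical) of the request body the proxy accepts at /chat/completions regardless of which provider ultimately serves it:

```python
import json
import urllib.request

def chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-format /chat/completions request body."""
    return {
        # Provider is picked via the model string, e.g. "gpt-4o",
        # "azure/my-deployment", "bedrock/anthropic.claude-v2".
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def call_proxy(base_url: str, api_key: str, body: dict) -> dict:
    """POST the body to a running LiteLLM proxy (requires a live server and key)."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The same call shape works for every backing provider; only the `model` string changes.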

We just hit $2.5M ARR and have raised a $1.6M seed round from Y Combinator, Gravity Fund, and Pioneer Fund. You can find more information on our website, GitHub, and technical documentation.

About the Role

We're hiring a Python performance engineer to own maximizing throughput, minimizing latency, and ensuring our platform is reliable in production.

Roadmap for Performance Engineer:

  • By the end of this year, our RPS and latency overhead should be at parity with industry benchmarks. Cover streaming + non-streaming for /chat/completions, /completions, /embeddings, /realtime, /audio/transcriptions.
    • Reduce e2e overhead latency for cache misses. Currently at 100ms-500ms - ensure we meet industry standards.
    • Reduce e2e overhead latency for cache hits - ensure we meet industry benchmarks.
    • Ensure overhead latency scales well when other components are added to the platform - e.g., Redis, Redis Cluster, DB, non-admin virtual keys.
    • Ensure overhead latency scales well with payload size - a 1MB prompt with streaming should add sub-100ms overhead.
  • Address customer-specific and pipeline-specific latency issues.
    • e.g., enterprise customers reporting high overhead - this person should be able to debug these issues, get on support calls, and help address any environment-specific settings.
  • Address paying customers' memory leaks.
    • Enterprise clients have ongoing memory leaks that need resolution.
  • Longer term - add coverage for new endpoints: /realtime, /audio/transcriptions, /audio/speech.
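The "overhead latency" goals above are measured as gateway time minus direct-to-provider time. A hypothetical harness (our sketch, not LiteLLM tooling) for tracking that number might look like:

```python
import statistics
import time

def median_seconds(fn, n: int = 50) -> float:
    """Return the median wall-clock seconds over n calls to fn()."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Overhead = gateway path minus direct provider path, e.g.:
#   overhead = median_seconds(via_gateway) - median_seconds(direct_to_provider)
# where via_gateway / direct_to_provider are callables issuing the same request.
```

Using the median rather than the mean keeps a few slow outliers (cold connections, GC pauses) from dominating the comparison.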
Technology

As a founding engineer, you will help migrate key systems to aiohttp, handle LLM provider-specific quirks like Azure role handling, and standardize LLM providers to the OpenAI spec.

  • Strong Python and async framework experience (e.g., aiohttp, FastAPI).
  • Familiarity with LLM APIs (OpenAI, Azure, Hugging Face).
  • Proven ability to optimize scalable, high-performance systems.
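The aiohttp migration mentioned above is about keeping many provider calls in flight on one event loop. A hedged sketch of that fan-out pattern, with a stub coroutine standing in for the real aiohttp `ClientSession` request:

```python
import asyncio

async def http_get(url: str) -> str:
    # Stand-in for an aiohttp session.get(url) call; sleeps to simulate I/O wait.
    await asyncio.sleep(0.01)
    return f"ok:{url}"

async def fan_out(urls):
    # Issue all requests concurrently; total time ~ the slowest single call,
    # not the sum of all calls.
    return await asyncio.gather(*(http_get(u) for u in urls))

results = asyncio.run(fan_out(["a", "b", "c"]))
```

In the real gateway, `http_get` would reuse a single shared `aiohttp.ClientSession` so connection pooling amortizes TCP/TLS setup across requests.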

Other jobs at LiteLLM

  • Full-time • San Francisco, CA, US / Remote (US) • Backend • $150K - $200K • 0.50% - 3.00% • Any (new grads ok)

  • Full-time • San Francisco, CA, US • Backend • $160K - $220K • 0.50% - 3.00% • 1+ years

  • Contract • San Francisco, CA, US / Remote • $40K - $60K • 1+ years

  • Full-time • San Francisco, CA, US • $100K - $200K • 0.05% - 0.50% • 3+ years

  • Full-time • San Francisco, CA, US • Full stack • $160K - $220K • 0.50% - 1.50% • 1+ years
