Backend MCP Engineer at LiteLLM (W23)
$120K - $180K  •  0.25% - 0.75%
Call every LLM API like it's OpenAI [100+ LLMs]
San Francisco, CA, US
Full-time
US citizen/visa only
1+ years
About LiteLLM

LiteLLM (https://github.com/BerriAI/litellm) is a Python SDK and Proxy Server (LLM Gateway) for calling 100+ LLM APIs in the OpenAI format [Bedrock, Azure, OpenAI, VertexAI, Cohere], used by companies like Rocket Money, Adobe, Twilio, and Siemens.

About the role

TLDR

LiteLLM is an open-source LLM Gateway with 34K+ stars on GitHub, trusted by companies like NASA, Rocket Money, Samsara, Lemonade, and Adobe. We’re rapidly expanding and seeking our 6th engineer, who will own ‘excellence’ for MCP on LiteLLM.

What is LiteLLM

LiteLLM provides an open-source Python SDK and Python FastAPI server that allow calling 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic) in the OpenAI format.
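A minimal sketch of what that looks like, based on LiteLLM's documented SDK (model names are illustrative, and provider API keys are expected via environment variables):

```python
from litellm import completion

messages = [{"role": "user", "content": "Hello, world"}]

# The same OpenAI-format call works across providers; only the model string changes.
openai_resp = completion(model="gpt-4o", messages=messages)
claude_resp = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)
bedrock_resp = completion(model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0", messages=messages)

# Responses come back in the OpenAI response format regardless of provider.
print(openai_resp.choices[0].message.content)
```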

We just hit $7M ARR and have raised a $1.6M seed round from Y Combinator, Gravity Fund, and Pioneer Fund. You can find more information on our website, GitHub, and Technical Documentation.

Why do companies use LiteLLM Enterprise?

Companies adopt LiteLLM Enterprise once they put LiteLLM into production and need enterprise features: Prometheus metrics for production monitoring, and the ability to give LLM access to a large number of people via SSO (single sign-on) or JWT (JSON Web Tokens).

What you will be working on

Skills: Python, MCP, AI infrastructure, FastAPI

As the Backend MCP Engineer, you'll be responsible for implementing MCP server support, building tool orchestration layers, designing the protocol layer for external tool integration, enabling function calling across multiple LLM providers, and creating an SDK for MCP server discovery and connection. You'll work directly with the CEO and CTO on critical projects including:

  • Adding MCP protocol support to LiteLLM gateway
  • Building a unified tool-calling interface across providers (sketched after this list)
  • Implementing session management for stateful agents
  • Creating examples/docs for MCP + LiteLLM integration
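To give a flavor of the tool-calling item: LiteLLM accepts OpenAI-format tool schemas and translates them per provider, so a unified tool-calling interface looks roughly like this sketch (the model name and tool are illustrative):

```python
from litellm import completion

# OpenAI-format tool schema; the gateway translates it for each provider.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in SF?"}]

# The same tools payload can be sent to a non-OpenAI provider unchanged.
resp = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages, tools=tools)

# Assuming the model chooses to call the tool, the result is OpenAI-format tool_calls.
tool_call = resp.choices[0].message.tool_calls[0]
print(tool_call.function.name, tool_call.function.arguments)
```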

What is our tech stack

  • Core: Python, FastAPI, MCP, Redis, Postgres

  • LLM Integration: OpenAI SDK, Anthropic SDK, AWS Bedrock, Vertex AI

  • Protocol Layer: JSON-RPC, WebSockets, Server-Sent Events (SSE)

  • Agent Tooling: Model Context Protocol (MCP), function calling, tool schemas

  • Infrastructure: Docker, Kubernetes, Prometheus, GitHub Actions


You'll work with:

  • Multiple LLM provider APIs (Anthropic, OpenAI, Google, AWS)
  • MCP protocol implementation (client + server; see the server sketch after this list)
  • High-throughput async systems (10K+ req/sec)
  • Open source community (34K+ GitHub stars)
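For context on the server side of MCP, here is a minimal sketch using the official `mcp` Python SDK's FastMCP helper (the server name and tool are hypothetical):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    # Serves the tool over stdio by default; a gateway can discover it
    # and expose it to any LLM that supports tool calling.
    mcp.run()
```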

What’s so exciting about this role?

LiteLLM is at the intersection of 3 critical AI infrastructure layers:

1. LLM Gateway - Call any LLM with one API (our core strength)

2. MCP Gateway - Give any LLM access to any tool (emerging need)

3. Agent Gateway - Enable agents to communicate with other agents/LLMs/tools

You'll help us become the unified infrastructure layer that connects: 

  • Applications ↔ LiteLLM ↔ LLM Providers (OpenAI, Anthropic, Bedrock)
  • LLMs ↔ LiteLLM ↔ MCP Servers (databases, APIs, internal tools) 
  • Agents ↔ LiteLLM ↔ MCP Servers (databases, APIs, internal tools) + LLMs


This means working on cutting-edge problems like:

  • How do we route tool calls across providers with different specs? (see the sketch below)
  • How do we make MCP servers work seamlessly with any LLM?
  • How do we build the "Stripe of AI infrastructure"?

If you're excited about building the foundational layer that every AI application will use, this is for you.
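On the first of those questions: much of the routing problem reduces to translating tool schemas between provider formats. A hypothetical sketch converting an OpenAI-format tool into Anthropic's format (the helper name is ours, not LiteLLM's):

```python
def openai_tool_to_anthropic(tool: dict) -> dict:
    """Translate one OpenAI-format tool schema into Anthropic's format."""
    fn = tool["function"]
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        # Anthropic names the JSON Schema field "input_schema" rather than "parameters".
        "input_schema": fn["parameters"],
    }

openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

print(openai_tool_to_anthropic(openai_tool))
```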

Who we are looking for

  • 1-2 years of backend/full-stack experience with production systems
  • Passion for open source and user engagement
  • Experience working with the OpenAI API (you understand the difference between /chat/completions and /responses and can speak to API-specific nuances; see the sketch after this list)
  • Strong work ethic and ability to thrive in small teams
  • Eagerness to talk to users and help solve real problems
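On the /chat/completions vs. /responses point, a quick sketch with the official OpenAI Python SDK (the model name is illustrative):

```python
from openai import OpenAI

client = OpenAI()

# /chat/completions: stateless; the full message history is sent on every call.
chat = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
print(chat.choices[0].message.content)

# /responses: the newer API; takes `input` and can chain turns server-side
# via previous_response_id instead of resending the history.
resp = client.responses.create(model="gpt-4o", input="Hello")
print(resp.output_text)
```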
Technology

The founding engineer will help migrate key systems to aiohttp (see the sketch after the list below), handle LLM provider-specific quirks like Azure role handling, and standardize LLM responses to the OpenAI spec.

  • Strong Python and async framework experience (e.g., aiohttp, FastAPI).
  • Familiarity with LLM APIs (OpenAI, Azure, Hugging Face)
  • Proven ability to optimize scalable, high-performance systems.
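As a rough illustration of the aiohttp work, here is a minimal async client sketch that POSTs an OpenAI-format /chat/completions request (the URL and key are placeholders):

```python
import asyncio
import aiohttp

async def chat_completion(session: aiohttp.ClientSession, base_url: str, api_key: str) -> dict:
    # OpenAI-format /chat/completions payload.
    payload = {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello"}],
    }
    headers = {"Authorization": f"Bearer {api_key}"}
    async with session.post(f"{base_url}/chat/completions", json=payload, headers=headers) as resp:
        resp.raise_for_status()
        return await resp.json()

async def main():
    async with aiohttp.ClientSession() as session:
        result = await chat_completion(session, "https://api.openai.com/v1", "sk-...")
        print(result["choices"][0]["message"]["content"])

asyncio.run(main())
```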
Interview Process

Our interview process is:

  • Intro call - 30 min

    Behavioral discussion about your working style, expectations, and the company’s direction.

  • Hackerrank - 1 hr

A HackerRank assessment covering basic Python questions

  • Virtual Onsite - 3 hrs

A virtual onsite with the founders, which involves solving an issue from LiteLLM’s GitHub together, a presentation of a technical project, and a system design question

