LiteLLM (https://github.com/BerriAI/litellm) is an open-source LLM Gateway: a Python SDK and FastAPI proxy server for calling 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic) in the OpenAI format. With 34K+ stars on GitHub, it is trusted by companies like NASA, Rocket Money, Samsara, Lemonade, Adobe, Twilio, and Siemens. We're rapidly expanding and seeking our 6th engineer, focused on owning 'excellence' for MCPs on LiteLLM.
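As a concrete illustration, here is a minimal sketch of what "the OpenAI format" means in practice. The payload below is illustrative (the model string and messages are placeholders, not taken from this posting); only the provider prefix in `model` changes between providers.

```python
# Minimal sketch (illustrative payload): the OpenAI chat-completion
# request shape that LiteLLM accepts for every provider. Swapping
# providers only changes the model prefix, e.g. "azure/...",
# "bedrock/...", "vertex_ai/...".
request = {
    "model": "anthropic/claude-3-haiku-20240307",  # "provider/model" string
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

# With the SDK, this request would be sent as:
#   import litellm
#   response = litellm.completion(**request)
#   print(response.choices[0].message.content)

# The provider routing key is the part before the first "/".
print(request["model"].split("/", 1)[0])  # -> anthropic
```

The same request body works against the proxy server's OpenAI-compatible endpoint, which is what lets existing OpenAI-client code switch providers without changes.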
We just hit $7M ARR and have raised a $1.6M seed round from Y Combinator, Gravity Fund, and Pioneer Fund. You can find more information on our website, GitHub, and technical documentation.
Why do companies use LiteLLM Enterprise?
Companies adopt LiteLLM Enterprise once they put LiteLLM into production and need enterprise features: Prometheus metrics for production monitoring, and the ability to give LLM access to a large number of people via SSO (single sign-on) or JWT (JSON Web Token) authentication.
Skills: Python, MCP, AI infrastructure, FastAPI
As the Backend MCP Engineer, you'll be responsible for implementing MCP server support, building tool orchestration layers, designing protocols for external tool integration, enabling function calling across multiple LLM providers, and creating an SDK for MCP server discovery and connection. You'll work directly with the CEO and CTO on critical projects.
You'll work with:
Core: Python, FastAPI, MCP, Redis, Postgres
LLM Integration: OpenAI SDK, Anthropic SDK, AWS Bedrock, Vertex AI
Protocol Layer: JSON-RPC, WebSockets, Server-Sent Events (SSE)
Agent Tooling: Model Context Protocol (MCP), function calling, tool schemas
Infrastructure: Docker, Kubernetes, Prometheus, GitHub Actions
What’s so exciting about this role?
LiteLLM is at the intersection of 3 critical AI infrastructure layers:
1. LLM Gateway - Call any LLM with one API (our core strength)
2. MCP Gateway - Give any LLM access to any tool (emerging need)
3. Agent Gateway - Enable agents to communicate with other agents/LLMs/tools
You'll help us become the unified infrastructure layer that connects LLMs, tools, and agents.
This means working on cutting-edge problems: as a founding engineer, you'll help migrate key systems to aiohttp, handle LLM provider-specific quirks like Azure role handling, and standardize LLM responses to the OpenAI spec.
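To give a rough flavor of that standardization work, here is a hypothetical sketch of normalizing a provider-specific response into the OpenAI chat-completion shape. The input field names (`output_text`, `stop_reason`, etc.) are invented for illustration and do not match any real provider's schema.

```python
# Hypothetical sketch: mapping a provider-specific response into the
# OpenAI chat-completion shape. Input field names are illustrative
# placeholders, not any real provider's schema.
def to_openai_format(provider_response: dict, model: str) -> dict:
    return {
        "model": model,
        "choices": [
            {
                "index": 0,
                "message": {
                    "role": "assistant",
                    "content": provider_response["output_text"],
                },
                "finish_reason": provider_response.get("stop_reason", "stop"),
            }
        ],
        "usage": {
            "prompt_tokens": provider_response.get("input_tokens", 0),
            "completion_tokens": provider_response.get("output_tokens", 0),
        },
    }

raw = {"output_text": "Hi there", "stop_reason": "stop",
       "input_tokens": 3, "output_tokens": 2}
print(to_openai_format(raw, "example/model")["choices"][0]["message"]["content"])  # -> Hi there
```

An adapter like this, one per provider, is what lets downstream callers treat every backend as if it spoke the OpenAI API.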
Our interview process is:
Intro call - 30 min
Behavioral discussion about your working style, expectations, and the company’s direction.
HackerRank - 1 hr
A HackerRank assessment covering basic Python questions.
Virtual Onsite - 3 hrs
Virtual onsite with the founders, which involves solving an issue on LiteLLM's GitHub together, a presentation of a technical project, and a system design question.
Full-time · San Francisco, CA, US · $80K - $100K · Any experience (new grads OK)
Full-time · San Francisco, CA, US · $80K - $120K · Any experience (new grads OK)
Full-time · San Francisco, CA, US · Backend · $120K - $180K · 0.25% - 0.75% equity · 1+ years
Full-time · San Francisco, CA, US · $100K - $200K · 0.05% - 0.50% equity · 3+ years