At Mem0, we are charting new territory in how AI systems understand and interact with users over time. Our proprietary memory engine lets AI models dynamically build context, remember past interactions, and tailor their responses to each individual. This is a major step beyond the stateless limitations of today's AI.
Role Summary:
Own the 0→1. You’ll turn vague customer use cases into working proofs-of-concept that showcase what Mem0 can do. This means rapid full-stack prototyping, stitching together AI tools, and aggressively experimenting with memory retrieval approaches until the use case works end-to-end. You’ll partner closely with Research and Backend, communicate trade-offs clearly, and hand off winning prototypes that can be hardened for production.
What You'll Do:
Build POCs for real use cases: Stand up end-to-end demos (UI + APIs + data) that integrate Mem0 in the customer’s flow.
Experiment with memory retrieval: Try different embeddings, indexing, hybrid search, re-ranking, chunking/windowing, prompts, and caching to hit task-level quality and latency targets.
Prototype with Research: Implement paper ideas and new techniques from scratch, compare baselines, and keep what wins.
Create eval harnesses: Define small gold sets and lightweight metrics to judge POC success; instrument demos with basic telemetry (see the sketch after this list).
Integrate AI tooling: Combine LLMs, vector DBs, Mem0 SDKs/APIs, and third-party services into coherent workflows.
Collaborate tightly: Work with Backend on clean contracts and data models; with Research on hypotheses; share learnings and next steps.
Package & handoff: Write concise docs, scripts, and templates so Engineering can productionize quickly.
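To make the eval-harness bullet concrete, here is a minimal sketch of the kind of lightweight evaluation we mean: a handful of gold queries, a recall@k check, and a latency measurement around whatever retriever a POC uses. The gold set, the `retrieve` callable, and the metric choices are illustrative placeholders, not Mem0's actual API.

```python
# Illustrative only: a tiny eval harness for a retrieval POC.
# `retrieve` is a placeholder for whatever pipeline the POC uses
# (e.g., a Mem0 search call, a vector-DB query, or a hybrid retriever).
import time
from typing import Callable

# Small "gold set": query -> ids of the memories/documents that should come back.
GOLD = {
    "What did the user say about their diet?": {"mem_12", "mem_48"},
    "Which city does the user live in?": {"mem_03"},
}

def evaluate(retrieve: Callable[[str], list[str]], k: int = 5) -> dict:
    hits, latencies = 0, []
    for query, expected in GOLD.items():
        start = time.perf_counter()
        results = retrieve(query)[:k]
        latencies.append(time.perf_counter() - start)
        if expected & set(results):  # count a hit if any gold id appears in the top-k
            hits += 1
    return {
        "recall_at_k": hits / len(GOLD),
        "p50_latency_s": sorted(latencies)[len(latencies) // 2],
    }

# Sanity-check the harness with a stubbed retriever before wiring in a real one.
if __name__ == "__main__":
    print(evaluate(lambda query: ["mem_12", "mem_99"]))
```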
Minimum Qualifications:
Full-stack fluency: Next.js/React on the front end and Python backends (FastAPI/Django/Flask) or Node where needed.
Strong Python and TypeScript/JavaScript; comfortable building APIs, wiring data models, and deploying quick demos.
Hands-on with the LLM/RAG stack: embeddings, vector databases, retrieval strategies, prompt engineering.
Track record of rapid prototyping: moving from idea → demo in days, not months; clear documentation of results and trade-offs.
Ability to design small, meaningful evaluations for a use case (quality + latency) and iterate based on evidence.
Excellent communication with Research and Backend; crisp specs, readable code, and honest status updates.
Nice to Have:
Model serving/fine-tuning experience (vLLM, LoRA/PEFT) and lightweight batch/async pipelines.
Deployments on Vercel/serverless, Docker, basic k8s familiarity; CI for demo apps.
Data visualization and UX polish for compelling demos.
Prior Forward-Deployed/Solutions/Prototyping role turning customer needs into working software.
About Mem0
We're building the memory layer for AI agents. Think long-term memory that enables AI to remember conversations, learn from interactions, and build context over time. We're already powering millions of AI interactions. We are backed by top-tier investors and are well capitalized.
Our Culture
Office-first collaboration - We're an in-person team in San Francisco. Hallway chats, impromptu whiteboard sessions, and shared meals spark ideas that remote calls can't.
Velocity with craftsmanship - We build for the long term rather than just shipping features. We move fast but never sacrifice reliability or thoughtful design - every system needs to be fast, reliable, and elegant.
Extreme ownership - Everyone at Mem0 is a builder-owner. If you spot a problem or opportunity, you have the agency to fix it. Titles are light; impact is heavy.
High bar, high trust - We hire for talent and potential, then give people room to run. Code is reviewed, ideas are challenged, and wins are celebrated—always with respect and curiosity.
Data-driven, not ego-driven - The best solution wins, whether it comes from a founder or an engineer who joined yesterday. We let results and metrics guide our decisions.
We use state-of-the-art generative AI technologies and are inventing novel algorithms that help us model information the way the human brain does.
Compensation (listed openings):
Type | Location | Role | Salary | Equity | Experience
Full-time | San Francisco Bay Area / Remote | Full stack | $150K - $180K | 0.10% - 0.20% | 3+ years
Full-time | India / Remote (IN) | Full stack | ₹4M - ₹5M INR | 0.05% | 3+ years
Full-time | India / Remote (IN) | Full stack | ₹2.5M - ₹3.5M INR | 0.05% | 3+ years
Full-time | San Francisco Bay Area / Remote | - | $150K - $180K | 0.10% - 0.15% | 3+ years
Full-time | San Francisco Bay Area / Remote | Full stack | $165K - $195K | 0.05% - 0.10% | 3+ years
Full-time | India | Full stack | ₹4M - ₹5M INR | 0.05% | 6+ years
Full-time | San Francisco Bay Area | Full stack | $175K - $210K | 0.10% - 0.20% | 3+ years
Full-time | San Francisco Bay Area / Remote (IN) | Full stack | ₹3M - ₹4.5M INR | 0.05% | 3+ years