Product Engineering Intern - Remote Part Time at Besimple AI (X25)
$6K - $10K / monthly
Expert-in-the-loop eval data for AI
US / Remote (US)
Internship
US citizen/visa only
About Besimple AI

Why Us

At Besimple AI, we’re making it radically easier for teams to build and ship reliable AI by fixing the hardest part of the stack: data. Good evaluation, training, and safety data require domain experts, robust tooling, and meticulous QA. AI teams and labs come to us for high-quality data so they can launch AI safely. We’re a YC X25 company based in Redwood City, CA, already powering evaluation and training pipelines for leading AI companies across customer support, search, and education. Join now to be close to real customer impact, not just demos.

Why This Matters

High-quality, human-reviewed data is still the single biggest driver of model quality, but most teams are stuck with old tools and legacy processes that do not scale to modern, multimodal, agentic workflows. Besimple replaces that mess with instant custom UIs, tailored rubrics, and an end-to-end human-in-the-loop workflow that supports text, chat, audio, video, LLM traces, and more. We meet teams where they are—whether they need on-prem deployments and granular user management or a fast cloud setup—to turn evaluation into a continuous capability rather than a one-time project.

Who You’ll Work With

Founders previously built the annotation platform that supported Meta’s Llama models. We’ve seen how world-class annotation systems shape model quality and iteration speed; we’re bringing those lessons to every AI team that needs to ship with confidence. You’ll work directly with the founders and users, owning problems end-to-end—from an interface that unlocks a tough rubric, to a workflow that reduces disagreement, to an AI judge system that improves quality.

How We Work

  • Bias to shipping and learning with customers
  • Respect for craft: calibration, rubric clarity, inter-rater reliability (IRR)
  • Tight feedback loops from production back to evaluation
  • Ownership: you’ll shape evaluation as an engineering discipline with real “fail-to-ship” tests tied to business and safety goals

If you’re excited by systems that combine product design, human judgment, and applied AI—and you want to build the data and evaluation layer that keeps AI trustworthy—come build with us. See how fast teams can go from raw logs to a robust, human-in-the-loop eval pipeline—and how that changes the way they ship AI.

About the role

About Besimple AI

We are a safety data research company. Our mission is to bring AI into the real world safely. We believe that AI can meaningfully empower humanity only if we put safety first. We’re a small, nimble team of passionate builders who believe humans must remain in the loop.

The Role

We’re hiring a high-agency Product Engineering Intern to build and ship real product: fast, beautiful landing websites, full-stack features, and AI-agent workflows used daily by customers. You’ll work directly with the founding team and our CTO, Bill Wang, who spent over seven years at Meta. You will own scoped projects end-to-end and ship to production frequently. The role is remote and part-time, with potential to convert to full-time.

What You’ll Do

  • Build landing sites that convert: fast, responsive pages with strong UX, SEO, and analytics (A/B tests, event tracking, funnels).
  • Ship full-stack features: from DB/APIs to polished UI. Own scoping, implementation, testing, and release.
  • Prototype & wire AI agents: tools/function-calling, eval harnesses, RAG/retrieval, safety/guardrails.
  • Dogfood & iterate: instrument, measure, fix, and improve product quality; write lightweight tests; add observability.
  • Collaborate: work with the founding team; participate in code reviews/RFCs; document decisions.

Minimum Qualifications

  • Landing-site portfolio: links showing fast, responsive, accessible marketing sites with solid UX + instrumentation.
  • Full-stack fundamentals: comfortable across frontend + backend; strong TypeScript/JavaScript; Git fluency.
  • Hands-on with AI agents: built something that calls tools/functions, retrieves context, or runs simple evals (school/personal/work).
  • Ownership & speed: bias to ship, clean code, iterate; self-managed.
  • Quality mindset: add tests when it matters, watch metrics, fix regressions, leave code better than you found it.
  • Communication: clear written/spoken English; proactive updates and crisp PRs/RFCs.

Nice-to-Haves

  • Next.js App Router, server actions, edge functions; shadcn/ui polish
  • Design sense (Figma) and component systems; accessibility (a11y)
  • Prior startup or open-source contributions

Logistics

  • Schedule: 15–25 hrs/week (flexible)
  • Location: Remote (U.S.). Occasional travel to SF Bay Area as needed
  • Eligibility: Studying or recently graduated from a U.S./Canadian university; US work authorization or CPT/OPT-eligible

Benefits

  • Competitive stipend benchmarked to SF remote internships
  • Potential full-time conversion with stock options
  • Remote-first, high-trust culture; real product ownership
  • Letter of recommendation to support your next role

Diversity Statement

Besimple AI is committed to a diverse, inclusive culture. We’re an equal-opportunity employer and welcome applicants from all backgrounds.

Technology & Hard Problems

Product Surface

Besimple generates task-specific annotation interfaces and guidelines on the fly, runs human-in-the-loop (HITL) workflows at scale, and trains AI judges that learn from human decisions to triage easy cases and flag ambiguous ones. We support multimodal data (text, chat, audio, video, traces) and enterprise needs like on-prem deployment and fine-grained access control. Under the hood, we optimize for latency, correctness, and adaptability—simultaneously.
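The triage step described above—an AI judge handling confident calls automatically and escalating ambiguous ones to human reviewers—can be sketched as a simple confidence-threshold router. This is an illustrative sketch only; the names, structure, and 0.9 threshold are assumptions, not Besimple’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Judgment:
    item_id: str
    label: str         # e.g. "pass" / "fail" against a rubric
    confidence: float  # judge's self-reported confidence in [0, 1]

def triage(judgments, threshold=0.9):
    """Split judge outputs: auto-accept confident calls, route the rest to humans."""
    auto, review = [], []
    for j in judgments:
        (auto if j.confidence >= threshold else review).append(j)
    return auto, review

# A small batch of judge outputs; only the low-confidence item goes to review.
batch = [
    Judgment("a", "pass", 0.97),
    Judgment("b", "fail", 0.55),
    Judgment("c", "pass", 0.91),
]
auto, review = triage(batch)
```

In a real pipeline, the reviewed human decisions would feed back into the judge, which is the self-updating loop the product description refers to.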

Hard Technical Problems We’re Tackling

  • Generative UI for Any Data Shape: turn arbitrary inputs—JSON logs, multi-turn dialogs, code diffs, speech transcripts, video frames—into ergonomic, versioned UIs with validation and assistive affordances (schema inference, promptable components, live preview with safe defaults).
  • Human-in-the-Loop Orchestration: route tasks to the right experts, enforce calibration and quality gates, measure IRR, and run adjudication when disagreement is informative—not noise.
  • AI-Judge Training & Control: distill human rubrics into model-based evaluators that score live traffic, self-update with new human decisions, and stay inside guardrails (confidence thresholds, policy constraints, auditability).
  • Production-Grade Eval: build gating suites and regression tests aligned to product KPIs and safety constraints; snapshot datasets; track drift; and plumb production signals back into evaluation and training.
  • Enterprise Delivery: optional on-prem installs, per-tenant isolation, SSO/RBAC, and audit trails that satisfy infosec without slowing iteration.
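One concrete instance of the IRR measurement mentioned above is Cohen’s kappa, which scores agreement between two annotators after discounting the agreement expected by chance. A minimal two-rater sketch (production IRR often uses multi-rater statistics such as Krippendorff’s alpha instead):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence, from each rater's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators judging four items against a pass/fail rubric.
kappa = cohens_kappa(["pass", "pass", "fail", "pass"],
                     ["pass", "fail", "fail", "pass"])
```

Kappa near 1 signals a clear rubric; a low kappa is often the cue to run the calibration and adjudication workflows described above.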

What You’ll Own

End-to-end slices of the product—e.g., building a new multimodal interface, designing a calibration workflow that improves IRR, shipping a rubric-aware AI judge for a new domain, or tightening dataset lineage so a customer can trace a production decision back to ground truth.

Why This Is a Great Fit for Builders

This work sits at the intersection of product engineering, systems design, and applied AI. You’ll ship tangible interfaces, shape evaluation science, and see your work block real regressions. The feedback loop is measured in better models in production, not vanity benchmarks.

Interview Process

We keep it simple! After resume screening, we invite selected candidates to a coding challenge. Based on the results of the coding challenge, we’ll invite you to a short interview, and that’s it!
