Attunement (YC W24) is engineering observability and accountability into AI for behavioral health. We’re building secure infrastructure that connects clinical data pipelines, model outputs, and audit systems so every AI-assisted decision in care is traceable and explainable. Clinics using Attunement stay continuously audit-ready, protect revenue, and set a new bar for transparency in digital mental-health tools.
Location: Onsite / San Francisco
Stage: Seed
Type: Full-time, founding team
Attunement is building the compliance infrastructure for AI in behavioral health. Our goal is to make AI systems in clinical settings auditable, explainable, and accountable by design.
Today, clinics using Attunement cut audit preparation time by 80% and documentation costs by 40%. We are building the technical standard for safety and integrity in AI-assisted behavioral care.
As an early engineer, you'll design and implement the technical foundation for compliant, reliable AI in healthcare, working alongside our forward-deployed engineer and product designer to make compliance and transparency operational.
Your work will include designing the secure pipelines that connect clinical data, model outputs, and audit systems, and building the tooling that makes every AI-assisted decision traceable, explainable, and continuously audit-ready.
This role shapes how AI systems are integrated into healthcare. You'll collaborate with a founding team whose backgrounds span neuroscience, AI safety, and clinical psychology to define the technical and ethical standards for responsible AI in clinical environments.
You’ll have meaningful ownership, early equity, and the opportunity to influence not only the product architecture but also the principles that govern how AI supports human decision-making in care.
Attunement (YC W24) is building the observability and accountability layer for AI in behavioral health: real-time infrastructure that makes model decisions explainable, auditable, and compliant by design. Engineers here work at the intersection of ML ops, healthcare data, and human-centered safety, defining how trustworthy AI is built and deployed in clinical systems.
Role: Full stack
Experience: 3+ years
Compensation: $150 – $300