Founding Engineer at Attunement (W24)
$150 - $300
We automate compliance for behavioral health, saving clinicians hours and protecting millions in revenue.
San Francisco, CA, US
Full-time
Will sponsor
3+ years
About Attunement

Attunement (YC W24) is engineering observability and accountability into AI for behavioral health. We’re building secure infrastructure that connects clinical data pipelines, model outputs, and audit systems so every AI-assisted decision in care is traceable and explainable. Clinics using Attunement stay continuously audit-ready, protect revenue, and set a new bar for transparency in digital mental-health tools.

About the role
Skills: Python, React, Software Security, Amazon Web Services (AWS)

Location: Onsite / San Francisco

Stage: Seed

Type: Full-time, founding team

attunement.ai

Engineering Observability and Accountability into AI for Behavioral Health

Attunement is building the compliance infrastructure for AI in behavioral health. Our goal is to make AI systems in clinical settings auditable, explainable, and accountable by design.

Today, clinics using Attunement cut audit preparation time by 80% and documentation costs by 40%. We are building the technical standard for safety and integrity in AI-assisted behavioral care.

What You'll Do

As an early engineer, you’ll design and implement the technical foundation for compliant, reliable AI in healthcare, working alongside our forward-deployed engineer and product designer to make compliance and transparency operational.

Your work will include:

  • The core compliance intelligence layer: secure, explainable, and continuously learning from real clinical workflows.
  • Data pipelines that connect with EHRs and healthcare APIs (FHIR, HL7) to create real-time, auditable feedback loops.

You Might Be Right for This If

  • You’ve built production systems end to end, backend to frontend, in security-sensitive or regulated environments (HIPAA, SOC 2, or similar).
  • You’ve worked with healthcare data standards (FHIR, HL7, or EHR integrations) and understand the nuance of data lineage, auditability, and interoperability.
  • You have experience with LLMs or MLOps, particularly in designing explainability, safety, or audit systems around AI models.
  • You’re fluent in React / Next.js and Python / FastAPI (or equivalent frameworks), with strong fundamentals in database architecture, API design, and observability.
  • You care deeply about reliability, data integrity, and user trust.
  • (Bonus) You have a background or strong interest in clinical psychology, AI safety, or human-centered systems design, and you want to build software that genuinely improves human wellbeing.

Why this matters

This role shapes how AI systems are integrated into healthcare. You’ll collaborate with a founding team with backgrounds in neuroscience, AI safety, and clinical psychology to define the technical and ethical standards for responsible AI in clinical environments.

You’ll have meaningful ownership, early equity, and the opportunity to influence not only the product architecture but also the principles that govern how AI supports human decision-making in care.

Technology

Attunement (YC W24) is building the observability and accountability layer for AI in behavioral health: real-time infrastructure that makes model decisions explainable, auditable, and compliant by design. Engineers here work at the intersection of MLOps, healthcare data, and human-centered safety, defining how trustworthy AI is built and deployed in clinical systems.

