The clinic that lets you make every mistake before they count.
Voice-first AI patient simulator for medical students and newly graduated doctors. Take the history, order labs, read imaging, diagnose and prescribe — talking to AI patients in real time. An attending physician powered by Claude Opus 4.7 grades you against published guidelines.
Standardised-patient training is the gold standard, but in most countries it runs twice a year, costs the school real money, and requires you to be there in person. Trainees globally get little or no access.
Patients walk in, you talk to them by voice, you order tests, you treat. An attending grader marks your communication, history-taking, and clinical reasoning — citing the actual guideline you should have followed.
A full case from triage to debrief — voice conversation with the patient, tests resolving, and Opus 4.7 reading the chart and grading the encounter.
Browser ↔ LiveKit ↔ Voice worker ↔ Opus 4.7 attending. Every key is server-side; the browser only ships your voice.
Browser publishes your mic over WebRTC. Real-time, low-latency, no buttons to hold.
Deepgram Nova-3 streams your speech to text. Claude Haiku 4.5 stays in character. Cartesia Sonic-2 streams the voice back.
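The per-turn loop above can be sketched as a chain of streams. This is an assumed shape, not the actual worker code: `transcribe`, `respondInCharacter`, and `synthesize` are hypothetical stand-ins for the Deepgram, Claude, and Cartesia SDK calls.

```typescript
// Hypothetical per-turn voice loop: stream STT partials, hand the final
// transcript to the in-character LLM, stream synthesized audio back.
type AudioChunk = Uint8Array;

async function* transcribe(mic: AsyncIterable<AudioChunk>): AsyncGenerator<string> {
  // Stand-in for Deepgram Nova-3 streaming STT.
  for await (const _chunk of mic) yield "chest pain since this morning";
}

async function respondInCharacter(utterance: string, persona: string): Promise<string> {
  // Stand-in for a Claude Haiku call primed with the patient persona.
  return `[${persona}] It started about two hours ago, doctor.`;
}

async function* synthesize(text: string): AsyncGenerator<AudioChunk> {
  // Stand-in for Cartesia Sonic-2 streaming TTS.
  yield new TextEncoder().encode(text);
}

async function voiceTurn(mic: AsyncIterable<AudioChunk>, persona: string): Promise<AudioChunk[]> {
  let finalTranscript = "";
  for await (const partial of transcribe(mic)) finalTranscript = partial;
  const reply = await respondInCharacter(finalTranscript, persona);
  const out: AudioChunk[] = [];
  for await (const chunk of synthesize(reply)) out.push(chunk);
  return out;
}
```

In the real worker each stage streams concurrently rather than waiting for the previous one to finish; the sequential version here just shows the data flow.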
Order labs, read imaging, prescribe meds, write the disposition note. ER tests resolve over simulated minutes; polyclinic tests resolve instantly.
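One way the two resolution behaviours could work, as a minimal sketch (assumed design, not the actual implementation): each order carries a turnaround in simulated minutes, and polyclinic mode simply zeroes it out.

```typescript
// Sketch of test resolution against a simulated clock: ER orders wait
// out their turnaround in sim-minutes, polyclinic orders resolve at once.
type Mode = "er" | "polyclinic";

interface TestOrder {
  name: string;
  readyAtSimMin: number; // sim-minute at which the result becomes visible
}

class SimClock {
  private nowSimMin = 0;
  private orders: TestOrder[] = [];

  constructor(private mode: Mode) {}

  order(name: string, turnaroundSimMin: number): void {
    // Polyclinic mode collapses every turnaround to zero.
    const turnaround = this.mode === "polyclinic" ? 0 : turnaroundSimMin;
    this.orders.push({ name, readyAtSimMin: this.nowSimMin + turnaround });
  }

  advance(simMinutes: number): void {
    this.nowSimMin += simMinutes;
  }

  resolved(): string[] {
    return this.orders.filter(o => o.readyAtSimMin <= this.nowSimMin).map(o => o.name);
  }
}
```

So an ER troponin ordered at sim-minute 0 with a 45-minute turnaround stays pending until the clock advances past 45, while the same order in a polyclinic session is readable immediately.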
Claude Opus 4.7, running as a Managed Agent, reads your chart and writes a rubric — citing NICE, ESC, AHA, GINA, GOLD. No fabricated sources.
New patients arrive at triage. Multiple beds run in parallel. Tests resolve over simulated minutes. You decide who to see next.
Calmer flow. Single patient, instant test resolution. Built for working through history-taking and reasoning without clock pressure.
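The ER-mode bed board described above can be modelled roughly like this. The acuity scale and tie-break rule are illustrative assumptions, not the product's actual triage logic:

```typescript
// Hypothetical ER bed board: patients arrive at triage with an acuity
// score, beds run in parallel, and "who to see next" defaults to the
// highest-acuity, then longest-waiting, patient not already being seen.
interface Patient {
  name: string;
  acuity: 1 | 2 | 3 | 4 | 5; // 1 = most urgent, ESI-style
  arrivedAtSimMin: number;
  beingSeen: boolean;
}

class BedBoard {
  private beds: Patient[] = [];

  triage(p: Patient): void {
    this.beds.push(p);
  }

  nextToSee(): Patient | undefined {
    return this.beds
      .filter(p => !p.beingSeen)
      .sort((a, b) => a.acuity - b.acuity || a.arrivedAtSimMin - b.arrivedAtSimMin)[0];
  }
}
```

In the simulator the choice stays with the trainee; a `nextToSee()` ranking like this is what the grader would compare that choice against.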
Open-ended questions. Interruptions handled. The patient hesitates, asks for clarification, shows worry — like a person, not a chatbot.
Three-domain rubric: Data Gathering, Clinical Management, Interpersonal. Verdicts from excellent to clear-fail. Every feedback bullet links back to a guideline excerpt.
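One plausible shape for the rubric object, sketched with assumed types (not the real schema), including a worst-domain-caps-overall rule that is purely illustrative:

```typescript
// Assumed rubric shape: three domains, a verdict scale, and every
// feedback bullet tied to the guideline excerpt the grader cites.
type Domain = "Data Gathering" | "Clinical Management" | "Interpersonal";
type Verdict = "excellent" | "pass" | "borderline" | "clear-fail";

interface FeedbackBullet {
  text: string;
  guideline: string; // e.g. "NICE NG185 §1.2", the excerpt being cited
}

interface DomainScore {
  domain: Domain;
  verdict: Verdict;
  bullets: FeedbackBullet[];
}

// Illustrative aggregation rule: the worst domain caps the encounter.
const ORDER: Verdict[] = ["excellent", "pass", "borderline", "clear-fail"];
function overallVerdict(scores: DomainScore[]): Verdict {
  return scores.reduce<Verdict>(
    (worst, s) => (ORDER.indexOf(s.verdict) > ORDER.indexOf(worst) ? s.verdict : worst),
    "excellent",
  );
}
```

Keeping the guideline excerpt on every bullet is what makes the "no fabricated sources" claim checkable: the grader has to point at text, not vibes.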
Frontend state lives in a single Store class (no Redux, no Zustand). The backend splits cleanly: a small FastAPI service for HTTP and a fatter Python worker for the LiveKit voice loop.
Single Store with useSyncExternalStore. Three.js scenes for ER and polyclinic rooms.
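The whole contract `useSyncExternalStore` needs from that Store is a `subscribe(listener)` that returns an unsubscribe function and a stable `getSnapshot`. A minimal sketch, with an illustrative state shape:

```typescript
// Minimal external store compatible with React's useSyncExternalStore.
// The AppState fields here are illustrative, not the app's real state.
interface AppState {
  mode: "er" | "polyclinic";
  activePatientId: string | null;
}

class Store {
  private state: AppState = { mode: "polyclinic", activePatientId: null };
  private listeners = new Set<() => void>();

  // Must return the same reference until state actually changes,
  // or React will re-render in a loop.
  getSnapshot = (): AppState => this.state;

  subscribe = (listener: () => void): (() => void) => {
    this.listeners.add(listener);
    return () => this.listeners.delete(listener);
  };

  set(patch: Partial<AppState>): void {
    this.state = { ...this.state, ...patch }; // fresh reference => re-render
    this.listeners.forEach(l => l());
  }
}

// In a component:
//   const state = useSyncExternalStore(store.subscribe, store.getSnapshot);
```

Immutable snapshots plus a listener set is the entire state layer; that is the trade being made by skipping Redux and Zustand.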
All API keys live here. Vite proxies /agent/* and /voice/* in dev.
Separate venv. Reads room metadata for persona + voice ID, dispatches into rooms minted by FastAPI.
Persistent agent (medkit-attending) bootstrapped once. Custom-tool UI renders the rubric card live.
"OSCEs are once a semester and I always freeze on the open-ended questions. I want to fail privately, in a browser, before failing in front of an examiner."
"I want to walk into the ER with reps under my belt. Talking to patients, deciding who's sick first, getting graded on whether I asked about red flags."
"I want my students to do twenty cases before they sit a single real one. I want the rubric to cite the guideline so I can argue with it if I disagree."
No signup. Mic permission, three minutes, your first patient is on the bench.