★ Built in 3 days · Opus 4.7 hackathon

medkit

The clinic that lets you make every mistake before it counts.

Voice-first AI patient simulator for medical students and newly graduated doctors. Take the history, order labs, read imaging, diagnose and prescribe — talking to AI patients in real time. An attending physician powered by Claude Opus 4.7 grades you against published guidelines.

3 days · to build, hackathon
Opus 4.7 · grader · clinical reasoning
5 · guideline registries cited
2 · flows · ER + polyclinic
the problem

OSCE practice is rare, expensive, and not portable.

Standardised-patient training is the gold standard — but in most countries it's scheduled twice a year, costs the school real money, and you have to physically be there. Trainees globally get little or no access.

the fix

An AI clinic, on demand, in the browser.

Patients walk in, you talk to them by voice, you order tests, you treat. An attending grader marks your communication, history-taking, and clinical reasoning — citing the actual guideline you should have followed.

demo · 4 min

See the attending physician at work.

A full case from triage to debrief — voice conversation with the patient, tests resolving, and Opus 4.7 reading the chart and grading the encounter.

DEMO TAPE · OPUS 4.7 HACKATHON
Open the simulator → Mic permission needed · Chrome / Edge recommended
how it works

Four moving parts, one clinic.

Browser ↔ LiveKit ↔ Voice worker ↔ Opus 4.7 attending. Every key is server-side; the browser only ships your voice.

1
🎙️

You speak. The patient answers.

Browser publishes your mic over WebRTC. Real-time, low-latency, no buttons to hold.

livekit-client
2
🧠

The patient is real-time AI.

Deepgram Nova-3 streams your speech to text. Claude Haiku 4.5 stays in character. Cartesia Sonic-2 streams the voice back.

deepgram → haiku 4.5 → cartesia
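The card above is a three-stage streaming relay. The real worker wires livekit-agents plugins for Deepgram, Claude, and Cartesia; the stub sketch below (every stage name and output string is a placeholder, not a real SDK call) shows only the shape: each stage consumes the previous stage's async stream, so audio can start playing before the full reply is generated.

```python
import asyncio

# Stub stages standing in for Deepgram Nova-3, Claude Haiku 4.5, and
# Cartesia Sonic-2. Names and outputs are illustrative only.
async def stt(audio_chunks):
    async for chunk in audio_chunks:
        yield f"transcript({chunk})"

async def persona_llm(transcripts):
    async for text in transcripts:
        yield f"reply-to[{text}]"

async def tts(replies):
    async for reply in replies:
        yield f"audio<{reply}>"

async def mic():
    for i in range(3):          # three mic frames from the browser
        yield f"frame{i}"

async def run_turn():
    # Each stage pulls from the previous one's async stream, so the
    # first audio frame can ship before the last mic frame arrives.
    out = []
    async for frame in tts(persona_llm(stt(mic()))):
        out.append(frame)
    return out

print(asyncio.run(run_turn()))
```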
3
🩺

You diagnose, treat, dispose.

Order labs, read imaging, prescribe meds, write the disposition note. ER tests resolve over simulated minutes; polyclinic tests resolve instantly.

two flows · ER + polyclinic
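The two resolution policies (ER labs resolve over simulated minutes, polyclinic labs resolve instantly) can be sketched as a small lab queue. Turnaround times, test names, and the class itself are illustrative, not medkit's actual values:

```python
import heapq

# Illustrative simulated turnaround times, in simulated minutes.
LAB_TURNAROUND_MIN = {"cbc": 20, "troponin": 45, "d-dimer": 60}

class LabQueue:
    def __init__(self, mode):
        self.mode = mode        # "er" or "polyclinic"
        self.pending = []       # min-heap of (ready_at, test)

    def order(self, test, now_min):
        if self.mode == "polyclinic":
            return {"test": test, "status": "resolved"}   # instant
        heapq.heappush(self.pending, (now_min + LAB_TURNAROUND_MIN[test], test))
        return {"test": test, "status": "pending"}

    def tick(self, now_min):
        """Return every ER test whose simulated turnaround has elapsed."""
        done = []
        while self.pending and self.pending[0][0] <= now_min:
            done.append(heapq.heappop(self.pending)[1])
        return done

er = LabQueue("er")
er.order("cbc", now_min=0)
er.order("troponin", now_min=0)
print(er.tick(now_min=30))   # ['cbc']  (troponin still pending)
```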
4
📋

The attending grades you.

Claude Opus 4.7, running as a Managed Agent, reads your chart and writes a rubric — citing NICE, ESC, AHA, GINA, GOLD. No fabricated sources.

opus 4.7 · medkit-attending
what's inside

Two flows, one source of truth.

ER MODE

Multi-bed shift

New patients arrive at triage. Multiple beds run in parallel. Tests resolve over simulated minutes. You decide who to see next.

  • Real-time triage queue
  • Concurrent voice conversations
  • Time-pressure decision making
POLYCLINIC MODE

One outpatient at a time

Calmer flow. Single patient, instant test resolution. Built for working through history-taking and reasoning without clock pressure.

  • Adult and pediatric patients
  • Parent speaks for the child
  • Three.js consult room
VOICE-FIRST

Talk like you would in clinic

Open-ended questions. Interruptions handled. The patient hesitates, asks for clarification, shows worry — like a person, not a chatbot.

  • Streaming STT/TTS
  • Patient persona prompt-cached
  • Lip-sync-ready audio analyser
ATTENDING GRADER

Cited, structured, scored

Three-domain rubric: Data Gathering, Clinical Management, Interpersonal. Verdicts from excellent to clear-fail. Every feedback bullet links back to a guideline excerpt.

  • NICE · ESC · AHA · GINA · GOLD
  • No fabricated citations
  • Saved to your training log
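The three-domain rubric with verdicts from excellent to clear-fail could be held in a structure like the one below. Field names, score scale, and verdict thresholds are assumptions for illustration, not the actual schema the grader returns:

```python
from dataclasses import dataclass, field

@dataclass
class Feedback:
    text: str
    guideline: str      # must reference the curated registry, never fabricated
    excerpt: str        # the guideline passage the bullet links back to

@dataclass
class Rubric:
    data_gathering: int         # 0-10 per domain (assumed scale)
    clinical_management: int
    interpersonal: int
    feedback: list[Feedback] = field(default_factory=list)

    def verdict(self) -> str:
        avg = (self.data_gathering + self.clinical_management + self.interpersonal) / 3
        if avg >= 8: return "excellent"
        if avg >= 6: return "pass"
        if avg >= 4: return "borderline"
        return "clear-fail"

r = Rubric(6, 7, 8, [Feedback("Asked about red flags early",
                              "ESC ACS guideline", "(guideline excerpt)")])
print(r.verdict())   # pass
```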
architecture

Three processes, two venvs, one game state.

The frontend is a single Store class — no Redux, no Zustand. Backend splits cleanly: a small FastAPI for HTTP and a fatter Python worker for the LiveKit voice loop.

FRONTEND
React 18 · TypeScript · Vite
Three.js (@react-three/fiber + drei)
livekit-client (WebRTC)

Single Store with useSyncExternalStore. Three.js scenes for ER and polyclinic rooms.

HTTP BACKEND
FastAPI · 127.0.0.1:8787
Managed Agents proxy
/voice/token JWT mint

All API keys live here. Vite proxies /agent/* and /voice/* in dev.
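The /voice/token endpoint mints a room-scoped JWT server-side. LiveKit access tokens are standard HS256 JWTs; production code would use the LiveKit server SDK, but a stdlib-only sketch shows why the secret can never reach the browser. The grant layout below follows LiveKit's documented shape but is an assumption here:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_voice_token(api_key, api_secret, identity, room, ttl=600):
    # HS256 JWT assembled by hand: the signature is an HMAC over
    # header.payload, keyed by the server-side secret.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    payload = b64url(json.dumps({
        "iss": api_key, "sub": identity, "exp": now + ttl,
        "video": {"roomJoin": True, "room": room},  # LiveKit-style grant (assumed shape)
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(api_secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

token = mint_voice_token("APIkey", "server-secret", "student-1", "er-bed-3")
print(token.count("."))   # 2: header.payload.signature
```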

VOICE WORKER
livekit-agents (Python)
Deepgram Nova-3 STT
Claude Haiku 4.5 (persona)
Cartesia Sonic-2 TTS

Separate venv. Reads room metadata for persona + voice ID, dispatches into rooms minted by FastAPI.
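Reading persona and voice ID out of room metadata amounts to parsing a JSON blob set at mint time. The keys and fallback values below are illustrative, not medkit's real schema:

```python
import json

DEFAULT_VOICE = "sonic-2-default"   # hypothetical fallback voice ID

def persona_from_metadata(metadata):
    """Parse room metadata (a JSON string, possibly absent) into a persona config."""
    cfg = json.loads(metadata) if metadata else {}
    return {
        "persona": cfg.get("persona", "generic adult outpatient"),
        "voice_id": cfg.get("voice_id", DEFAULT_VOICE),
        "is_pediatric": cfg.get("is_pediatric", False),  # parent speaks for the child
    }

print(persona_from_metadata('{"persona": "58M chest pain", "voice_id": "v-42"}'))
```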

ATTENDING
Claude Opus 4.7
Anthropic Managed Agent
curated guideline registry

Persistent agent (medkit-attending) bootstrapped once. Custom-tool UI renders the rubric card live.

model routing

Right model for the job.

Patient persona (in-character voice reply) → Haiku 4.5 · fast, cheap, good enough.
medkit-attending (clinical grading) → Opus 4.7 · reasoning, precision, citations.
Demo narration → Opus 4.7 · one-off, polish matters.
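The routing table above, as code. Task labels are the site's own; the model identifier strings are illustrative placeholders, not verified API model IDs:

```python
# Task name -> model, mirroring the routing table above.
MODEL_ROUTES = {
    "patient_persona": "claude-haiku-4.5",   # fast, cheap, stays in character
    "attending_grader": "claude-opus-4.7",   # reasoning, precision, citations
    "demo_narration": "claude-opus-4.7",     # one-off, polish matters
}

def route(task):
    try:
        return MODEL_ROUTES[task]
    except KeyError:
        raise ValueError(f"no model route for task {task!r}") from None

print(route("attending_grader"))   # claude-opus-4.7
```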
who it's for

Built for the trainee who can't wait for the next OSCE.

Aisha · 4th-year med student

"OSCEs are once a semester and I always freeze on the open-ended questions. I want to fail privately, in a browser, before failing in front of an examiner."

Cem · just-graduated, first ER shift next month

"I want to walk into the ER with reps under my belt. Talking to patients, deciding who's sick first, getting graded on whether I asked about red flags."

Lina · clinical educator

"I want my students to do twenty cases before they sit a single real one. I want the rubric to cite the guideline so I can argue with it if I disagree."

why this matters

The training gap is global.

~1–2× per year · typical OSCE frequency in most curricula
$300–$600 · cost per standardised-patient encounter at scale
Zero · access for trainees in many low-resource regions
cases medkit can run per week. Browser only.
contact

Say hi to Bedirhan.

Builder behind medkit. Medical-doctor-turned-software-engineer. Open to feedback, bug reports, and collaboration.

ready?

Walk into the clinic.

No signup. Mic permission, three minutes, your first patient is on the bench.