Record, transcribe, diarize, and summarize meetings with local AI. No cloud servers, no API keys, no monthly subscriptions. Just one app that does everything on your hardware.
Mic + system audio via WASAPI loopback. Capture both sides of any call — Teams, Zoom, Discord.
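Under the hood, loopback capture yields two PCM streams (mic and system audio) that must be merged before transcription. A minimal sketch of the mixing step; the `mix_tracks` helper and the `soundcard` usage in the comment are illustrative, not the app's actual code:

```python
import numpy as np

def mix_tracks(mic: np.ndarray, loopback: np.ndarray) -> np.ndarray:
    """Mix two mono float32 tracks into one, padding the shorter with
    silence and clipping to [-1, 1] to avoid wrap-around distortion."""
    n = max(len(mic), len(loopback))
    mixed = np.zeros(n, dtype=np.float32)
    mixed[:len(mic)] += mic
    mixed[:len(loopback)] += loopback
    return np.clip(mixed, -1.0, 1.0)

# Capturing the streams (sketch; requires the third-party `soundcard`
# package, which exposes WASAPI loopback devices on Windows):
#   import soundcard as sc
#   speaker = sc.default_speaker()
#   loopback_mic = sc.get_microphone(id=str(speaker.name), include_loopback=True)
#   with loopback_mic.recorder(samplerate=48000) as rec:
#       system_audio = rec.record(numframes=48000)  # one second of audio
```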
faster-whisper large-v3-turbo with a 4-6x speedup from batched inference. A 1-hour meeting transcribes in ~20 seconds.
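The batched pipeline is faster-whisper's own `BatchedInferencePipeline` API. A sketch of how timestamped transcript lines could be produced; the `format_segment` helper and the audio file name are illustrative, and the commented-out part needs the `faster-whisper` package plus a CUDA GPU for the speeds quoted above:

```python
def format_segment(start: float, end: float, text: str) -> str:
    """Render one transcription segment as an [MM:SS -> MM:SS] line."""
    def ts(sec: float) -> str:
        m, s = divmod(int(sec), 60)
        return f"{m:02d}:{s:02d}"
    return f"[{ts(start)} -> {ts(end)}] {text.strip()}"

# On real audio (sketch):
#   from faster_whisper import WhisperModel, BatchedInferencePipeline
#   model = WhisperModel("large-v3-turbo", device="cuda", compute_type="float16")
#   pipeline = BatchedInferencePipeline(model=model)
#   segments, info = pipeline.transcribe("meeting.wav", batch_size=16)
#   for seg in segments:
#       print(format_segment(seg.start, seg.end, seg.text))
```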
CPU-based speaker segmentation with LLM name resolution. No cloud tokens required.
Chunked summarization via a local Ollama LLM, so long transcripts never overflow the model's context window. Choose from 13 templates.
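Chunking plus one local HTTP call per chunk is the whole summarization loop. A hedged sketch against Ollama's `/api/generate` endpoint; the helper names, character budget, prompt, and model name are illustrative, not the app's actual code:

```python
import json
import urllib.request

def chunk_transcript(lines: list[str], max_chars: int = 8000) -> list[str]:
    """Greedily pack transcript lines into chunks under max_chars so each
    chunk fits comfortably inside the local model's context window."""
    chunks, current = [], ""
    for line in lines:
        if current and len(current) + len(line) + 1 > max_chars:
            chunks.append(current)
            current = ""
        current = f"{current}\n{line}" if current else line
    if current:
        chunks.append(current)
    return chunks

def summarize_with_ollama(chunk: str, model: str = "llama3.1") -> str:
    """Send one chunk to the local Ollama server (default port 11434)."""
    body = json.dumps({
        "model": model,
        "prompt": f"Summarize this meeting excerpt:\n\n{chunk}",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Per-chunk summaries can then be concatenated and summarized once more to produce the final meeting summary.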
Periodic screenshots with Windows native OCR. Captures what was on screen during the meeting.
Ask your meeting history anything. ChromaDB vector search with conversational memory.
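ChromaDB does the storage and retrieval in the app; the ranking underneath any such vector search is similarity over embeddings. A standalone sketch of that ranking step with cosine similarity; the `top_k` helper is illustrative, and the ChromaDB calls in the comment show typical usage with an illustrative collection name:

```python
import numpy as np

def top_k(query: np.ndarray, docs: np.ndarray, k: int = 3) -> list[int]:
    """Return indices of the k rows of `docs` most similar to `query`
    by cosine similarity, i.e. the ranking a vector store performs."""
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    sims = d @ q
    return list(np.argsort(-sims)[:k])

# With ChromaDB itself (sketch; requires the `chromadb` package):
#   import chromadb
#   client = chromadb.PersistentClient(path="./meeting_db")
#   collection = client.get_or_create_collection("meetings")
#   hits = collection.query(query_texts=["what did we decide on pricing?"],
#                           n_results=3)
```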
Frontmatter, wikilinks, Dataview-queryable. Meetings become searchable knowledge in your vault.
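A sketch of what a generated note could look like: YAML frontmatter that Dataview can query, plus `[[wikilinks]]` for each attendee. The `meeting_note` helper and field layout are illustrative, not the app's exact template:

```python
def meeting_note(title: str, date: str, attendees: list[str], summary: str) -> str:
    """Render a meeting as Obsidian-flavored Markdown."""
    frontmatter = "\n".join([
        "---",
        f"title: {title}",
        f"date: {date}",
        "tags: [meeting]",
        f"attendees: [{', '.join(attendees)}]",
        "---",
    ])
    links = ", ".join(f"[[{name}]]" for name in attendees)
    return f"{frontmatter}\n\n# {title}\n\nAttendees: {links}\n\n{summary}\n"
```

With frontmatter like this, a Dataview query such as `TABLE date FROM #meeting` lists every captured meeting in the vault.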
No network calls. No telemetry. Works air-gapped. HIPAA- and GDPR-compliant by architecture.
| Tier | GPU | VRAM | Experience |
|---|---|---|---|
| Minimum | GTX 1060 | 6 GB | Functional with smaller models |
| Recommended | RTX 3060 | 12 GB | Full quality, 32K context |
| Target | RTX 4080 SUPER | 16 GB | Maximum quality, 65K context |
| CPU-Only | None | — | Slower but functional (~45 min per 1-hour meeting) |
Also requires: Windows 10/11 · 32 GB RAM recommended · Ollama installed
v1.0.0 · 71 MB · MIT License