Hey HN, I built Oval because I use Open WebUI as my self-hosted ChatGPT alternative but missed having a proper native Mac app for it.
What it does:
- Native SwiftUI macOS client that connects to any Open WebUI server
- On-device voice mode — STT and TTS run locally via RunAnywhere SDK (Whisper ONNX or WhisperKit on Apple Neural Engine), only the LLM call goes to your server
- Realtime transcription with speaker diarization (FluidAudio — Pyannote + WeSpeaker, all on-device)
- Sparkle auto-updates, keyboard shortcuts, floating voice panel
Why on-device voice?
Privacy: your audio never leaves your Mac. Latency: no server round-trip for STT or TTS. And transcription works offline.
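The shape of a single voice turn, as a sketch (the function names here are hypothetical placeholders, not the actual RunAnywhere or Open WebUI APIs — only step 2 touches the network):

```swift
// Sketch of the on-device voice pipeline. All helpers are stand-ins.
func voiceTurn(audio: Data, serverURL: URL) async throws {
    // 1. STT runs on-device (Whisper ONNX / WhisperKit on the Neural Engine).
    let userText = try await transcribeLocally(audio)

    // 2. The only network hop: send the transcribed text to your
    //    Open WebUI server and stream the LLM completion back.
    let reply = try await streamChatCompletion(userText, to: serverURL)

    // 3. TTS also runs on-device — the audio itself is never uploaded.
    try await speakLocally(reply)
}
```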
Stack: SwiftUI, RunAnywhere SDK (sherpa-onnx + WhisperKit), FluidAudio for diarization, Socket.IO + SSE for chat streaming.
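For the streaming side, tokens arrive as SSE `data:` lines; a minimal frame handler looks roughly like this (illustrative only — it assumes an OpenAI-style `[DONE]` sentinel and skips the JSON decoding of each chunk):

```swift
// Minimal SSE line handling: each event is a "data: ..." line.
func handleSSELine(_ line: String, append: (String) -> Void) {
    guard line.hasPrefix("data: ") else { return }  // skip comments/keepalives
    let payload = String(line.dropFirst("data: ".count))
    guard payload != "[DONE]" else { return }       // assumed end-of-stream marker
    append(payload)                                 // real code decodes the JSON delta here
}
```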
Would love feedback, especially on the voice pipeline architecture.