v0.3.5

AI Chat UI

Polished chat interface with streaming SSE responses, two-phase loading states, and audio input for speech-to-text.

Streaming with a custom SSE parser

Chat responses stream via Server-Sent Events with a purpose-built parser that handles real-world network conditions. A dedicated SSEStreamError class distinguishes server errors from JSON parse failures, preventing the catch-all anti-pattern where legitimate error responses get silently swallowed. The SSELineBuffer class handles lines split at TCP packet boundaries: when a single SSE line arrives across multiple reader.read() calls, the buffer reassembles it before parsing.
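The buffering idea can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual SSELineBuffer source; the method names (push, flush) are assumptions.

```typescript
// Sketch: reassemble SSE lines that arrive split across network chunks.
class SSELineBuffer {
  private partial = "";

  // Accept a decoded chunk from reader.read() and return only the
  // complete lines; a trailing partial line is held for the next chunk.
  push(chunk: string): string[] {
    this.partial += chunk;
    const lines = this.partial.split("\n");
    this.partial = lines.pop() ?? ""; // last piece may be incomplete
    return lines.map((l) => l.replace(/\r$/, "")); // tolerate CRLF
  }

  // When the stream ends, emit whatever partial line remains.
  flush(): string[] {
    const rest = this.partial;
    this.partial = "";
    return rest ? [rest] : [];
  }
}
```

A chunk ending mid-line yields no output until the remainder arrives, which is exactly the reader.read() boundary case described above.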

Server errors embedded in the SSE stream (e.g., {"error": "rate limit exceeded"}) surface immediately in the UI instead of producing empty chat bubbles. Both string and object error formats are handled.
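A minimal sketch of that error handling, assuming payload shapes like {"error": "msg"} and {"error": {"message": "msg"}}; the SSEStreamError class is named in the text, but this parse function and its field names are illustrative.

```typescript
// Distinguish server errors embedded in the stream from parse failures.
class SSEStreamError extends Error {
  constructor(message: string, public readonly isParseFailure = false) {
    super(message);
    this.name = "SSEStreamError";
  }
}

function parseSSEData(raw: string): unknown {
  let payload: any;
  try {
    payload = JSON.parse(raw);
  } catch {
    // A malformed chunk is a parse failure, not a server error.
    throw new SSEStreamError(`unparseable SSE data: ${raw}`, true);
  }
  if (payload && payload.error !== undefined) {
    // Handle both the string and the object error format.
    const msg =
      typeof payload.error === "string"
        ? payload.error
        : payload.error?.message ?? JSON.stringify(payload.error);
    throw new SSEStreamError(msg);
  }
  return payload;
}
```

Because the two failure modes throw the same typed error with a flag, the UI can surface a server message immediately while still logging parse failures separately, instead of swallowing both in one catch-all.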

Two-phase loading

The loading indicator operates in two distinct phases. Phase A covers the gap between the user pressing send and the server creating the assistant message — a standalone loading block appears below the user's message. Phase B begins once the assistant message exists and streaming starts, switching to an inline streaming indicator within the message bubble. This prevents the jarring flash of an empty bubble that single-phase loaders produce.
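The phase transitions can be modeled as a small state machine. This is a simplified sketch, not the component's actual code; the phase and event names are assumptions.

```typescript
// Two-phase loading: "pending" shows the standalone block below the
// user's message; "streaming" shows the inline indicator in the bubble.
type LoadingPhase = "idle" | "pending" | "streaming";
type LoadingEvent = "send" | "assistantMessageCreated" | "streamDone";

function nextPhase(state: LoadingPhase, event: LoadingEvent): LoadingPhase {
  switch (event) {
    case "send":
      // Phase A begins the moment the user presses send.
      return "pending";
    case "assistantMessageCreated":
      // Phase B: switch to the inline indicator only once the assistant
      // message actually exists, so no empty bubble ever renders.
      return state === "pending" ? "streaming" : state;
    case "streamDone":
      return "idle";
  }
}
```

Gating the pending-to-streaming transition on message creation is what avoids the empty-bubble flash: the bubble is only rendered once there is a message to render it in.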

Audio input

Speech-to-text input uses the MediaRecorder API to capture audio directly in the browser. Recordings are transcribed server-side and injected into the chat input. The implementation uses a session counter (monotonic useRef integer) instead of a boolean ref for async cancellation — this prevents race conditions when recording sessions overlap, where a stale promise from session N could incorrectly proceed in session N+1.
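The session-counter pattern can be sketched without React as follows; sessionRef here stands in for a useRef&lt;number&gt;, and the function names are illustrative, not the actual implementation.

```typescript
// Monotonic session counter for async cancellation: each new recording
// bumps the counter, and any awaited result from an older session is
// discarded instead of being injected into the chat input.
const sessionRef = { current: 0 }; // stand-in for useRef<number>

async function startRecordingSession(
  transcribe: () => Promise<string>, // e.g. upload audio, await server STT
): Promise<string | null> {
  const session = ++sessionRef.current; // claim a fresh session id
  const text = await transcribe();
  // If another session started while we awaited, this result is stale.
  if (sessionRef.current !== session) return null;
  return text;
}
```

A boolean "cancelled" ref cannot express this: when session N+1 starts, it would reset the flag and session N's late promise would look current again. The counter makes staleness unambiguous because ids never repeat.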

Contributors

Sascha Rahn