
Appends for AI apps: Stream into a single message with Ably AI Transport
Streaming tokens is easy. Resuming cleanly is not. A user refreshes mid-response, another client joins late, a mobile connection drops for 10 seconds, and suddenly your "one answer" is 600 tiny messages that your UI has to stitch back together. Message history turns into fragments. You start building a side store just to reconstruct "the response so far". This is not a model problem. It's a delivery problem.

That's why we developed message appends for Ably AI Transport. Appends let you stream AI output tokens into a single message as they are produced, so you get progressive rendering for live subscribers and a clean, compact response in history.

The failure mode we're fixing

The usual implementation is to stream each token as its own message, which is simple and works perfectly on a stable connection. In production, clients disconnect and resume mid-stream: refreshes, mobile dropouts, backgrounded tabs, and late joins. Once you have real reconnects and refreshes, you inherit work you
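To make that inherited "stitching" work concrete, here is a minimal sketch of what a client ends up maintaining under the token-per-message approach. The message shape (`responseId`, `seq`) is purely illustrative, not Ably's actual schema, and the reassembly logic is an assumption about what a typical UI would need, not part of AI Transport:

```typescript
// Illustrative fragment shape: NOT an Ably message schema.
interface TokenMessage {
  responseId: string; // which answer this fragment belongs to
  seq: number;        // position of the token in the stream
  text: string;       // the token itself
}

// Rebuild "the response so far" from whatever fragments have arrived,
// tolerating duplicates and out-of-order delivery after a reconnect.
function reassemble(fragments: TokenMessage[], responseId: string): string {
  const bySeq = new Map<number, string>();
  for (const f of fragments) {
    if (f.responseId === responseId) bySeq.set(f.seq, f.text); // dedupe by seq
  }
  return [...bySeq.entries()]
    .sort((a, b) => a[0] - b[0]) // restore token order
    .map(([, text]) => text)
    .join("");
}

// Example: fragments arrive out of order, with a duplicate after a resume.
const fragments: TokenMessage[] = [
  { responseId: "r1", seq: 1, text: "world" },
  { responseId: "r1", seq: 0, text: "Hello, " },
  { responseId: "r1", seq: 1, text: "world" }, // redelivered duplicate
];
console.log(reassemble(fragments, "r1")); // → "Hello, world"
```

Appends remove exactly this layer: the server grows one message, so clients never juggle sequence numbers or duplicate fragments themselves.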
Continue reading on Dev.to
