
What breaks when you try to send huge files directly between browsers
We built a file transfer service that keeps peer-to-peer transfers free. Not a "free tier with limits" free. Not "free until we get acquired" free. Just free, as long as two browsers can talk to each other directly. The catch is that we had to make browsers do things they were never designed to do. This is a write-up of what we learned pushing WebRTC and browser APIs far beyond their comfortable limits.

TL;DR

- Browsers assume you will buffer files in memory, but that breaks fast
- Blob-based approaches OOM around ~2–4 GB
- Chromium works best thanks to the File System Access API
- Large fsyncs stall "completed" transfers for many minutes (~30 min for a 128 GB file)
- Service Worker streaming avoids that entirely
- SCTP congestion control will silently kill throughput unless sends are paced
- Tracking millions of pieces naively explodes memory
- Safari and Firefox impose real, unavoidable limits

The problem: browsers were not built for this

Peer-to-peer file transfer sounds deceptively simple: establish a WebRTC data
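One TL;DR point above is that tracking millions of pieces naively explodes memory: a `Set` of piece indices costs tens of bytes per entry, while a typed-array bitfield costs one bit. Here is a minimal sketch of that idea; the class and names are hypothetical illustrations, not the service's actual data structure:

```javascript
// Compact received-piece tracking with one bit per piece.
// For a 128 GB file in 16 KiB pieces (~8.4M pieces), this is ~1 MB of state,
// where a Set<number> holding the same indices would be tens of MB.
class PieceBitfield {
  constructor(pieceCount) {
    this.pieceCount = pieceCount;
    this.bits = new Uint8Array(Math.ceil(pieceCount / 8));
    this.received = 0;
  }
  mark(i) {
    const byte = i >> 3, mask = 1 << (i & 7);
    if ((this.bits[byte] & mask) === 0) {  // count each piece once,
      this.bits[byte] |= mask;             // even if it arrives twice
      this.received++;
    }
  }
  has(i) {
    return (this.bits[i >> 3] & (1 << (i & 7))) !== 0;
  }
  get complete() {
    return this.received === this.pieceCount;
  }
}

// Usage: duplicate arrivals do not inflate the count.
const field = new PieceBitfield(3);
field.mark(0); field.mark(2); field.mark(0);
console.log(field.has(1), field.received, field.complete); // false 2 false
```

The constant-time `has` check is what makes re-requesting missing pieces cheap even when the piece count is in the millions.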



