
iPhone 17 Pro Just Ran a 400B LLM: On-Device AI Changes Everything (2026)
A developer just ran a 400-billion parameter large language model on an iPhone 17 Pro. Not on a server. Not through an API. Directly on the phone, with airplane mode on.

The model is called Flash-MoE, an open-source project by @anemll. It generates text at 0.6 tokens per second — roughly one word every two seconds. That's glacially slow compared to cloud inference. But the fact that it runs at all on a device with 12GB of RAM is a genuine engineering breakthrough, and it signals something much bigger for the future of mobile AI.

📊 The numbers: 400 billion parameters. 12GB of RAM. 0.6 tokens/second. The model requires a minimum of 200GB of memory even when compressed — the iPhone has 6% of that. Flash-MoE bridges the gap by streaming model weights from SSD to GPU on demand.

This story hit Hacker News and sparked a heated debate about what "running" an LLM actually means, whether this is a stunt or a genuine preview of the future, and how far mobile hardware still needs to go.

What Happened:
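To see why streaming weights on demand makes this possible, here is a minimal, hypothetical sketch of the idea behind a mixture-of-experts forward pass that keeps expert weights on disk and memory-maps only the ones the router selects. All names, shapes, and the trivial router are invented for illustration; the real Flash-MoE implementation runs on the iPhone GPU and is far more sophisticated.

```python
import os
import tempfile
import numpy as np

# Hypothetical MoE layer: expert weights live in a file (standing in for
# the SSD), and only the routed experts are pulled into RAM per token.
NUM_EXPERTS, D_IN, D_OUT = 8, 4, 4

path = os.path.join(tempfile.mkdtemp(), "experts.npy")
all_weights = np.random.default_rng(0).standard_normal(
    (NUM_EXPERTS, D_IN, D_OUT)).astype(np.float32)
np.save(path, all_weights)

# Memory-map the file: no expert weight matrix is resident in RAM
# until it is actually indexed below.
experts = np.load(path, mmap_mode="r")

def moe_forward(x, top_k=2):
    """Route x to top_k experts, reading only their weights from disk."""
    scores = np.arange(NUM_EXPERTS, dtype=np.float32)  # stand-in router
    chosen = np.argsort(scores)[-top_k:]
    out = np.zeros(D_OUT, dtype=np.float32)
    for e in chosen:
        w = np.asarray(experts[e])  # only this slice is read from "SSD"
        out += x @ w
    return out / top_k

y = moe_forward(np.ones(D_IN, dtype=np.float32))
print(y.shape)
```

The point of the sketch is the trade-off the article describes: per-token disk reads replace a 200GB resident model with a few small slices in RAM, which is exactly why throughput collapses to fractions of a token per second.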
Continue reading on Dev.to


