
Web→Adapter→Tool→Agent: Turn Self-Learning Skills into a Measured Average 98% Token Reduction on Revisits
Originally published on 2026-03-09. Original article (Japanese): "Web→Adapter→Tool→Agent: Turn Self-Learning Skills into a Measured Average 98% Token Reduction on Revisits."

If you build web data extraction by having an LLM read raw HTML every time and "just figure it out," the result is usually expensive, slow, and brittle. It gets worse for use cases that revisit the same site repeatedly: news monitoring, documentation tracking, price-change detection, and so on. You end up repeating the same failure modes over and over.

Problems like this are often better solved not with ever more heroic scraping tricks, but by accepting a simpler approach: once an extraction method works, freeze it as a reusable tool and keep using it from then on. This article summarizes a design that turns scraping into a learned tool through a Web→Adapter→Tool→Agent transformation pipeline. The original inspiration was web2cli (GitHub repository), which I introduced in an earlier article. If you take the idea of "Every website is a Unix command" and push it toward a…
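The core idea above, pay the expensive "figure out the page" step once, then freeze the result as a cheap deterministic tool for every revisit, can be sketched in a few lines. This is a minimal illustration only; the names (`SkillStore`, `learn_rule`, `extract`) are hypothetical and not the web2cli API, and the learning step is simulated with a hard-coded regex standing in for an LLM reading raw HTML:

```python
import re

class SkillStore:
    """Persists one learned extraction rule ("skill") per site."""
    def __init__(self):
        self._skills = {}          # site -> frozen extraction rule

    def get(self, site):
        return self._skills.get(site)

    def freeze(self, site, rule):
        self._skills[site] = rule  # from now on, revisits reuse this rule

def learn_rule(html):
    """Stand-in for the expensive step: an LLM reads the raw HTML once
    and proposes a pattern that captures the headline."""
    return r"<h1[^>]*>(.*?)</h1>"

def extract(store, site, html):
    rule = store.get(site)
    if rule is None:               # first visit: pay the learning cost once
        rule = learn_rule(html)
        store.freeze(site, rule)
    m = re.search(rule, html)      # revisit: cheap, deterministic reuse
    return m.group(1) if m else None

store = SkillStore()
page = '<html><h1 class="title">Hello</h1></html>'
print(extract(store, "example.com", page))   # first visit: learns, then extracts
print(extract(store, "example.com", page))   # revisit: reuses the frozen rule
```

On the second call no LLM tokens would be spent at all, which is where a claim like "98% average token reduction on revisits" becomes plausible: the per-revisit cost collapses to running a frozen rule.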
Continue reading on Dev.to



