Show HN: A deterministic middleware to compress LLM prompts by 50-80%
Hi HN, I’m working on Skillware, an open-source framework that treats AI capabilities as installable, self-contained modules. I just added a "Prompt Token Rewriter" skill: an offline, heuristic middleware that strips conversational filler and redundant context from long agentic loops before they hit the LLM. It saves significant token costs and inference time, and it's 100% deterministic (no extra model calls).

We're building a registry of "Agentic Know-How" (Logic + Cognition + Governance). If you have a specialized tool for LLMs, or want to see what a "standard" skill looks like, I'd love your feedback or a PR: https://github.com/ARPAHLS/skillware

Comments URL: https://news.ycombinator.com/item?id=47468104
Points: 5
Comments: 0
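To make the idea concrete, here is a minimal sketch of what a deterministic, offline prompt rewriter along these lines might look like. This is an illustration only, not the actual Skillware implementation: the function name `rewrite_prompt` and the filler-phrase list are hypothetical assumptions for the example.

```python
import re

# Illustrative list of conversational filler phrases to strip.
# (Hypothetical -- the real skill's heuristics are in the repo.)
FILLER_PATTERNS = [
    r"\b(?:please note that|as an ai language model,?|i hope this helps[.!]?)\s*",
    r"\b(?:basically|essentially|actually|just to be clear,?)\s+",
]
FILLER_RE = re.compile("|".join(FILLER_PATTERNS), re.IGNORECASE)

def rewrite_prompt(prompt: str) -> str:
    """Strip filler phrases and drop exact duplicate lines.

    Purely rule-based and deterministic: no model calls, same input
    always yields the same output.
    """
    text = FILLER_RE.sub("", prompt)
    seen = set()
    kept = []
    for line in text.splitlines():
        key = line.strip().lower()
        if key and key in seen:
            continue  # redundant repeated context, drop it
        seen.add(key)
        kept.append(line.rstrip())
    return "\n".join(kept).strip()
```

Because the pass is just regex substitution plus line-level deduplication, it runs in one pass over the prompt with no extra inference latency, which is what makes the "100% deterministic" claim cheap to guarantee.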



