
Scoring HN discussions by practitioner depth, not popularity
HN gets 500+ stories a day. The front page is ranked by votes, which surfaces popular content, not necessarily the best discussions. A post about a Google outage will outrank a thread where infrastructure engineers are quietly sharing how they handle failover. sift tries to find the second kind.

What "practitioner depth" means

The scoring algorithm looks at five signals in the comment tree:

- Depth breadth (30% weight): not max depth, but the fraction of comments at 3+ levels of conversation. A thread where 20% of comments are three replies deep means real back-and-forth happened.
- Practitioner markers (25%): comments containing experience phrases ("I built," "we used," "in production"), code blocks, specific tool names, or hedging language like "FWIW" and "YMMV" that correlates with practitioners.
- Score velocity (15%): points per hour, favoring sustained interest over time rather than a spike.
- Author diversity (15%): unique authors relative to comment count, weighted by thread size. High diversity at d
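The signals above can be sketched in a few lines. This is a minimal illustration of the idea, not sift's actual implementation: the function names, the phrase list, the four-space-indent code-block heuristic (HN's code convention), and the 50 points/hour velocity cap are all assumptions, and the fifth (truncated) signal is omitted.

```python
import re

# Hypothetical phrase list; sift's real marker set is not shown in the article.
PRACTITIONER_PHRASES = re.compile(
    r"\b(I built|we used|in production|FWIW|YMMV)\b", re.IGNORECASE
)

def depth_breadth(depths):
    """Fraction of comments at 3+ levels deep (not max depth)."""
    return sum(1 for d in depths if d >= 3) / len(depths) if depths else 0.0

def marker_rate(comments):
    """Fraction of comments with experience phrases or indented code blocks."""
    hits = sum(
        1 for c in comments
        if PRACTITIONER_PHRASES.search(c) or "\n    " in c
    )
    return hits / len(comments) if comments else 0.0

def author_diversity(authors):
    """Unique authors relative to comment count."""
    return len(set(authors)) / len(authors) if authors else 0.0

def score(depths, comments, authors, points_per_hour):
    """Weighted blend using the article's weights. Velocity is capped at an
    assumed 50 pts/hr so every signal lands in [0, 1]; only the four named
    signals (85% of the total weight) are covered here."""
    velocity = min(points_per_hour / 50.0, 1.0)
    return (0.30 * depth_breadth(depths)
            + 0.25 * marker_rate(comments)
            + 0.15 * velocity
            + 0.15 * author_diversity(authors))
```

For example, a five-comment thread where two comments sit at depth 3+ gives a depth-breadth of 0.4, and two of four comments matching a marker phrase gives a marker rate of 0.5; the weighted blend then folds those in with velocity and diversity.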
Continue reading on Dev.to Webdev




