
Why I Added an LLM Parser on Top of Vector Search (And What It Changed)
I thought vector search was enough. I'd built Queryra, an AI search plugin for WooCommerce and Shopify, and replaced keyword matching with semantic embeddings. Customers could search "something warm for winter" and find sweaters, fleece jackets, blankets. Zero results became rare. It worked.

Then someone searched: "wireless headphones under $80, not Beats"

The vector search returned wireless headphones. Some were $200. Several were Beats. The price cap and the brand exclusion were completely invisible to the embedding model.

That's when I realized: vector search was layer one. I was missing layer two.

The Problem With Pure Vector Search

Embeddings are brilliant at one thing: encoding semantic similarity. "Sneakers" lands close to "trainers" and "running shoes" in vector space. "Gift for dad" finds garden tools, BBQ sets, and watches, even without those words in the query. But a query like "laptop under $1000 for video editing, not Chromebook" contains two fundamentally different types of information.
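A minimal sketch of that two-layer split, under stated assumptions: in the real pipeline the parsing step would be an LLM call that returns structured JSON, but here it is stubbed with a rule-based stand-in so the example runs on its own. The `ParsedQuery` fields, function names, and toy catalog are illustrative, not Queryra's actual API.

```python
import re
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ParsedQuery:
    semantic_text: str                 # what layer one (the embedding model) sees
    max_price: Optional[float] = None  # hard constraint, invisible to embeddings
    excluded_brands: list = field(default_factory=list)

def parse_query(query: str) -> ParsedQuery:
    # Stand-in for the LLM parsing step (layer two's front half).
    # The real system would prompt a model to emit this structure as JSON;
    # these two regexes are only enough to illustrate the contract.
    max_price = None
    m = re.search(r"under \$(\d+)", query)
    if m:
        max_price = float(m.group(1))
        query = query.replace(m.group(0), "")
    excluded = []
    m = re.search(r"\bnot (\w+)", query)
    if m:
        excluded.append(m.group(1))
        query = query.replace(m.group(0), "")
    return ParsedQuery(query.strip(" ,"), max_price, excluded)

def search(query: str, catalog: list) -> list:
    parsed = parse_query(query)
    # Layer one would rank `catalog` by embedding similarity to
    # parsed.semantic_text; layer two then enforces the hard constraints
    # the embedding model cannot encode.
    return [
        p for p in catalog
        if (parsed.max_price is None or p["price"] <= parsed.max_price)
        and p["brand"] not in parsed.excluded_brands
    ]
```

With a toy catalog of headphones from Beats, Sony, and Bose, the failing query above now keeps only items that clear both the price cap and the brand exclusion, regardless of how similar the excluded items look in vector space.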
Continue reading on Dev.to


