
Stop Stuffing Entire Files into LLMs — I Built a Surgical Context Extractor for Python
We’ve all done this. You’re refactoring a moderately complex function with an LLM. You paste the function in. The model produces a confident answer. It’s wrong. Because it doesn’t know about:

- a helper method in the same class
- a type definition declared above
- an enum imported from another module
- a factory function wrapping everything

So you start manually expanding context:

1. Copy the function
2. Copy the helper
3. Copy the imports
4. Paste half the file
5. Hit token limits
6. Watch reasoning degrade

At some point it becomes clear: the problem is not just model capability. It’s context density.

## The Core Issue: Signal vs. Noise

When working on real Python codebases (Django services, FastAPI backends, layered systems), I repeatedly ran into two structural issues.

### 1. The Blind Spot

If you only send the active file, the model misses “one-hop” dependencies:

- private helpers
- internal utilities
- type aliases
- nearby definitions that shape logic

It sees syntax but lacks structural understanding.

### 2. The Noise Floor
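The extractor itself isn’t shown in this excerpt, but the “one-hop” idea can be sketched with the standard `ast` module: take a target function and keep only the same-module definitions it actually references. This is a minimal illustration, not the author’s implementation; the function and sample names (`extract_with_one_hop`, `slugify`, `_normalize`) are hypothetical.

```python
import ast
import textwrap

def extract_with_one_hop(source: str, target: str) -> str:
    """Return the target function's source plus the top-level
    definitions it references (a rough 'one-hop' context slice)."""
    tree = ast.parse(source)
    # Map every top-level function/class name to its AST node.
    defs = {
        node.name: node
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
    }
    if target not in defs:
        raise KeyError(f"{target!r} not found at module top level")
    # Collect every bare name the target's body touches.
    used = {n.id for n in ast.walk(defs[target]) if isinstance(n, ast.Name)}
    # Keep the target plus any top-level definition it references.
    wanted = [node for name, node in defs.items()
              if name == target or name in used]
    wanted.sort(key=lambda node: node.lineno)  # preserve source order
    return "\n\n".join(ast.get_source_segment(source, node) for node in wanted)

# Hypothetical module: slugify depends on _normalize; unused() is noise.
module = textwrap.dedent("""
    def _normalize(x):
        return x.strip().lower()

    def unused():
        pass

    def slugify(title):
        return _normalize(title).replace(" ", "-")
""")

context = extract_with_one_hop(module, "slugify")
```

Here `context` contains `slugify` and `_normalize` but drops `unused`, which is exactly the signal-vs-noise trade the article is describing. A real tool would also need to follow imports across modules and resolve attribute access, which plain `ast.Name` walking does not cover.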




