
The Preprocessing Step You're Probably Skipping (And Why Your Model Is Paying for It)
You spend days collecting data. You pick the right architecture. You tune your learning rate. You train the model, check the metrics, and something feels off. The accuracy is decent but not great. The model struggles on images that are slightly darker, slightly washed out, or taken under different lighting conditions than your training set.

Most people go back and blame the model. Maybe more layers. Maybe a different backbone. Maybe more data. But the problem was never the model. It was the image before it ever reached the model.

What the Model Actually Sees

Before we get into the solution, it helps to understand the problem properly. A grayscale image is just a 2D grid of pixel intensity values, ranging from 0 (black) to 255 (white). A color image is three of these grids stacked together. When your model looks at an image, it is looking at these numbers. That is all.

Now imagine you take a photo inside a dimly lit room. Most of the pixel values in that image cluster in the range of
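To make the "grid of numbers" idea concrete, here is a minimal sketch using NumPy. The array values and the simulated dim image are illustrative, not from the article; the point is only that a grayscale image is a 2D array of intensities in [0, 255], a color image stacks three such arrays, and a dim photo is one whose values pile up near the low end of that range.

```python
import numpy as np

# A grayscale image: a 2D grid of intensities, 0 (black) to 255 (white).
gray = np.array([
    [0,   64, 128],
    [64, 128, 192],
    [128, 192, 255],
], dtype=np.uint8)

# A color image: three such grids stacked along a channel axis (H, W, 3).
color = np.stack([gray, gray, gray], axis=-1)

print(gray.shape)   # (3, 3)
print(color.shape)  # (3, 3, 3)

# A simulated dimly lit photo (hypothetical): intensities cluster
# near the bottom of the 0-255 range instead of spreading across it.
rng = np.random.default_rng(0)
dim = rng.integers(10, 60, size=(100, 100), dtype=np.uint8)
print(dim.min(), dim.max())  # both well below 255
```

This is all the model ever receives: arrays of numbers. If two photos of the same scene land in very different parts of the intensity range, the model sees them as very different inputs.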


