
# SmartKNN vs Classical KNN: Regression Benchmark Results
It’s been a while since I revisited KNN-style models for regression, so I decided to run a clean benchmark. No tricks. No tuning wars. Just default settings and a fair comparison. This post summarizes how SmartKNN performs against classical KNN variants across multiple real-world datasets.

## Benchmark Setup

- 14 regression datasets
- All models run with default settings
- No dataset-specific tuning
- Final ranking based on average R² score

Models compared:

- SmartKNN
- KNN (Manhattan)
- KNN (KDTree)
- KNN (BallTree)
- KNN (Distance)
- KNN (Uniform)
- KNN (Chebyshev)

## Final Ranking (Average Performance)

| Rank | Model | Avg R² | Avg RMSE | Avg MAE |
|------|-------|--------|----------|---------|
| 1 | SmartKNN | 0.708249 | 18727.286422 | 10333.612683 |
| 2 | KNN_manhattan | 0.701272 | 18268.360893 | 10060.939069 |
| 3 | KNN_balltree | 0.692006 | 19154.367392 | 10651.626496 |
| 4 | KNN_kdtree | 0.692002 | 19154.366302 | 10651.625834 |
| 5 | KNN_distance | 0.691661 | 19154.367327 | 10651.626319 |
| 6 | KNN_uniform | 0.685943 | 19250.752618 | 10746.872163 |
| 7 | KNN_chebyshev | 0.668124 | 20885.061901 | 11864.294204 |

## Key Takeaways

SmartKNN ranked #1 over
Continue reading on Dev.to
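For reference, the classical-KNN side of a benchmark like this can be reproduced with scikit-learn's `KNeighborsRegressor`, where each "variant" is just a different `metric`, `algorithm`, or `weights` setting. This is a minimal sketch under my own assumptions (a single synthetic dataset and a simple train/test split instead of the post's 14 real-world datasets); SmartKNN itself is omitted because its API isn't shown in the excerpt.

```python
# Sketch: benchmark classical KNN variants with scikit-learn defaults.
# Assumptions (not from the post): synthetic data, one 75/25 split,
# k and all other hyperparameters left at library defaults.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each variant differs in exactly one constructor argument.
variants = {
    "KNN_manhattan": KNeighborsRegressor(metric="manhattan"),
    "KNN_kdtree":    KNeighborsRegressor(algorithm="kd_tree"),
    "KNN_balltree":  KNeighborsRegressor(algorithm="ball_tree"),
    "KNN_distance":  KNeighborsRegressor(weights="distance"),
    "KNN_uniform":   KNeighborsRegressor(weights="uniform"),
    "KNN_chebyshev": KNeighborsRegressor(metric="chebyshev"),
}

results = {}
for name, model in variants.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    results[name] = {
        "R2": r2_score(y_te, pred),
        "RMSE": float(np.sqrt(mean_squared_error(y_te, pred))),
        "MAE": mean_absolute_error(y_te, pred),
    }

# Rank by R² (the post averages R² across its 14 datasets; here there is one).
for name, m in sorted(results.items(), key=lambda kv: -kv[1]["R2"]):
    print(f"{name:15s} R2={m['R2']:.3f} RMSE={m['RMSE']:.2f} MAE={m['MAE']:.2f}")
```

Note that `KNN_kdtree`, `KNN_balltree`, and `KNN_uniform` should produce nearly identical scores on most data, since the tree choice only changes how neighbors are found, not which ones are used; the tiny gaps between those rows in the table above are consistent with that.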


