Your Backtest Is Lying to You — Walk-Forward Validation Catches Overfitting

via Dev.to PythonKang

This is Part 3 of my series on building finclaw, an AI-native quant engine. Previously: Why GA Beat DRL and 127 Generations Later.

25,000% Annual Return? Sure, Bro.

My genetic algorithm evolved a strategy with a fitness score of 291,623 and an annualized return of 25,000%. On paper, I'd outperform the Medallion Fund by a factor of 300. In reality, my GA had memorized the training data like a student who stole the answer key.

Here's the thing: I knew it was overfit. You know it's overfit. But how do you prove it, systematically, in code? And more importantly, how do you force your evolution engine to stop cheating?

That's what walk-forward validation does.

The Problem: One Split, One Lie

The standard approach is a single static train/test split:

|========= TRAIN (70%) =========|==== TEST (30%) ====|

Looks reasonable. But here's what actually happens after 50+ generations of genetic evolution:

Generation 1: GA explores
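To make the walk-forward idea concrete, here is a minimal sketch of how rolling train/test windows can be generated and scored. This is my own illustration, not finclaw's code: the `walk_forward_splits` helper, the window sizes, and the stand-in scoring are all placeholder assumptions.

```python
# Minimal sketch of walk-forward validation (illustrative, not finclaw's implementation).
# Instead of one static train/test split, roll a training window forward through time
# and score each evolved strategy only on the out-of-sample slice that follows it.

import numpy as np

def walk_forward_splits(n_bars, train_size, test_size, step=None):
    """Yield (train_idx, test_idx) index arrays for successive rolling windows."""
    step = step or test_size
    start = 0
    while start + train_size + test_size <= n_bars:
        train_idx = np.arange(start, start + train_size)
        test_idx = np.arange(start + train_size, start + train_size + test_size)
        yield train_idx, test_idx
        start += step

if __name__ == "__main__":
    # Placeholder numbers: 1000 bars, 700-bar train windows, 100-bar test windows.
    prices = np.cumsum(np.random.randn(1000)) + 100  # fake price series for the demo
    oos_scores = []
    for train_idx, test_idx in walk_forward_splits(len(prices), 700, 100):
        # Fit / evolve on prices[train_idx], then evaluate on prices[test_idx].
        oos_scores.append(prices[test_idx].mean())  # stand-in for a real backtest score
    print(f"{len(oos_scores)} folds, mean out-of-sample score: {np.mean(oos_scores):.2f}")
```

Because a strategy's reported fitness becomes the aggregate of its out-of-sample scores across every fold, memorizing any single training window stops paying off.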
