
# NPB 2021 Backtest: Could a Bayesian Model Predict Last-Place-to-Champion?
## Introduction

In a previous article, I added Bayesian integration to my NPB prediction system. The 8-year backtest showed a "97% probability of beating Marcel." But how did it perform in the worst year for predictions? 2021 saw NPB's biggest upset: both Yakult (Central League) and Orix (Pacific League) went from last place to champions. I ran a full backtest, with 25 new foreign players individually projected using FanGraphs and Baseball Savant data.

- GitHub: npb-2021-backtest
- Main model: npb-prediction

## Team Standings: Predicted vs. Actual

### Central League

| Team | Actual | Bayes (no foreign) | Bayes (with foreign) | Foreign effect |
|---|---|---|---|---|
| Yakult | 73W (1st) | 69.5W (4th) | 70.7W (4th) | +1.2W (Santana, Osuna) |
| Hanshin | 77W (2nd) | 72.8W (2nd) | 72.6W (2nd) | -0.2W |
| Giants | 61W (3rd) | 83.1W (1st) | 84.3W (1st) | +1.2W (Smoak, Thames) |

### Pacific League

| Team | Actual | Bayes (no foreign) | Bayes (with foreign) | Foreign effect |
|---|---|---|---|---|
| Orix | 70W (1st) | 64.5W (6th) | 62.0W (6th) | -2.5W (worse) |
| SoftBank | 60W (4th) | 77.6W (1st) | 76.2W (1st) | -1.4W |

MAE: 10.4 wins → 10.7 wins. Foreign pla…
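As a rough sanity check on the tables above, here is a minimal sketch of how an MAE like this is computed. It uses only the five teams shown (the article's 10.4 → 10.7 figures cover all 12 NPB teams, so this partial MAE comes out slightly different); the function name and data layout are illustrative, not from the actual repo.

```python
# Actual 2021 win totals vs. Bayes (with foreign) projections,
# for the five teams listed in the tables above only.
actual = {"Yakult": 73, "Hanshin": 77, "Giants": 61, "Orix": 70, "SoftBank": 60}
projected = {"Yakult": 70.7, "Hanshin": 72.6, "Giants": 84.3, "Orix": 62.0, "SoftBank": 76.2}

def mae(actual, projected):
    """Mean absolute error in wins, over the teams present in `actual`."""
    errors = [abs(actual[team] - projected[team]) for team in actual]
    return sum(errors) / len(errors)

print(round(mae(actual, projected), 2))  # → 10.84 over these five teams
```

The Giants miss alone (projected 1st at 84.3W, actual 61W) contributes over 23 wins of error, which is why a single chaotic season can dominate the MAE.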




