
# Adding Bayesian Ensemble + Monte Carlo to an NPB Prediction System
## Introduction

In a previous article, I documented my journey adding Bayesian regression (Stan/Ridge) to my NPB (Japanese pro baseball) prediction system.

Previous article: Beyond Marcel: Adding Bayesian Regression to NPB Predictions

That work lived in a separate experiment repository (`npb-bayes-projection`). This article covers adding those pieces into the main app, a 7-phase process that touched 19 files and added 4,087 lines.

GitHub: npb-prediction
Live dashboard: npb-prediction.streamlit.app

## Before: Point Estimates Only

```
Marcel (3-year weighted avg)     ML (XGBoost/LightGBM)
        ↓                                ↓
   Point estimate                   Point estimate
        ↓
Pythagorean Win% → Team standings
```

Problems:

- No uncertainty quantification
- 24 new foreign players treated as league-average (wRAA=0)
- Marcel and ML run independently, with no ensemble
- Team standings are a single number with no confidence interval

## After: Bayesian Ensemble + Monte Carlo

```
Layer 1: Marcel (unchanged)
        ↓
Layer 2: Stan Bayesian correction
  - Japanese: Ridge correction via K
```
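For readers unfamiliar with the Marcel layer, the "3-year weighted avg" step can be sketched as follows. This is a generic Marcel-style implementation, not code from the repository: the function name and signature are hypothetical, and the 5/4/3 recency weights and 1200-PA regression constant are the classic Marcel conventions, assumed here rather than taken from the project.

```python
def marcel_rate(rates, pa, league_rate, weights=(5, 4, 3), regress_pa=1200):
    """Marcel-style projection of a rate stat (e.g. OBP).

    rates/pa are the last three seasons, most recent first. Seasons are
    weighted 5/4/3 by recency, then the result is regressed toward the
    league average in proportion to sample size.
    """
    num = sum(w * r * n for w, r, n in zip(weights, rates, pa))
    den = sum(w * n for w, n in zip(weights, pa))
    # Adding regress_pa "phantom" league-average PA pulls small samples
    # toward league_rate; large samples are barely moved.
    return (num + regress_pa * league_rate) / (den + regress_pa)

# A declining hitter, regressed toward a .270 league average:
proj = marcel_rate([0.300, 0.280, 0.260], [600, 550, 500], 0.270)  # ≈ 0.282
```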
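The Pythagorean Win% step converts projected runs scored and allowed into an expected winning percentage. A minimal sketch; the exponent 1.83 is a common general-purpose choice, not necessarily the one this project uses:

```python
def pythagorean_winpct(runs_scored, runs_allowed, exponent=1.83):
    """Pythagorean expectation: win% from run differential.

    The classic Bill James formula used exponent 2; 1.83 is a common
    refinement. The project may tune this differently for NPB run
    environments.
    """
    rs = runs_scored ** exponent
    ra = runs_allowed ** exponent
    return rs / (rs + ra)
```

Multiplying this by the 143-game NPB schedule gives the single-number standings the "Before" pipeline produced, which is exactly what the Monte Carlo layer replaces with a distribution.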
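On the "no ensemble" problem: once Marcel and the ML model each carry an uncertainty estimate, one standard way to combine their point estimates is inverse-variance weighting, where the more certain source gets more weight. This is a generic sketch of that idea under that assumption, not the project's actual ensemble code:

```python
def inverse_variance_ensemble(mu_a, var_a, mu_b, var_b):
    """Combine two predictions of the same quantity (e.g. Marcel and ML)
    by inverse-variance weighting.

    Returns the combined mean and its variance; the combined variance is
    always smaller than either input variance.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)
    return mu, var

# Marcel says .300 (var 0.01), the ML model says .280 (var 0.02):
# the ensemble lands closer to the more confident Marcel estimate.
mu, var = inverse_variance_ensemble(0.300, 0.01, 0.280, 0.02)
```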
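Finally, the Monte Carlo layer is what turns a point estimate into standings with a confidence interval: draw the team's true win probability from a posterior each iteration, then simulate a full 143-game NPB season. A hypothetical stdlib-only sketch; the Beta posterior and every name here are illustrative, not the project's Stan-based implementation:

```python
import random
import statistics

def simulate_wins(alpha, beta, games=143, sims=5000, seed=0):
    """Monte Carlo season simulation with talent uncertainty.

    Each iteration draws a "true" win probability p from a
    Beta(alpha, beta) posterior, then simulates a 143-game NPB season
    as independent coin flips. Returns the mean win total and an
    empirical 90% interval instead of a single number.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(sims):
        p = rng.betavariate(alpha, beta)                    # talent draw
        wins = sum(rng.random() < p for _ in range(games))  # one season
        totals.append(wins)
    totals.sort()
    lo, hi = totals[int(0.05 * sims)], totals[int(0.95 * sims)]
    return statistics.fmean(totals), (lo, hi)

# A team whose posterior centers near a .545 true-talent win rate:
mean, (lo, hi) = simulate_wins(60, 50)
```

The interval is wide because it stacks two sources of randomness: uncertainty about the team's true talent and game-to-game luck, which is precisely the information a single Pythagorean number throws away.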




