
DAY 7 - MLflow Tracking
Day 7 of Phase 2: AI System Building focused on experiment tracking with MLflow. The objective was to log trained model runs, record parameters and evaluation metrics, and store model artifacts for reproducibility and comparison. Both Logistic Regression and Random Forest models were logged along with their ROC-AUC scores, which were close to 1.0.

During implementation, environment constraints in the shared/serverless workspace required specifying a Unity Catalog Volume path for temporary storage when logging Spark ML models. This highlighted that ML lifecycle management depends on infrastructure configuration, not just modeling logic.

The exercise reinforced the importance of experiment traceability, artifact storage, and reproducibility in scalable AI workflows. It also clarified the difference between logging a model (storing it as an artifact of a run) and registering it (promoting it to a named, versioned entry in a model registry). During troubleshooting and configuration, ChatGPT supported validation of the MLflow setup and interpretation of lifecycle concepts.
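On the workspace constraint mentioned above: `mlflow.spark.log_model` accepts a `dfs_tmpdir` argument, which is where a Unity Catalog Volume path can be supplied on a shared or serverless cluster. The configuration sketch below only runs inside a Databricks workspace, the volume path is a hypothetical placeholder, and `fitted_pipeline` stands in for an already-fitted `pyspark.ml` PipelineModel.

```python
import mlflow
import mlflow.spark

# Hypothetical UC Volume path - replace with a volume you can write to.
TMP_DIR = "/Volumes/main/default/mlflow_tmp"

with mlflow.start_run(run_name="spark_model"):
    mlflow.spark.log_model(
        spark_model=fitted_pipeline,  # a fitted pyspark.ml PipelineModel
        artifact_path="model",
        dfs_tmpdir=TMP_DIR,           # temp storage the cluster can write to
    )
```

Without `dfs_tmpdir`, MLflow falls back to a default DFS temp location, which a restricted serverless cluster may not be allowed to use.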



