
# I built a pre-flight check tool for PyTorch, because silent failures are the worst kind
Last month I was debugging a training run that produced suspiciously bad results. The loop ran fine. No errors. No crashes. Just a model that learned nothing useful. After three days of debugging I found it: the validation set contained samples from the training set. Data leakage. The model had been cheating the entire time and I had no idea. That was the moment I decided to build preflight.

## What is preflight?

preflight is a CLI tool you run before your training loop starts. It catches the silent failures that waste GPU time: the bugs that don't crash Python but quietly ruin your model.

```shell
pip install preflight-ml
preflight run --dataloader my_dataloader.py
```

Output:

```
preflight — pre-training check report
╭────────────────────────┬──────────┬────────┬──────────────────────────────────────────────────╮
│ Check                  │ Severity │ Status │ Message                                          │
├────────────────────────┼──────────┼────────┼──────────────────────────────────────────────────┤
│ nan_inf_detection      │ FATAL    │ PASS   │ No NaN or Inf values
```
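To make the two failure modes above concrete, here is a minimal sketch of what such checks might look like. These helpers are hypothetical illustrations, not preflight's actual implementation: `find_overlap` flags validation samples that also appear in the training data by hashing each sample's bytes, and `batch_is_finite` reports whether a nested batch contains any NaN or Inf values.

```python
import hashlib
import math

def find_overlap(train_samples, val_samples):
    # Hash each training sample once so the comparison is a set lookup
    # rather than a pairwise scan. Assumes samples are sequences of
    # small ints (e.g. raw byte data); real tensors would need a stable
    # serialization first.
    train_hashes = {hashlib.sha256(bytes(s)).hexdigest() for s in train_samples}
    return [
        i for i, s in enumerate(val_samples)
        if hashlib.sha256(bytes(s)).hexdigest() in train_hashes
    ]

def batch_is_finite(batch):
    # Recursively walk nested lists/tuples; a single NaN or Inf anywhere
    # makes the whole batch non-finite.
    if isinstance(batch, (list, tuple)):
        return all(batch_is_finite(x) for x in batch)
    return math.isfinite(batch)

train = [[1, 2, 3], [4, 5, 6]]
val = [[4, 5, 6], [7, 8, 9]]       # first validation sample leaked from train
print(find_overlap(train, val))    # [0]
print(batch_is_finite([[0.1], [float("nan")]]))  # False
```

Hashing makes the overlap check O(n + m) instead of comparing every pair, which matters when the training set is large.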
