Ceph Public Network Migration (No Downtime)

via Dev.to Tutorial, by Zepher Ashe

**Ceph Public Network Migration (Proxmox): 172.16.0.0/16 → 10.50.0.0/24**
*No service downtime, no data loss*

## 📌 Context

This procedure documents a live Ceph public-network migration performed on a Proxmox-backed Ceph cluster. The goal was to eliminate management-network congestion while maintaining cluster availability and data integrity.

## 🎯 Objective

Migrate all Ceph traffic (MON, MGR, MDS, OSD front + back) from a congested management network to a dedicated Ceph fabric (e.g. a 2.5 GbE switch), while keeping the cluster healthy and online.

## 🧱 Key Concepts (Read Once)

`public_network`
- Client ↔ OSD traffic
- MON / MGR control plane
- CephFS metadata traffic

`cluster_network`
- OSD ↔ OSD replication & recovery (data plane)

Important behaviours:
- MON & MGR enforce address validation
- OSDs bind addresses at restart
- `/etc/pve/ceph.conf` is not authoritative on its own; Ceph also uses its internal config database

## 1️⃣ Prepare the Ne…
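The two networks described above correspond to two keys in `ceph.conf`. A minimal illustrative fragment, using the target subnet from this migration; since the article moves all traffic (front and back) onto one dedicated fabric, both keys point at the same subnet here. This is a sketch, not the author's actual file, and the exact section layout of `/etc/pve/ceph.conf` on a given cluster may differ:

```ini
[global]
    ; front side: client <-> OSD, MON/MGR control plane, CephFS metadata
    public_network  = 10.50.0.0/24
    ; back side: OSD <-> OSD replication and recovery
    cluster_network = 10.50.0.0/24
```

Because `/etc/pve/ceph.conf` is not authoritative on its own, the values actually in effect should also be checked against the MON config database, e.g. with `ceph config dump` or `ceph config get mon public_network`.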

Continue reading on Dev.to Tutorial
