
I Built a Tool to Distribute Python Tasks Across Local Machines. Here's How It Performed
I wanted to answer a simple question: how hard is it to split a Python workload across multiple machines on the same network? Not with a cloud cluster, not Kubernetes, just a few laptops on the same WiFi sharing the work. So I built distributed-compute-locally to find out. The goal was maximum simplicity: if it takes more than a few lines of code to set up, I've failed. Then I benchmarked it against industry-standard tools to see how it holds up.

The API

```python
from distributed_compute import Coordinator

coordinator = Coordinator()
coordinator.start_server()
results = coordinator.map(my_func, data)
```

On any other machine on the network:

```
pip install distributed-compute-locally
distcompute worker 192.168.1.100
```

That's all you have to do. A coordinator distributes tasks over TCP, workers execute them with cloudpickle, and results come back in order, same as Python's built-in map().

```
┌─────────────┐     TCP/5555     ┌──────────┐
│ Coordinator │◄────────────────►│ Worker 1 │
│  (your PC)  │◄──────
```
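To make the mechanism concrete, here's a minimal, self-contained sketch of the coordinator/worker handshake described above: the coordinator serializes a function plus its inputs, sends them over a TCP socket with length-prefixed framing, and the worker deserializes, executes, and sends the results back. This is an illustration of the idea, not the library's actual code; it uses the stdlib pickle with a module-level function, whereas the real tool uses cloudpickle so that lambdas and closures can be shipped too.

```python
import pickle
import socket
import struct
import threading

def send_msg(sock, obj):
    # Length-prefixed framing: 4-byte big-endian size, then the pickled bytes.
    payload = pickle.dumps(obj)
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock):
    # Read exactly 4 bytes of header, then exactly `size` bytes of body.
    (size,) = struct.unpack(">I", sock.recv(4, socket.MSG_WAITALL))
    return pickle.loads(sock.recv(size, socket.MSG_WAITALL))

def worker(host, port):
    # A worker connects, receives one (function, inputs) task, runs it,
    # and returns the results in order.
    with socket.create_connection((host, port)) as sock:
        func, args = recv_msg(sock)
        send_msg(sock, [func(a) for a in args])

def square(x):
    return x * x

# Coordinator side: listen on an ephemeral port, hand a task to the
# first worker that connects. Here the "worker" is just a local thread.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
port = server.getsockname()[1]

threading.Thread(target=worker, args=("127.0.0.1", port)).start()
conn, _ = server.accept()
send_msg(conn, (square, [1, 2, 3]))
results = recv_msg(conn)
print(results)  # [1, 4, 9]
conn.close()
server.close()
```

Length-prefixed framing is the standard fix for TCP being a byte stream rather than a message protocol; without it, a large pickled payload can arrive split across several recv() calls.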
Continue reading on Dev.to


