
How I Run Web Scrapers for Free Using GitHub Actions (Complete Setup)
I needed to scrape pricing data from five websites every day. A VPS would cost $5-20/month; GitHub Actions costs $0. Here's my exact setup, including cron scheduling, data storage, and error alerts.

Why GitHub Actions for Scraping?

- Free: 2,000 minutes/month on the free tier (enough for most scrapers)
- Scheduled: cron syntax, runs automatically
- No server: no VPS, no Docker, no deployment
- Built-in storage: commit results straight to the repo
- Error alerts: GitHub notifies you if a run fails

Step 1: The Scraper Script

Create scraper.py in your repo:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

import httpx


def scrape_prices():
    targets = [
        {'name': 'Product A', 'url': 'https://api.example.com/product/123'},
        {'name': 'Product B', 'url': 'https://api.example.com/product/456'},
    ]
    results = []
    for target in targets:
        try:
            response = httpx.get(target['url'], timeout=30)
            data = response.json()
            results.append({
                'name': target['name'],
                # the source snippet is truncated here; these fields are a
                # plausible reconstruction, not the author's exact code
                'price': data.get('price'),
                'scraped_at': datetime.now(timezone.utc).isoformat(),
            })
        except Exception as exc:
            print(f"Failed to scrape {target['name']}: {exc}")
    # write date-stamped results so each run can be committed to the repo
    out = Path('data') / f"prices-{datetime.now(timezone.utc):%Y-%m-%d}.json"
    out.parent.mkdir(exist_ok=True)
    out.write_text(json.dumps(results, indent=2))


if __name__ == '__main__':
    scrape_prices()
```
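The cron scheduling, result commits, and error alerts from the list above all live in one workflow file. A minimal sketch, assuming the script and output paths shown earlier; the filename, schedule, and commit message are my choices, not from the post:

```yaml
# .github/workflows/scrape.yml — minimal sketch; names and schedule are assumptions
name: daily-scrape

on:
  schedule:
    - cron: '0 6 * * *'   # every day at 06:00 UTC
  workflow_dispatch:       # allow manual runs from the Actions tab

jobs:
  scrape:
    runs-on: ubuntu-latest
    permissions:
      contents: write      # required so the job can push committed results
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - run: pip install httpx
      - run: python scraper.py
      - name: Commit results
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add data/
          git commit -m "scrape: $(date -u +%F)" || echo "nothing to commit"
          git push
```

If `python scraper.py` exits non-zero, the run is marked failed and GitHub emails you, which is where the free error alerts come from. Note that scheduled runs can start several minutes after the cron time, and GitHub disables schedules on repos with no activity for 60 days.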
Continue reading on Dev.to DevOps




