A Python + n8n pipeline that scans GCP resources daily, identifies waste, generates actionable reports, and auto-applies safe optimizations — with a human approval gate for anything risky.
40% cloud cost reduction
$12K/month in direct savings
enabled via right-sizing
A startup's GCP bill had been growing 15% month-over-month for six months, with no clear explanation. The engineering team had no visibility into which resources were actually being used versus idle. Oversized VM instances from a load test three months ago were still running. Reserved IPs sat unattached. Persistent disks remained after clusters were deleted.
Nobody had time to audit infrastructure manually. Every sprint was focused on feature delivery, and the cloud bill was treated as an invisible cost — until it crossed $30K/month and finance raised the alarm.
I built a two-layer system: a Python scanning layer that queries the GCP APIs in depth, and an n8n orchestration layer that schedules the scans, processes the reports, and runs the approval workflow for any changes. Every morning, the system produces a prioritized list of optimization opportunities. Safe changes — like releasing unattached IPs or deleting orphaned disks — are applied automatically. Anything that could affect a running workload goes through a Slack approval gate before execution.
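The heart of the workflow is the triage step that splits scan findings into an auto-apply queue and a human-approval queue. A minimal sketch of that logic, assuming an illustrative `Finding` shape and action names (the production schema differs):

```python
from dataclasses import dataclass

# Actions that cannot affect a running workload; these are applied
# automatically. Anything else is routed to the Slack approval gate.
# (Action names here are illustrative, not the production identifiers.)
SAFE_ACTIONS = {"release_unattached_ip", "delete_orphaned_disk"}

@dataclass
class Finding:
    resource: str               # e.g. a GCP resource path
    action: str                 # proposed remediation
    est_monthly_savings: float  # USD

def triage(findings):
    """Split findings into (auto_apply, needs_approval) queues,
    most expensive first so the report leads with the big wins."""
    ordered = sorted(findings, key=lambda f: f.est_monthly_savings, reverse=True)
    auto = [f for f in ordered if f.action in SAFE_ACTIONS]
    gated = [f for f in ordered if f.action not in SAFE_ACTIONS]
    return auto, gated
```

In the real pipeline, n8n consumes the `gated` list, posts it to Slack, and only executes items a human approves.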
Within 30 days of deployment, monthly GCP spend dropped from $30K to $18K — a 40% reduction. The bulk of savings came from right-sizing oversized instances and cleaning up forgotten resources that had accumulated over 18 months of rapid growth.
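The right-sizing recommendations behind those savings follow a simple heuristic: if an instance's peak CPU over the observation window stays well under capacity, propose the next size down. A hedged sketch, assuming GCP's `e2-standard-N` naming convention and a 40% threshold (both illustrative, not the exact production values):

```python
def recommend_size(machine_type: str, peak_cpu_pct: float,
                   threshold: float = 40.0):
    """Return a smaller machine type if peak CPU is safely low,
    or None when no downsize is recommended."""
    # Split "e2-standard-8" into family ("e2-standard") and vCPU count (8).
    family, _, vcpus = machine_type.rpartition("-")
    n = int(vcpus)
    if peak_cpu_pct < threshold and n > 1:
        return f"{family}-{n // 2}"  # halve the vCPU count
    return None
```

Because a resize restarts the VM, every recommendation from this step goes through the approval gate rather than being auto-applied.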
If your cloud bill is growing without clear explanation, I can audit your infrastructure and build a system that catches waste automatically — before it compounds.
Start a conversation →