If you hate unnecessary reading, you can skip to the GitHub repo here.
The end goal here is a cheaper load balancer that can run on a $5 DigitalOcean droplet, a significant savings compared to the load balancers DigitalOcean offers for $10/mo. In addition, this software automatically provisions more droplets to scale your website up and down. With the use of floating IPs, the load balancer droplet can be turned off completely, leaving a single web server droplet to host your site during quiet periods. This is ideal for low-traffic blogs with occasionally viral content (hello).
This service is intended to be run on an Nginx droplet. The installation turns that droplet into a load balancer pointing at your web servers on DigitalOcean. The included server.py script then automatically provisions more droplets for that load balancer as traffic scales up and down.
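Provisioning itself goes through DigitalOcean's public v2 API: a POST to the droplets endpoint with a snapshot ID as the image. Here is a minimal sketch of that call using only the standard library; the function name and parameter values are my own illustrations, not necessarily what server.py uses.

```python
import json
import urllib.request

API_BASE = "https://api.digitalocean.com/v2"

def build_droplet_request(token, name, region, size, snapshot_id):
    """Build the HTTP request that creates a droplet from a snapshot.

    Endpoint and payload follow DigitalOcean's v2 API docs; the example
    values below (region, size) are placeholders, not the repo's defaults.
    """
    payload = {
        "name": name,          # e.g. "web-3"
        "region": region,      # e.g. "nyc3"
        "size": size,          # e.g. "s-1vcpu-1gb"
        "image": snapshot_id,  # numeric ID of your web server snapshot
    }
    return urllib.request.Request(
        f"{API_BASE}/droplets",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually provision (requires a valid API token):
# req = build_droplet_request("YOUR_TOKEN", "web-3", "nyc3", "s-1vcpu-1gb", 12345678)
# with urllib.request.urlopen(req) as resp:
#     droplet = json.load(resp)["droplet"]
```

Keeping the request construction separate from the network call makes the scaling logic easy to test without touching your DigitalOcean account.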
After my Endgame timeline post, an influx from Reddit and Gizmodo crashed my website. In response, I started using a load balancer. But DigitalOcean's load balancer options doubled my website budget, and I still had to manually clone the web servers. So I set out to create a DigitalOcean-compatible horizontal scaler running on a cheaper droplet that would juggle the load balancer and automate the provisioning of cloned web servers.
My website is using this software right now. In the interest of throwing SecOps out the window, I've decided to open source it. Features include:
- Setting a cap on the number of droplets in your web cluster. The scaler will stop provisioning droplets once it hits this ceiling. Otherwise, a DDoS attack could leave you with a hefty bill.
- Specifying a custom snapshot to create your web servers. The repository even includes a script to scrape your DigitalOcean account for snapshot IDs.
- Customizing the load per droplet. This uses the /proc/loadavg file, so your web servers must run Linux. By default, the scaler provisions another droplet once the load exceeds 1. In my testing, I've found a threshold of 2 works just fine. If your web servers are multicore, set the threshold higher, of course.