I Was DEFINITELY Using The Wrong VPS Setup
When was the last time you thought about running something on a server?
It doesn't have to be anything fancy. A side project. A utility. An open source project you always wanted to run on your own but were just too lazy to get off the ground.
I certainly have.
Even when I needed a local server, my immediate thought was "sure, I can just set up a K8s cluster on my Raspberry Pi and put whatever I want there" 🤦
After years of advocating for complex microservice architectures and fancy deployment patterns, I've had a humbling realization: many successful startups are making hundreds of thousands of dollars monthly with just a basic VPS running a few containers.
I'm not making this up.
Over the past couple of years I've been advising startups on their cloud infra, and seeing how they actually run things was a humbling moment: they did VERY well with a bunch of containers on a server.
Sure, they could have used a few tweaks here and there. But overall? Pretty great...
The truth is, we often overlook simple, effective solutions while chasing after technical perfection.
Docker Compose has been sitting right under our noses as a capable production tool, but most of us (myself included) have been using it wrong.
Most teams miss Compose's ability to override and merge multiple configuration files—allowing you to maintain separate development and production environments without duplicating your entire setup.
In addition to minimizing human error, a production Compose file allows for different env variables, ports, and mounts, aligning nicely with the config principles of the "12 Factor App" methodology.
To put this knowledge into action, start by creating a base docker-compose.yml for development that includes your mounted volumes and development-specific settings. Then create a production.yml that overrides only what needs to change (like removing those mounted volumes and using pre-built images). When deploying, simply run Compose with both files:
docker compose -f docker-compose.yml -f production.yml up -d
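Here's a minimal sketch of what that pair of files might look like; the web service name, image tag, ports, and paths are placeholders, not a prescription. One wrinkle: Compose merges list values like volumes rather than replacing them, so dropping a dev-only mount relies on the !override YAML tag available in recent Docker Compose releases (alternatively, keep dev-only mounts in a separate override file you only pass locally).

```yaml
# docker-compose.yml — the base file, used as-is for local development
services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DEBUG=true
    volumes:
      - ./src:/app/src   # live-reload mount, only wanted in dev
```

```yaml
# production.yml — merged on top of the base; only the differences live here
services:
  web:
    image: registry.example.com/web:1.2.3   # hypothetical pre-built image pushed from CI
    pull_policy: always                     # pull the tagged image instead of rebuilding on the server
    environment:
      - DEBUG=false
    volumes: !override []                   # drop the dev mount (requires a recent Compose version)
    restart: always                         # basic watchdog: bring the container back after a crash
```

Running docker compose -f docker-compose.yml -f production.yml config prints the merged result, which is handy for sanity-checking what will actually run before you deploy it.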
For even better resilience, add your services to systemd with Restart=always to ensure your containers restart after crashes or server reboots!
(read more about 'compose in production')
The Problem With Our Current Deployments
Most development teams face a frustrating disconnect between their local development environment and production.
Even teams that do use Compose in production fail to leverage it for their local dev setups. Sometimes it's the other way around: they use it locally, but in production they run an over-engineered, hyper-scale system that's nearly impossible to maintain and doesn't match the team's stage or in-house skills.
Locally, Docker Compose makes things easy—preset environment variables, dependency management, isolated networks, and the ability to start/stop everything with a single command.
But when it's time to deploy to production, teams often abandon this simplicity for complex orchestration solutions that introduce unnecessary overhead.
The common approach is either running Compose files directly in production (missing critical production configurations) or jumping straight to heavyweight solutions like Kubernetes.
Both approaches miss the mark—one is too simplistic and fragile, while the other is often overkill for smaller applications.
The Hidden Solution
The key insight that most teams miss is that Docker Compose already has built-in capabilities for production deployment. Using Compose's override functionality lets you maintain separate configurations while reusing most of your setup.
❗Tip: Running with --no-deps allows for replacing individual containers without affecting others.
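For example, assuming the placeholder web service from the sketch above, rolling out a new image for just that container might look like this:

```bash
# Pull the new image and recreate only the 'web' container,
# leaving its dependencies (database, cache, ...) running untouched
docker compose -f docker-compose.yml -f production.yml pull web
docker compose -f docker-compose.yml -f production.yml up -d --no-deps web
```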
Adding restart: always creates a basic watchdog to restart crashed containers automatically.
For even more resilience, integrating Compose with systemd ensures your entire application stack restarts if the server reboots.
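As a rough sketch (the unit name, working directory, and binary path are assumptions for illustration), a unit file along these lines puts the whole stack under systemd's control:

```ini
# /etc/systemd/system/myapp.service — hypothetical name and paths
[Unit]
Description=myapp Docker Compose stack
Requires=docker.service
After=docker.service network-online.target
Wants=network-online.target

[Service]
WorkingDirectory=/opt/myapp
# Run Compose in the foreground so systemd can supervise it
ExecStart=/usr/bin/docker compose -f docker-compose.yml -f production.yml up
ExecStop=/usr/bin/docker compose -f docker-compose.yml -f production.yml down
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it once with systemctl enable --now myapp.service and the stack comes back after a reboot; the restart: always policy on individual containers still handles single-container crashes in between.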
"But we actually do need some autoscaling"
And when you need features like rolling updates, load balancing, and auto-scaling, there's Docker Swarm (which is built into Docker itself) waiting as your next step up.
When your application needs to scale beyond a single server, Docker Swarm provides a natural progression. By adding simple deploy and replicas entries to your existing Compose file, you can deploy the same application with multiple instances. Commands like docker service scale guestapp=3 let you adjust capacity on the fly without complex reconfigurations.
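In Compose-file terms, that might look roughly like this (the guestapp name mirrors the scale command above; the image, ports, and update settings are placeholders):

```yaml
# Swarm-oriented additions to the existing Compose file
services:
  guestapp:
    image: registry.example.com/guestapp:1.2.3
    ports:
      - "80:8000"
    deploy:
      replicas: 3              # run three instances across the Swarm
      update_config:
        parallelism: 1         # rolling updates: replace one replica at a time
        delay: 10s
# Deployed with something like:
#   docker swarm init
#   docker stack deploy -c docker-compose.yml -c production.yml guest
```

One detail worth knowing: docker stack deploy prefixes service names with the stack name, so depending on how you name things, the scale command may need to target guest_guestapp rather than plain guestapp.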
Remember that fancy, complex infrastructure doesn't necessarily translate to business success. As I've witnessed firsthand, sometimes a single VPS with a well-configured Docker Compose setup is all you need to build a profitable business.
Start simple, use Compose's hidden production features, and graduate to Swarm when you truly need the additional orchestration capabilities.
Now you know how to launch a production-grade VPS, leverage Compose's hidden functionality for production servers, and transition to Swarm when it's time for real scale.
Sometimes the best solutions are hiding in plain sight—we just need to look at our existing tools with fresh eyes (and no ego)...
Thank you for reading.
Feel free to reply directly with any questions or feedback.
Have a great weekend!