If you’ve ever felt like pulling your hair out while manually editing Nginx config files just to add one simple container, this is for you.
Modern infrastructure is dynamic, but our proxies are often static. In the old days, you’d spin up a server, it stayed there for three years, and you’d hardcode its IP into a config file (I remember days with an open spreadsheet of addresses 🤣😲). Today, we’re spinning up containers, scaling Kubernetes pods, and moving services around every five minutes. Keeping a proxy updated manually in that environment is messy, frustrating, and you’re bound to miss a spot.
The Nginx “Elephant” in the Room
Most people solve the issue by sticking with the “old reliable”: Nginx. Don’t get me wrong, Nginx is a beast when it comes to raw speed and load balancing (and also caching, snippets, manipulating requests and more!). But the workflow is clunky for container enthusiasts, especially those of us with changing environments.
If you’re using it as a Kubernetes Ingress, you’re often stuck with complex annotations or ad-hoc snippets that are starting to give security teams (and the Kubernetes maintainers) a real headache. And that’s before we even get to provisioning and assigning SSL certificates for every downstream application.
Why the Manual Grind Fails
The reason this doesn’t work for most of us is context switching, and, maybe more importantly, the risk of human error (and no, I'm not encouraging AI). In the heat of a deployment, the last thing you want to do is jump out of your compose file or K8s YAML to go mess with a separate proxy configuration. It’s a friction point that leads to configuration drift, where your proxy thinks a service is at point A, but your orchestrator moved it to point B ten minutes ago.
Plus, let’s be honest: setting up SSL/TLS certificates manually in 2025 feels like using a rotary phone.
(Yes, I'm old enough to have used one, at home.)
Let the Proxy Do the Heavy Lifting
How would I solve it differently? I use Traefik. Instead of you telling the proxy where the services are, Traefik listens to your infrastructure. It natively integrates with Docker and Kubernetes. When you launch a container with a simple label, Traefik sees it, creates the route, and starts routing traffic instantly. No reloads, no manual config files, and no headaches.
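To make that concrete, here is a minimal sketch of what auto-discovery looks like in practice. It assumes Traefik is already running with its Docker provider enabled; the `traefik/whoami` demo image and the hostname are placeholders you’d swap for your own:

```yaml
# docker-compose.yml for an app Traefik should discover on its own.
# Assumes Traefik is already running with the Docker provider enabled;
# the image and the Host rule below are placeholder examples.
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=web"
```

Bring this up with `docker compose up -d` and Traefik picks up the labels, creates the router, and starts serving the route, with no proxy restart and no separate config file to touch.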
What I Learned: The “Magic” of Auto-Discovery
What really blew me away about Traefik is that it lives up to the “it just works” hype.
It’s written in Go and built specifically for the chaos of dynamic environments. It’s not just a proxy, it’s an observer.
I found that the built-in dashboard is a game changer for sanity. Usually, with proxies, you’re flying blind unless you’ve set up a complex Grafana stack or plugged in an open-source tool for visuals. With Traefik, you get a clean UI out of the box that shows your routers, services, and middlewares in real time.
A 60-Second Setup
Ready to stop manually plumbing your apps? Here is how you put this into action right now.
First, you set up the Traefik engine. Notice how we just point it at the Docker socket and tell it to start listening:
```yaml
# docker-compose.yml for Traefik
services:
  traefik:
    image: traefik:v3.0
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
      - "8080:8080" # Dashboard
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
```
Automating SSL with Let’s Encrypt
This is where the real power lies. Instead of managing .pem files, you tell Traefik to talk to Let’s Encrypt for you. You define a Certificate Resolver in your Traefik config:
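One way to define the resolver is with extra command-line flags in Traefik's static config, alongside the flags from the setup above. This is a sketch: the resolver name `letsencrypt`, the contact email, and the storage path are all placeholders to adapt:

```yaml
# Additional static-config flags for the traefik service.
# "letsencrypt" (resolver name), the email, and the storage path
# are example values, not requirements.
command:
  - "--entrypoints.websecure.address=:443"
  - "--certificatesresolvers.letsencrypt.acme.email=you@example.com"
  - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
  - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
```

You’d also expose port 443 and mount a volume for `acme.json` so the issued certificates survive container restarts.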
Once that’s set, you never have to think about port 443 again. When you deploy an app, you just add one label to “ask” for a certificate. Traefik handles the challenge, proves you own the domain, fetches the cert, and even renews it automatically before it expires.
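On the application side, the "ask" looks roughly like this. It assumes a resolver named `letsencrypt` exists in Traefik's static config; the router name and hostname are placeholders:

```yaml
# Labels on the application container that request a certificate.
# Assumes a cert resolver named "letsencrypt" is defined in Traefik's
# static config; "myapp" and the hostname are example values.
labels:
  - "traefik.http.routers.myapp.rule=Host(`app.example.com`)"
  - "traefik.http.routers.myapp.entrypoints=websecure"
  - "traefik.http.routers.myapp.tls.certresolver=letsencrypt"
```

The `tls.certresolver` label is the one doing the asking; the other two just tell Traefik which hostname and entrypoint the certificate is for.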
Just one catch: If you’re looking for high-availability (running multiple Traefik instances in a cluster sharing these certs), that’s where Traefik Enterprise comes in.
But for most of us, this built-in ACME automation is a lifesaver.
I've used Traefik in production for everything from the simplest tasks, like single-handedly rerouting tens of thousands of requests per minute to another location, all the way to serving as the main K8s ingress controller.
It's been fantastic every single time.
If you want a proxy that feels like it was actually built for the way we work today: flexible, automated, and easy on the eyes, then Traefik is the clear winner.
I hope this was valuable! Thank you for reading.
Feel free to reply directly with any question or feedback.