Want a REAL production-ready n8n deployment? In this video we break down the n8n-aiwithapex infrastructure stack and why it’s a massive upgrade over a “basic docker-compose n8n” setup.
You’ll see how this project implements a full queue-mode architecture with:
- n8n-main (Editor/API) separated from execution
- Redis as the queue broker
- Multiple n8n workers for horizontal scaling
- External task runners (isolated JS/Python execution) for safer Code node workloads
- PostgreSQL persistence with tuning + initialization
- ngrok for quick secure access in WSL2/local dev
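The queue-mode split above boils down to a handful of environment settings. A minimal sketch of a worker's environment, using real n8n setting names but with assumed hostnames (`redis`, `postgres`) from a typical compose network:

```shell
# Queue-mode worker environment (sketch; hostnames are assumptions)
export EXECUTIONS_MODE=queue          # main instance enqueues, workers execute
export QUEUE_BULL_REDIS_HOST=redis    # Redis acts as the queue broker
export QUEUE_BULL_REDIS_PORT=6379
export DB_TYPE=postgresdb             # workers read/write execution data in Postgres
export DB_POSTGRESDB_HOST=postgres

# A worker container would then start with something like:
#   n8n worker --concurrency=10
echo "worker configured: mode=$EXECUTIONS_MODE broker=$QUEUE_BULL_REDIS_HOST"
```

Scaling horizontally is then just running more containers with this same environment.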
We’ll also cover the “Ops” side that most tutorials ignore:
- Comprehensive backups (Postgres + Redis + n8n exports + env backups)
- Offsite sync + optional GPG encryption
- Health checks, monitoring, queue depth, and log management scripts
- Restore + disaster recovery testing so you can recover fast
- Dual deployment paths: WSL2 for local + Coolify for cloud/production
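The four backup targets above can be sketched as one nightly script. This is a dry-run version that only prints the commands it would execute; the container names, DB credentials, and paths are placeholders, not the repo's actual values:

```shell
#!/usr/bin/env sh
# Dry-run backup sketch: Postgres + Redis + n8n exports + env file.
# All names/paths are placeholder assumptions.
STAMP=$(date +%Y%m%d-%H%M)
run() { echo "would run: $*"; }   # swap `echo` for real execution

run pg_dump -h localhost -U n8n -d n8n -F c -f "pg-$STAMP.dump"
run redis-cli --rdb "redis-$STAMP.rdb"
run n8n export:workflow --all --output "workflows-$STAMP.json"
run cp .env "env-$STAMP.bak"

# Optional GPG encryption + offsite sync:
run gpg --symmetric --cipher-algo AES256 "pg-$STAMP.dump"
run rsync -az ./backups/ user@offsite:/srv/n8n-backups/
```

A restore drill is just these steps in reverse (`pg_restore`, `n8n import:workflow`), and running it regularly is what turns "we have backups" into actual disaster recovery.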
If you’re building automations for clients, running n8n for a team, or scaling AI workflows, this architecture is the blueprint: separation of concerns, isolation, scaling, and recoverability.
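For the queue-depth monitoring mentioned above, a minimal probe can read Bull's list length directly from Redis. The key name below assumes n8n's default Bull prefix and queue name (`bull:jobs:wait` for pending executions); verify the prefix against your own Redis keys:

```shell
#!/usr/bin/env sh
# Queue-depth probe sketch; "bull:jobs:wait" is an assumed default key.
QUEUE_KEY="bull:jobs:wait"
DEPTH_CMD="redis-cli llen $QUEUE_KEY"
echo "queue-depth command: $DEPTH_CMD"

# In a cron/monitoring script you might alert past a threshold, e.g.:
#   depth=$($DEPTH_CMD); [ "$depth" -gt 100 ] && echo "queue backlog: $depth"
```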
YouTube video:
Repo: