Hey Zero2Launch crew 👋
If you’ve ever wanted to run AI chat models inside n8n without paying for OpenAI or burning through tokens, this one’s for you.
In this new step-by-step video, I’ll show you how to host open-source models like DeepSeek or Llama locally with LM Studio and connect them directly to n8n.
💡 What you’ll learn:
🤖 Install and run local LLMs with LM Studio
📥 Download DeepSeek or Llama models, totally free
🔌 Connect n8n’s Chat Model node to your local LLM (quick sketch below)
🧪 Test everything with live prompts — OpenAI-free
That means:
✅ No API keys
✅ No cloud costs
✅ And 100% offline, full control over your AI workflows
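Quick peek at the connection itself before you hit play: LM Studio’s local server exposes an OpenAI-compatible API (by default at http://localhost:1234/v1), so in n8n you typically point the OpenAI Chat Model credentials at that base URL instead of api.openai.com. Here’s a rough TypeScript sketch of what that request looks like under the hood; the model name, port, and placeholder API key are assumptions you’d swap for whatever you’ve loaded and configured in LM Studio.

```typescript
// Minimal sketch of the call an OpenAI-compatible client (like n8n's Chat Model node)
// makes against LM Studio's local server. Assumes the server is running on the
// default port (1234) and a model is already loaded in LM Studio.

const LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"; // LM Studio's default endpoint

async function askLocalModel(prompt: string): Promise<string> {
  const response = await fetch(LM_STUDIO_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // LM Studio doesn't require a real key; any placeholder keeps OpenAI-style clients happy
      Authorization: "Bearer lm-studio",
    },
    body: JSON.stringify({
      model: "deepseek-r1-distill-llama-8b", // example name only; use the model you actually loaded
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
    }),
  });

  if (!response.ok) {
    throw new Error(`LM Studio returned ${response.status}: ${await response.text()}`);
  }

  const data = await response.json();
  // OpenAI-compatible response shape: the reply lives in choices[0].message.content
  return data.choices[0].message.content;
}

// Example: the kind of live prompt we test in the video
askLocalModel("Summarize why local LLMs save on API costs.").then(console.log);
```

In the video we do the same thing with zero code, just by dropping the local URL into n8n’s credential settings, but it helps to know this is all that’s happening behind the node.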
👇 Got questions or want to share your setup? Drop it in the comments!