Activity

[Contribution heatmap: Apr through Mar calendar view]

Memberships

Your First $5k Club w/ARLAN

22.9k members • Free

AI Writing Easy AF for Authors

535 members • Free

The AI Founder’s Vault+

83 members • $37/m

AI Video Academy

129 members • Free

AI Brand Basics Membership

237 members • $5/m

AI Automation Society

298.9k members • Free

Automation Masters

3.8k members • Free

Zero2Launch AI Automation

5.5k members • $1/m

AI Business Bootcamp

1.2k members • Free

1 contribution to Zero2Launch AI Automation
🚀 Video: Run AI Chat Models in n8n — 100% FREE & Local
Hey Zero2Launch crew 👋

If you've ever wanted to run AI chat models inside n8n without paying for OpenAI or burning through tokens, this one's for you. In this new step-by-step video, I'll show you how to host LLMs locally using LM Studio and connect them directly to n8n using open-source models like DeepSeek or Llama.

💡 What you'll learn:
🤖 Install and run local LLMs with LM Studio
📥 Download DeepSeek or LLaMA models, totally free
🔌 Connect n8n's Chat Model node to your local LLM
🧪 Test everything with live prompts, OpenAI-free

That means:
✅ No API keys
✅ No cloud costs
✅ 100% offline, full control over your AI workflows

👇 Got questions or want to share your setup? Drop it in the comments!
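The setup described above works because LM Studio's local server speaks an OpenAI-compatible chat completions API, so n8n's Chat Model node (or any HTTP client) can point at it with no API key. As a rough sketch of what that request looks like, assuming LM Studio's default base URL of http://localhost:1234/v1 and a placeholder model name (both may differ on your machine):

```python
# Minimal sketch: talk to a local LM Studio server over its
# OpenAI-compatible /chat/completions endpoint. No API key needed.
import json
import urllib.request

LM_STUDIO_BASE_URL = "http://localhost:1234/v1"  # LM Studio's default; adjust if changed


def build_chat_request(base_url, model, prompt):
    """Build the URL and JSON payload for an OpenAI-style chat completion."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    payload = {
        "model": model,  # e.g. a locally downloaded DeepSeek or Llama model
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload


def ask_local_llm(prompt, model="local-model"):
    """POST the prompt to the local server and return the assistant's reply text."""
    url, payload = build_chat_request(LM_STUDIO_BASE_URL, model, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

In n8n you wouldn't write this by hand; you'd give the Chat Model node the same base URL, which is also why "localhost" only works when n8n runs on the same machine as LM Studio.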
0 likes • Jun '25
I'm having this same problem with the container and the localhost connection, but I'm on Railway, not Docker. Does anyone know how to fix that? I've tried all kinds of things in there but haven't found what works yet. (I also tried installing Docker locally; that was a nightmare, and I couldn't get WSL to work no matter what.)
Kimberly Gordon
1
5 points to level up
@kimberly-gordon-2031
I love to live great stories and tell them too.

Active 1h ago
Joined Jun 6, 2025