Quick reminder for everyone building with AI automations: Most of the “big” models we use (ChatGPT, Claude, Gemini, etc.) are cloud-based and stop working the moment your connection drops or gets too weak to reliably hit the API.
That’s fine when you’re on solid fiber, but it can become a hard bottleneck if you’re traveling, working from client sites, or dealing with spotty Wi‑Fi.
If you want AI that still works when the internet doesn’t, look into local / offline AI runners like:

- Jan – desktop app that lets you run open‑source models locally and also connect to cloud models when you’re online.
- LM Studio – GUI for downloading and running local LLMs on your machine.
- Ollama – command‑line first, great for developers who want to script and chain local models.

These tools let you:

- Keep a “backup AI” that still works during outages or weak connections.
- Do privacy‑sensitive work fully on‑device, with no data leaving your laptop.
- Prototype and run agents/workflows that don’t depend 100% on APIs being reachable 24/7.
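To make the “backup AI” idea concrete, here’s a minimal Python sketch of a fallback router: prefer the cloud API when it’s reachable, otherwise fall back to a local runner. The endpoints are illustrative assumptions (`api.openai.com:443` for the cloud side, `localhost:11434` for Ollama’s default port), not a definitive implementation:

```python
# Sketch of a cloud-first / local-fallback backend picker.
# Hosts and ports below are assumptions for illustration:
# api.openai.com:443 stands in for any cloud API; 11434 is
# Ollama's default local port.
import socket


def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def pick_backend(
    cloud_check=lambda: reachable("api.openai.com", 443),
    local_check=lambda: reachable("localhost", 11434),
) -> str:
    """Decide where to route prompts: cloud if online, else local."""
    if cloud_check():
        return "cloud"
    if local_check():
        return "local"  # e.g. an Ollama model served on this machine
    return "none"
```

In a workflow, you’d call `pick_backend()` once per request (or on a timer) and dispatch to whichever client object matches; the checks are injectable so you can test the routing logic without real network access.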
Curious what everyone here is using as their offline / weak‑internet stack (models, runners, and workflows). What’s working best for you?