NUC for running open-source LLMs, Docker 24/7
**Hi everyone,**
I recently purchased the **GMKtec NUCBox K9** (specs below) with the goal of running **70B open-source LLMs** locally. At around **$1,200** with a **56W power envelope**, this mini PC packs a punch, making it a great choice for a dedicated AI/LLM assistant.
My plan is to use it as a **24/7 personal assistant** for tasks like:
- **News aggregation** and summarization.
- **Vectorizing and updating** all my personal data into **Qdrant** for semantic search.
- Running **Hyper-V**, **Docker** (for n8n workflows, Ollama, Open-WebUI, and TTS models), and other AI tools.
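For the Docker side of that stack, here's a minimal `docker-compose.yml` sketch. The image names and internal ports are the upstream defaults (Ollama on 11434, Open-WebUI on 8080, Qdrant on 6333, n8n on 5678); the service names, volume names, and host-port mappings are my own choices, and I haven't tuned this for the K9 specifically:

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama   # persist pulled models across restarts
    restart: unless-stopped         # keeps the 24/7 goal after reboots

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434  # reach Ollama over the compose network
    depends_on:
      - ollama
    restart: unless-stopped

  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
    volumes:
      - qdrant_data:/qdrant/storage # persist vectors for semantic search
    restart: unless-stopped

  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n    # persist workflows and credentials
    restart: unless-stopped

volumes:
  ollama_data:
  qdrant_data:
  n8n_data:
```

The `restart: unless-stopped` policy is what makes this viable as an always-on assistant: Docker brings everything back up after a power cycle without manual intervention.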
One thing I didn’t fully realize is that the **70B model benchmarks** I saw weren’t tested with **Ollama** or **LM Studio**. However, there’s a **driver for the Intel GPU**, so I’m hopeful it’ll perform well. I’ll keep experimenting and see how it goes.
**Long-term goal:** I want to create a **"Her"-like AI assistant** that I can talk to (with **TTS** support) and that has **infinite memory**, remembering all our interactions and learning from them. This NUC is my first step toward building that dream setup.
**Questions for the community:**
- Does anyone have a **similar setup** or experience with running large models on mini PCs?
- Any tips or feedback on optimizing performance for this kind of workload?
- Curious to hear about your approaches and ideas!
Here’s the link to the NUC I bought:
Looking forward to your thoughts and suggestions!
Jan Hoedt