NUC for running open-source LLMs, Docker 24/7
**Hi everyone,**

I recently purchased the **GMKtec NUCBox K9** (link with full specs below) with the goal of running **70B open-source LLMs** locally. For around **$1,200**, this mini PC packs a punch while drawing only about **56W**, making it a good candidate for a dedicated, always-on AI/LLM assistant.

My plan is to use it as a **24/7 personal assistant** for tasks like:

- **News aggregation** and summarization.
- **Vectorizing and updating** all my personal data into **Qdrant** for semantic search (rough sketch at the end of this post).
- Running **Hyper-V**, **Docker** (for n8n workflows, Ollama, Open-WebUI, and TTS models), and other AI tools (compose sketch at the end of this post).

One thing I didn't fully realize is that the **70B model benchmarks** I saw weren't run with **Ollama** or **LM Studio**. However, there's a **driver for the Intel GPU**, so I'm hopeful it'll perform reasonably well. (For reference, a 70B model at 4-bit quantization is roughly 40 GB on disk and in memory, so RAM capacity and memory bandwidth will likely be the limiting factors.) I'll keep experimenting and see how it goes.

**Long-term goal:** I want to build a **"Her"-like AI assistant** that I can talk to (with **TTS** support) and that has effectively **infinite memory**, remembering all our interactions and learning from them. This NUC is my first step toward that dream setup.

**Questions for the community:**

- Does anyone have a **similar setup** or experience running large models on mini PCs?
- Any tips or feedback on optimizing performance for this kind of workload?
- Curious to hear about your approaches and ideas!

Here's the link to the NUC I bought: [GMKtec NUCBox K9](https://www.gmktec.com/products/intel-ultra-5-125h-mini-pc-nucbox-k9?srsltid=AfmBOoqh67RvSXhq8XTR6d7gv0Jh16x3OI8kzTyXwy7671PdEF5iLTXm&variant=2c517a3e-15dd-4cfc-a862-41dc0a7da684)

Looking forward to your thoughts and suggestions!

J
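
For anyone sketching a similar Docker stack, here's roughly the compose file I have in mind. It's a minimal sketch, not a final config: service names, ports, and volume names are just my own defaults, and it runs Ollama on CPU (Intel Arc iGPU acceleration inside Docker needs extra setup, e.g. an IPEX-LLM build of Ollama plus `/dev/dri` passthrough, which I've left out):

```yaml
# Minimal 24/7 stack sketch -- names/ports/volumes are my own choices.
# Ollama runs on CPU here; Intel iGPU acceleration needs extra setup.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"
    volumes:
      - webui_data:/app/backend/data
    depends_on:
      - ollama
    restart: unless-stopped

  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
    volumes:
      - qdrant_data:/qdrant/storage
    restart: unless-stopped

  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n
    restart: unless-stopped

volumes:
  ollama_data:
  webui_data:
  qdrant_data:
  n8n_data:
```

The `restart: unless-stopped` lines are what make this viable as a 24/7 setup; each service comes back up on its own after a reboot or crash.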
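
And here's a minimal sketch of the Qdrant ingestion piece, assuming the stack above is running and `nomic-embed-text` has been pulled in Ollama. The collection name, embedding model, and sample documents are assumptions for illustration, not a final design:

```python
import requests
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

OLLAMA_URL = "http://localhost:11434"
COLLECTION = "personal_notes"  # hypothetical collection name


def embed(text: str) -> list[float]:
    """Embed text via Ollama; assumes `ollama pull nomic-embed-text` was run."""
    r = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
    )
    r.raise_for_status()
    return r.json()["embedding"]


client = QdrantClient(url="http://localhost:6333")

# nomic-embed-text produces 768-dim vectors; cosine distance for semantic search.
if not client.collection_exists(COLLECTION):
    client.create_collection(
        collection_name=COLLECTION,
        vectors_config=VectorParams(size=768, distance=Distance.COSINE),
    )

# Toy stand-ins for "all my personal data" -- in practice these would come
# from an n8n workflow watching notes, mail, feeds, etc.
docs = [
    "Meeting notes: discussed moving the home server to the NUC.",
    "Article summary: quantized 70B models trade quality for speed.",
]

client.upsert(
    collection_name=COLLECTION,
    points=[
        PointStruct(id=i, vector=embed(doc), payload={"text": doc})
        for i, doc in enumerate(docs)
    ],
)

# Semantic search: embed the query the same way and find nearest neighbours.
hits = client.search(
    collection_name=COLLECTION,
    query_vector=embed("what did I decide about the home server?"),
    limit=3,
)
for hit in hits:
    print(round(hit.score, 3), hit.payload["text"])
```

The "infinite memory" idea would basically be this loop running continuously: every interaction gets embedded and upserted, and the assistant retrieves the nearest matches as context before answering.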