Local LLM
🚀 Running DeepSeek R1 Distill Qwen 7B 8-bit (mlx-community) on my M1 Mac with 16 GB of memory and getting about 23 tokens per second! Not bad for a 2-year-old Mac. No internet, no data-leak risk, total privacy. Plus there's no limit on the output, which is great. I'm planning to switch to local code generation so I can have unlimited generations. Anyone still paying for tokens?
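For anyone who wants to try this setup, here's a rough sketch of how you'd run an mlx-community quantized model locally with the `mlx-lm` package. The exact model repo name below is my assumption of the 8-bit distill build; check the mlx-community page on Hugging Face for the current name.

```shell
# Install Apple's MLX LM tooling (Apple Silicon only)
pip install mlx-lm

# Generate text locally; the model is downloaded once and cached.
# Repo name is assumed -- verify it on huggingface.co/mlx-community.
mlx_lm.generate \
  --model mlx-community/DeepSeek-R1-Distill-Qwen-7B-8bit \
  --prompt "Explain why local inference protects privacy." \
  --max-tokens 256
```

The CLI prints a tokens-per-second figure after generation, which is where a throughput number like the one above comes from.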
#DeepSeek #LocalAI #PrivacyFirst #UnlimitedAI #M1Performance
Imtiaz Hasan