Activity

[Contribution activity heatmap, Oct–Sep]

Memberships

Home Lab Explorers

805 members • Free

AI Automation Society

144k members • Free

AI Income Blueprint

4.8k members • Paid

6 contributions to Home Lab Explorers
Anyone looking to get the Minisforum S1 Max mini PC with AMD Strix Halo??
This new mini PC looks to be rack mountable: [New Release] MS-S1 MAX – Ryzen™ AI Max+ 395 Mini Workstation | Minisforum. I created a forum post about this one here as well: Minisforum MS-S1 Max with 128 GB of RAM and AMD Strix Halo HX processor – Mini PCs – VHT Forum
2 likes • 2d
That looks slick. Nice that you can plug the power cord straight into it instead of dealing with an external brick, too. I designed and printed my own rack mounts for the MS-01/MS-A2 that let the face sit flush. Unfortunately, I didn't see any measurements on the product page that would let me update my design for it.
🔥 What is everyone learning? Let me know what I need to be looking at!
Hey everyone! I hope all is well. What are you learning? I love getting ideas from you all. Let me know what I need to look at next. Any projects or tools you have discovered recently?
1 like • Aug 17
I got my 3 Minisforum MS-A2s racked in a Rackmate T2 using custom 3D printed mounts. I also printed mounts to hold two switches: one for main networking, the other for a dedicated Ceph network. Next on my list is some cable management. The topic really tickling my brain, though, is consolidating the services I am running in Docker. I have n8n and Nextcloud running, and I'm thinking about adding Gitea or some other Git version control system. I have been trying to decide whether it's better to leave them each on their own VM or to combine them into a single Docker host with multiple containers. Ultimately, it feels like it comes down to how I want to administer my lab.
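A minimal docker-compose sketch of the single-Docker-host option being weighed here; the image names are the stock upstream ones, while the ports, volume names, and the choice of services are placeholder assumptions (a real Nextcloud deployment would also want a database container):

```yaml
# docker-compose.yml — rough sketch; ports and volumes are placeholders to adapt
services:
  n8n:
    image: n8nio/n8n               # workflow automation
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n
    restart: unless-stopped

  nextcloud:
    image: nextcloud               # file sync/share; add a DB service for real use
    ports:
      - "8080:80"
    volumes:
      - nextcloud_data:/var/www/html
    restart: unless-stopped

  gitea:
    image: gitea/gitea             # self-hosted Git service
    ports:
      - "3000:3000"                # web UI
      - "2222:22"                  # SSH for git pushes
    volumes:
      - gitea_data:/data
    restart: unless-stopped

volumes:
  n8n_data:
  nextcloud_data:
  gitea_data:
```

The trade-off stays the same either way: one host means one VM to patch and back up, but every service then shares that VM's failure domain.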
Proxmox Cluster build, questions
Hi all, I am working on building a Proxmox cluster for my homelab. The cluster is made up of 3 MS-A2s, each with 64 GB RAM and 1 TB + 4 TB NVMe SSDs. Proxmox is installed on the 1 TB drive on all nodes.

I am curious what other people's opinions would be on setting up Ceph vs non-Ceph for the cluster. Ceph would use the 4 TB drives. I would like to use the USB4 connections for the Ceph network (linked between the 3 nodes), similar to what Jeff has done here: https://www.youtube.com/watch?v=TAWZawNdw1k

I don't have anything currently running that would require the high availability that Ceph provides. Most of what would be running on the homelab would be a couple of small servers and a large handful of Windows systems for playing with AD and Windows administration. It would be nice to be able to move VMs around mostly seamlessly, though. The other thought I had about using standard clustering with HA is that I would have a lot more storage space on each node for hosting more VMs or storing various amounts of data.

From my (basic) research, it also looks like using Ceph would allow me to take snapshots, whereas using the 4 TB drives as local storage would not. Snapshots would be cool to have as well. As this is really my first journey into learning about clustering, thoughts or ideas would be appreciated.
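If the Ceph route wins out, the Proxmox side is fairly compact. A rough sketch of the steps, assuming the USB4 mesh between the three nodes is already up as its own subnet; the 10.10.10.0/24 network and the NVMe device name are placeholder assumptions:

```bash
# on each node: install the Ceph packages (also available via the GUI wizard)
pveceph install

# on the first node: initialise Ceph, pointing the cluster network at the USB4 mesh
# (10.10.10.0/24 stands in for whatever subnet the USB4 links actually use)
pveceph init --network 10.10.10.0/24

# on every node: create a monitor and a manager
pveceph mon create
pveceph mgr create

# on every node: turn the 4 TB NVMe into an OSD (device name will differ per node)
pveceph osd create /dev/nvme1n1

# once all OSDs are in: create a replicated pool for VM disks
pveceph pool create vmpool
```

With the pool added as RBD storage, VM disks on it get snapshots and can migrate between all three nodes.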
MS-01 m.2 SSDs
For those using the MS-01, would you be willing to share the largest M.2 SSD you are running? I want to get a couple of SSDs to put in mine, but I have seen mixed reports: people are running different sizes, and there may be a height constraint as well. Thanks for any input you may be able to provide.
3 likes • May 21
@John Lohman Thanks! What size are you using? I was hoping to use something in the 4-8 TB range.
Hardware poll
What hardware do you use in your homelab
Poll
17 members have voted
2 likes • May 21
Unfortunately, I don't have space for a full-size rack, which has led me down the rabbit hole of 10-inch racks. Now to fire up the 3D printers.
Zach Lummis
Level 2 • 5 points to level up
@zach-lummis-7975
Looking to learn about AI and blogging to share my hobbies and hopefully make a little money too

Active 1d ago
Joined May 20, 2025