15d (edited) • Hardware
New toy, new possibilities
A few days ago, I welcomed my new toy: an Aoostar WTR Max 8845HS. I must say, the hardware layout of this PC/NAS is impressive. It’s an 11-bay box with 5x NVMe slots and four Intel NICs (2x 10Gb SFP+ on an Intel 710-series controller and 2x 2.5GbE I226-V). There’s no PCIe slot, but there is... OCuLink. I managed to get 80GB of ECC memory through the Aoostar site. Unfortunately, 96GB wasn't available, so 64GB runs in dual channel while the remaining 16GB does not. Still, I figured 80GB is better than 64GB. I already have a Lenovo P520 running Proxmox, but since I have more time on my hands now that I’m retired, I thought this would be a fun project.
My plan: install Proxmox and run PBS (Proxmox Backup Server) and TrueNAS Scale as VMs, using PCI passthrough for the SATA controller and the remaining 4 NVMe drives for Proxmox or as a ZFS cache for TrueNAS. The possibilities with this box are endless; I’ll have to give it some more thought.
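As a sketch of the passthrough part of that plan: assuming IOMMU is already enabled on the host, the SATA controller can be handed to the TrueNAS VM with `qm set`. The PCI address and VMID below are placeholders, not my actual ones.

```shell
# Locate the SATA controller's PCI address (placeholder: 0000:02:00.0)
lspci -nn | grep -i sata

# Pass the whole controller through to the TrueNAS VM (placeholder VMID 100)
qm set 100 --hostpci0 0000:02:00.0
```

The same `--hostpciN` mechanism works for individual NVMe drives if I end up giving some of them to TrueNAS as cache devices.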
Anyway, after securing my data from my Synology DS923+ and moving it to my TrueNAS Scale zpool, I wanted to take it a step further: why not link the two Proxmox servers in a cluster?
However, a cluster with a 10-year-old Xeon and a relatively new AMD CPU seemed unwise for running VMs across both hosts. I’d have to set every VM’s CPU type to x86-64-v2-AES to be safe, and live migration would be off the table anyway (cold migration would still work).
"By forcing a modern AMD CPU to behave like an old Xeon (via v2-AES), you lose access to modern accelerators like AVX-512 and the other performance enhancements the new chip offers."
So, I’m putting that idea on ice for now. Since my Aoostar "passes through" all drives to the TrueNAS Scale server, I decided it would be wiser to store the PBS backups on a local NVMe rather than on the TrueNAS zpool. The thought of something happening to TrueNAS or its pool gave me the jitters. I know you can import a zpool into a new TrueNAS server without an explicit export—perhaps losing only the last few seconds of data—but still.
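For the record, that recovery path looks roughly like this; it's a sketch, and "tank" is a placeholder pool name, not my actual pool.

```shell
# On a fresh TrueNAS install, list pools visible on the attached disks
zpool import

# Force-import the pool despite the missing explicit export from the old box
zpool import -f tank
```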
To avoid cluttering my desk with hardware, I decided to also run PBS on my existing Lenovo P520 Proxmox server. Not to back up from there, but to use its datastore to sync backups from the Aoostar’s NVMe drive to a directory on my zpool there. You might think I’m crazy, but my most critical data is also stored on an external hard drive and secured in the cloud. Am I paranoid?
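On the P520 side, that pull setup boils down to registering the Aoostar's PBS as a remote and creating a sync job. This is a rough sketch from memory of the `proxmox-backup-manager` CLI; the names, IP, and datastores (aoostar, nvme-store, p520store) are placeholders, so check the PBS docs for the exact options before copying it.

```shell
# Register the Aoostar's PBS instance as a remote (placeholder host/auth)
proxmox-backup-manager remote create aoostar \
  --host 192.168.1.50 \
  --auth-id sync@pbs \
  --fingerprint '<remote cert fingerprint>'

# Pull backups from the Aoostar's NVMe datastore into the local one, daily
proxmox-backup-manager sync-job create aoostar-pull \
  --remote aoostar \
  --remote-store nvme-store \
  --store p520store \
  --schedule daily
```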
Tomorrow, my DS923+ is going to a new owner. After 14 years of Synology, I’m saying goodbye. DSM is great, but the hardware is sh*t. Even though they updated their hard drive policy, it still doesn't apply to NVMes out of the box (I know workarounds exist). In my opinion, you should give the buyer a free choice instead of forcing them to buy Synology-branded products.
For now, I’ll leave everything running as is to see if it remains stable—no doubt a new challenge will bubble up soon. First, I need to order some longer DAC cables.
After further research, I found out otherwise. Running these two commands on each host shows the CPU flags and the microarchitecture levels the dynamic linker detects:
cat /proc/cpuinfo | grep flags | head -1
/lib64/ld-linux-x86-64.so.2 --help | grep supported
Both CPUs support the x86-64-v4 standard. This is great news, as it means they both support AVX-512.
Based on my flags, here is the overlap:
  • v3 support: Both feature avx2, bmi1, bmi2, fma, and movbe.
  • v4 support: Both feature avx512f, avx512dq, avx512bw, avx512vl, and avx512cd.
  • Security: Both feature aes, ssbd, ibrs, ibpb, and stibp.
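The AVX-512 part of that overlap can be double-checked with a small script. This is a hypothetical helper of my own; the flag list is the subset I compared above, not the full x86-64-v4 psABI spec.

```shell
# has_v4: check whether a CPU flags string covers the AVX-512 flags
# required for x86-64-v4 (subset compared above, not the full spec).
has_v4() {
  flags="$1"
  for f in avx512f avx512dq avx512bw avx512vl avx512cd; do
    case " $flags " in
      *" $f "*) ;;                       # flag present, keep checking
      *) echo "missing: $f"; return 1 ;; # first missing flag stops us
    esac
  done
  echo "x86-64-v4 AVX-512 subset: OK"
}

# On a live host you would feed it the real flags line, e.g.:
#   has_v4 "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"
has_v4 "fpu aes avx2 avx512f avx512dq avx512bw avx512vl avx512cd"
# → x86-64-v4 AVX-512 subset: OK
```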
For maximum Live Migration compatibility:
Set the VMs to x86-64-v4. "I guess I shouldn't always trust my first instinct."
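In practice that is one `qm set` per VM; the VMIDs below are placeholders for my two VMs.

```shell
# Apply the shared baseline CPU type so live migration stays possible
qm set 100 --cpu x86-64-v4
qm set 101 --cpu x86-64-v4
```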
Current results and conclusion:
By migrating the data from the Synology DS923+ (SHR 4x 16TB) to the Aoostar—using Proxmox as the host OS running a TrueNAS SCALE VM (32GB RAM)—the data transfer speeds have increased by 250–300 Mbps on the ZFS pool.
Wishing you all a Happy Easter, fellow home-labbers!
Ad de Jonge
Home Lab Explorers
skool.com/homelabexplorers