BREAKING: NVIDIA just dropped an open 30B model that beats GPT-OSS and Qwen3-30B, and runs 2.2-3.3× faster
Nemotron 3 Nano:
• Up to 1M-token context
• MoE: 31.6B total params, 3.6B active
• Best-in-class performance on SWE-Bench
• Open weights + training recipe + redistributable datasets
You can run the model locally with 24 GB of RAM.
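A rough sanity check on that 24 GB figure (back-of-envelope arithmetic, not from the announcement): at 4-bit quantization, the 31.6B total parameters take about 0.5 bytes each, and a ~20% allowance for KV cache and runtime buffers is an assumed overhead, not a published number.

```python
# Back-of-envelope memory estimate for a 31.6B-parameter MoE model.
# Assumptions (mine, not NVIDIA's): 4-bit quantized weights (0.5 bytes/param)
# plus ~20% overhead for KV cache, activations, and runtime buffers.

def estimate_memory_gb(total_params: float, bits_per_param: float,
                       overhead: float = 0.2) -> float:
    weight_bytes = total_params * bits_per_param / 8  # bytes for weights alone
    return weight_bytes * (1 + overhead) / 1e9        # total footprint in GB

mem = estimate_memory_gb(31.6e9, 4)
print(f"{mem:.1f} GB")  # ≈ 19.0 GB, comfortably under 24 GB
```

Note that only 3.6B parameters are active per token, which helps throughput, but all 31.6B weights still need to sit in memory.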